They're not confused at all, this is just a (correct) description of TPU v1. The repository is 8 years old.
The abstract of Google's 2017 paper adds:
This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU), deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory.
The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency.
The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters.
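As a sanity check on the quoted peak throughput: the 65,536 MACs form a 256x256 systolic array, and at the 700 MHz clock the same paper reports, counting the multiply and the add as two ops per MAC per cycle lands almost exactly on 92 TOPS:

    # Back-of-the-envelope check of the ~92 TOPS figure, assuming the
    # 256x256 systolic array and 700 MHz clock reported in the 2017 paper.
    macs = 256 * 256          # 65,536 8-bit MAC units
    ops_per_mac = 2           # one multiply + one add per cycle
    clock_hz = 700e6
    peak_tops = macs * ops_per_mac * clock_hz / 1e12
    print(f"{peak_tops:.1f} TOPS")   # ~91.8, i.e. the quoted ~92 TOPS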
what's the memory bandwidth? IIRC that is the limiting factor in LLM hardware today
Slide 21, https://files.futurememorystorage.com/proceedings/2024/20240...
             TPU v3      TPU v4
  HBM2 BW    900 GB/s    1200 GB/s
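To see why that matters for LLMs: during batch-1 decoding, essentially every weight has to be streamed from HBM once per generated token, so bandwidth divided by model size gives a rough upper bound on tokens/second. The model size and precision below are hypothetical, just to show the arithmetic:

    # Rough roofline-style bound for batch-1 LLM decoding: each weight byte
    # is read from HBM once per token, so bandwidth caps tokens/second.
    # Model size and dtype are illustrative assumptions, not measured numbers.
    hbm_bw_gb_s = 1200      # TPU v4 HBM2 bandwidth from the slide above
    model_params = 70e9     # hypothetical 70B-parameter model
    bytes_per_param = 2     # bf16 weights
    model_gb = model_params * bytes_per_param / 1e9   # 140 GB of weights
    print(f"~{hbm_bw_gb_s / model_gb:.0f} tokens/s per chip, best case")  # ~9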
hence the out-of-date part of my comment
Recent (2024) description by Google, https://cloud.google.com/blog/transform/ai-specialized-chips...
TPUs were purpose-built specifically for AI. TPUs are an application-specific integrated circuit (ASIC), a chip designed for a single, specific purpose: running the unique matrix and vector-based mathematics that’s needed for building and running AI models..
TPU v2.. built an interconnected machine — our first TPU pod — with 256 TPU chips connected with a very high-bandwidth, custom interconnect.. liquid cooling was added with TPU v3 to help address efficiency needs, while TPU v4 introduced optical circuit switches to allow the chips in pods to communicate even faster and more reliably.
TPUs also underpin Google DeepMind’s cutting-edge foundation models, including the newly unveiled Gemini 1.5 Flash, Imagen 3, and Gemma 2, propelling advancements in AI.. Forget about a single chip, or a single TPU pod — we’re building a global network of data centers filled with TPUs.
How would you describe it instead? Curious and learning
Google does everything, both inference and training, on their TPUs.
Inference is easier, since the person deploying a model knows the architecture ahead of time and therefore can write custom code for their particular model.
When training, you want to be as flexible as possible: the framework and hardware should not impose any particular architecture. That means supporting lots of kernels and combinations of kernels. Miss one and you're out.
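To make that concrete, here's a minimal JAX sketch (illustrative only, not Google's internal stack): the same model function gets compiled by XLA for both a plain forward pass (inference) and its gradient (a training step), so a new architecture doesn't need a hand-written kernel for every layer combination. The tiny MLP, shapes, and loss are made up for the example.

    # Minimal JAX sketch: one model function, compiled by XLA for both
    # inference and training. The MLP and shapes are arbitrary examples.
    import jax
    import jax.numpy as jnp

    def model(params, x):
        w1, w2 = params
        h = jax.nn.relu(x @ w1)       # XLA fuses matmul + ReLU for the TPU
        return h @ w2

    def loss(params, x, y):
        return jnp.mean((model(params, x) - y) ** 2)

    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    params = (jax.random.normal(k1, (16, 32)), jax.random.normal(k2, (32, 1)))
    x, y = jnp.ones((8, 16)), jnp.ones((8, 1))

    infer = jax.jit(model)                # compiled inference pass
    train_step = jax.jit(jax.grad(loss))  # compiled gradient for training
    print(infer(params, x).shape)                        # (8, 1)
    print([g.shape for g in train_step(params, x, y)])   # [(16, 32), (32, 1)]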
this seems hopelessly out of date/confused