- Not wrong, but Markdown with English may be the most used DSL, second only to the language itself. Volume over quality.
- keyword: "...talks..."
- Nokia also makes complex backbone carrier-grade network switches based on the Intellectual Property portfolio they acquired from Nortel.
- Much respect for the artist 50 Cent - converted his rap music success into respectable business ventures (Vitamin Water, others). So he is worth much more now!
- I've vibe-coded a website about vibe coding websites. I used GPT-5 and it inserted an easter egg that was found by a human front-end dev, to my amusement. Easter eggs must be in-distribution!
(No I am not sharing the link as I was downvoted for it before - search for it. Hint: built with vibe)
- It was my first engineering job, calibrating those inductive loops and circuit boards on I-93, just north of Boston's downtown area. Here is the photo from 2006. https://postimg.cc/zbz5JQC0
PEEK controller, 56K modem, Verizon telco lines, rodents - all included in one cabinet
- Here is a video showing (approximately) what it looks like.
We built this system at the UofT WIRLab back in 2018-19 https://youtu.be/lTOUBUhC0Cg
And link to paper https://arxiv.org/pdf/2001.05842
- When I think about serving large-scale LLM inference (like ChatGPT), I see it a lot like high-speed web serving — there are layers to it, much like in the OSI model.
1. Physical/Hardware Layer: At the very bottom is the GPU silicon and its associated high-bandwidth VRAM. The model weights are partitioned, compiled, and placed so that each GPU chip and its VRAM are used to the fullest (ideally). This is where low-level kernel optimizations, fused operations, and memory access patterns matter, so that everything above the chip level plays nicely with the silicon.
2. Intra-Node Coordination Layer: Inside a single server, multiple GPUs are connected via NVLink (or an equivalent high-speed interconnect). Here you use tensor parallelism (splitting matrices across GPUs), pipeline parallelism (splitting model layers across GPUs), or expert parallelism (only activating parts of the model per request) to make the model fit and run faster. The key is minimizing cross-GPU communication latency while keeping all GPUs running at full load; many low-level software tricks live here.
3. Inter-Node Coordination Layer: When the model spans multiple servers, high-speed networking like InfiniBand comes into play. Techniques like data parallelism (replicating the model and splitting requests), hybrid parallelism (mixing tensor/pipeline/data/expert parallelism), and careful orchestration of collectives (all-reduce, all-to-all) keep throughput high while hiding model communication (slow) behind model computation (fast).
4. Request Processing Layer: Above the hardware/multi-GPU layers sits the serving logic: batching incoming prompts together and molding them into shapes that max out compute, offloading less urgent work to background processes, caching key/value attention states (the KV cache) to avoid recomputing past tokens, and using paged caches to handle variable-length sequences (see the toy sketch after this list).
5. User-Facing Serving Layer: At the top are optimizations users see indirectly — multi-layer caching for common or repeated queries, fast serialization protocols like gRPC or WebSockets for minimal overhead, and geo-distributed load balancing to route users to the lowest-latency cluster.
Like the OSI model, each “layer” solves its own set of problems but works together to make the whole system scale. That’s how you get from “this model barely runs on a single high-end GPU” to “this service handles hundreds of millions of users per week with low latency.”
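To make layer 4 a bit more concrete, here is a toy Python sketch of continuous batching with a paged KV cache. It is a scheduling simulation only (no model, no GPU), and every name in it (Request, PagedKVCache, serve_step, PAGE_SIZE) is hypothetical for illustration, not any real serving framework's API.

```python
# Toy sketch of the request-processing layer: continuous batching plus a paged
# KV cache. Requests borrow fixed-size pages from a shared pool instead of
# reserving one big contiguous buffer up front.
from dataclasses import dataclass, field

PAGE_SIZE = 16  # tokens per KV-cache page


@dataclass
class Request:
    prompt_len: int
    max_new_tokens: int
    generated: int = 0
    pages: list = field(default_factory=list)  # indices of owned KV-cache pages

    @property
    def total_tokens(self) -> int:
        return self.prompt_len + self.generated

    @property
    def done(self) -> bool:
        return self.generated >= self.max_new_tokens


class PagedKVCache:
    """Fixed pool of pages shared by all in-flight requests."""

    def __init__(self, num_pages: int):
        self.free_pages = list(range(num_pages))

    def ensure_capacity(self, req: Request) -> bool:
        needed = -(-req.total_tokens // PAGE_SIZE)  # ceiling division
        while len(req.pages) < needed:
            if not self.free_pages:
                return False  # out of pages: caller must wait or preempt
            req.pages.append(self.free_pages.pop())
        return True

    def release(self, req: Request):
        self.free_pages.extend(req.pages)
        req.pages.clear()


def serve_step(running: list, waiting: list, cache: PagedKVCache):
    """One scheduler iteration: admit new requests while pages allow, then
    'decode' one token for every running request in a single batched pass."""
    while waiting and cache.ensure_capacity(waiting[0]):
        running.append(waiting.pop(0))
    for req in running:
        req.generated += 1          # stand-in for the real batched model forward
        cache.ensure_capacity(req)  # KV cache grows page by page as tokens accrue
    for req in [r for r in running if r.done]:
        cache.release(req)
        running.remove(req)


if __name__ == "__main__":
    cache = PagedKVCache(num_pages=8)
    waiting = [Request(prompt_len=30, max_new_tokens=5),
               Request(prompt_len=10, max_new_tokens=20)]
    running, steps = [], 0
    while waiting or running:
        serve_step(running, waiting, cache)
        steps += 1
    print(f"served all requests in {steps} scheduler steps")
```

The point of the sketch is the shape of the loop: new requests join the batch as soon as memory allows rather than waiting for the whole batch to finish, and KV memory is allocated in small pages so variable-length sequences don't fragment the cache.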
- I vibe coded a site about vibe 2 code projects. https://builtwithvibe.com/
- Hi Author - thank you very much for the clear and relatively easy-to-understand MPK overview. Could you please also comment on the similarity of your project to Hidet? https://pytorch.org/blog/introducing-hidet/
Thank you!
- This story is dear to my heart. Let me tell you why - this is the tale of how my wife of 15 years, bless her heart, an occasionally unstable genius, proposed a startlingly effective method for eradicating these invasive pythons.
She slammed her coffee cup down one morning with the conviction of an Old Testament prophet and declared: “Exploding rabbits.”
“Excuse me?” I said, wiping marmalade off my chin.
“Exploding. Rabbits. Stuff ‘em with a quarter pound of C4, or maybe just enough Tannerite to surprise the neighbors but not call down the FAA, and set them loose in the Everglades. Pythons love rabbits. Boom. Problem solved. You’re welcome, America.”
Now I’ve heard my share of madcap schemes. Once she tried to compost credit card offers. But this time she looked me square in the eye with the righteous glow of a woman who had just solved two ecological crises and accidentally founded a billion-dollar startup in the process.
“We’ll call it Hare Trigger™,” she added, deadpan. “It’s got product-market fit and explosive growth potential.”
She even sketched out a logo involving a jackrabbit with aviator goggles and a plunger.
I asked if this might attract some sort of federal attention.
“Good,” she said. “That’s called buzz. Besides, the pythons started it.”
And just like that, I found myself wondering just how true it is that behind every successful man stands a woman of even greater genius. Waiting for Elon to offer a Series A.
- GPU sharing is a concern when handling sensitive data. A better approach is to increase the utilization of the GPU chip's internals via a variety of low-level (CUDA and below) optimizations.
- Optimizing AI performance is like peeling an onion — every time you remove one bottleneck, another layer appears underneath. What looks like a compute problem turns out to be a memory bottleneck, which then turns out to be a scheduling issue, which reveals a parallelism mismatch… and so on.
It’s a process of continuous uncovering, and unless you have visibility across the whole stack — from kernel to cluster — you’ll spend all your time slicing through surface layers with lots of tears being shed.
Fortunately, there are software automation solutions to this.
- YYC is the airport code for Calgary, Canada. Why is it on the US .gov site? Is there something I missed?
- Can’t have an apologetic Zoom call when Zoom is down…
- Impressive!
- Model training observations from both Llama 3 and 4 papers:
Meta’s Llama 3 was trained on ~16K H100s, achieving ~380–430 TFLOPS per GPU in BF16 precision, which translates to a solid 38–43% hardware efficiency against the H100's ~989 theoretical dense BF16 TFLOPS [Meta, Llama 3].
For Llama 4 training, Meta doubled the compute, using ~32K H100s, and switched to FP8 precision. Despite FP8's higher theoretical peak, observed efficiency dropped to about 19.7%, with GPUs delivering ~390 TFLOPS out of a theoretical 1,979 FP8 TFLOPS [Meta, Llama 4].
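For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope in Python. The peak figures are the commonly cited dense (non-sparse) H100 SXM specs, and the efficiency helper is just for illustration:

```python
# Hardware efficiency (MFU-style) from the numbers cited above, using the
# commonly cited dense (non-sparse) H100 SXM peak throughputs.
H100_PEAK_TFLOPS = {"bf16": 989.0, "fp8": 1979.0}

def efficiency(observed_tflops: float, precision: str) -> float:
    """Observed throughput as a fraction of the theoretical peak for that precision."""
    return observed_tflops / H100_PEAK_TFLOPS[precision]

# Llama 3: ~380-430 BF16 TFLOPS per GPU
print(f"Llama 3: {efficiency(380, 'bf16'):.0%} to {efficiency(430, 'bf16'):.0%}")  # ~38% to ~43%
# Llama 4: ~390 FP8 TFLOPS per GPU
print(f"Llama 4: {efficiency(390, 'fp8'):.1%}")  # ~19.7%
```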
I am not one to critique; rather, this is a recognition of the enormous complexity of operating GPUs at this scale. Training massive models across tens of thousands of GPUs stretches today’s AI infrastructure to its limit.
Beyond accelerating inference workloads, advanced GPU optimizations can also be integrated into training and fine-tuning pipelines. From kernel-level optimization techniques (more than 90 of them) to improving memory access efficiency and scaling up to cluster-wide resource coordination, efficiency can be maximized, though it takes some complex software.
References: [Meta, Llama 3] https://ai.meta.com/research/publications/the-llama-3-herd-o... [Meta, Llama 4] https://ai.meta.com/blog/llama-4-multimodal-intelligence/
- Is this how TSMC relocates to the US?
- Works on my iPhone 13.