It is not actually supposed to block. Pipes block when they are full, but there's not enough data here to fill a pipe buffer. When pipes are broken, SIGPIPE is sent to the writer. Pipes do not block just because nobody is reading from the read end--as long as the read end is still open somewhere, a process could read from it, and that is enough.
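For instance (a bash sketch; the buffer is around 64 KiB on typical Linux systems, but the exact size varies), a small write returns immediately even though the reader never touches stdin:

    { echo red; echo "write finished" >&2; } | sleep 2
    # "write finished" shows up on stderr right away: the few bytes from
    # echo fit comfortably in the kernel's pipe buffer, so the write does
    # not wait for sleep (which never reads) to drain the pipe.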
When you see "blue", what happened is the left-hand side of the pipe got killed because the right-hand side already finished before "echo red", which closed the read end completely, and then "echo red" got killed with SIGPIPE. That takes out "echo green" with it, because "echo" is a built-in, and so "echo" is not a subprocess. If you use "/bin/echo red" instead, then "green" will always be printed (because SIGPIPE is going to /bin/echo, and not the entire shell).
In other circumstances, the "echo blue" will never read stdin, but the kernel doesn't know or care. As far as the kernel is concerned, "echo blue" could possibly read from stdin, as long as stdin is open.
But indeed the author wasn't aware that readers and writers of the pipe aren't fully synchronized, because the buffer in between allows for some concurrency. My writeup wasn't very explicit about that (at least not about the fact that writing to the pipe can block when the pipe is full), but I think it's technically accurate and hope it can clear up some confusion -- a lot of readers probably do not understand well how the shell works.
Yes, the pipe runs two subcommands in parallel, but that is not why the blog post is interesting (or its author surprised). It's because 'echo red' is supposed to block, thus introducing synchronization between the two branches of the pipe, yet it doesn't!
And I must confess, when reading the command my first thought was: "Ok, so that first echo will die with a SIGPIPE and stderr will be all about the broken pipe." And I was wrong, because of that small buffer.
I wonder what other unices do -- do any of them allow a write to a broken pipe to complete successfully?
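One rough way to poke at that on a given system (a bash sketch; ignoring SIGPIPE turns the kill into a visible write error):

    (trap '' PIPE; sleep 1; echo red) | true
    # "true" exits immediately and closes the read end. A second later the
    # builtin echo writes into the now-broken pipe; with SIGPIPE ignored,
    # the write fails with EPIPE and the shell reports something like
    # "echo: write error: Broken pipe". If the write were allowed to
    # complete, there would be no error at all.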