OTOH ... we had already started using the first Linux system at Amazon by that time, and a few years later, when a 25 MHz 486 running Red Hat became the first computer I actually owned (I resisted for that long!), the idea of an X terminal seemed a bit quaint and limited.
At the time, it was typical to assume that each sysadmin could look after a dozen machines on average, maybe twenty at best. So if each of those dozen machines could support 10-20 users on X terminals, a single sysadmin could cover up to roughly 250 users (a dozen machines x ~20 users each). That was a big cost saving vs. having a dedicated workstation on every desk.
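Back-of-the-envelope, as a sketch using only the figures above (the dozen-machines-per-admin and 10-20-users-per-host numbers are the stated assumptions, not measurements):

    # users covered per sysadmin, under the assumptions above
    hosts_per_admin = 12        # "a dozen machines on average"
    users_per_host = (10, 20)   # X terminal users per host
    for u in users_per_host:
        print(u, "users/host ->", hosts_per_admin * u, "users per sysadmin")
    # 10 users/host -> 120 users per sysadmin
    # 20 users/host -> 240 users per sysadmin (~250 at best)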
But in the end, DOS/Windows PCs had even bigger cost savings, because most users could be expected to do minor admin tasks themselves, supported by cheap IT helpdesk staff rather than expensive Unix greybeards.
Two of the universities in town had labs of them for students: all booted remotely, all the storage on a bigger Sun down in the server room, ugly coaxial Ethernet everywhere, and those funky blue/silver mouse pads and optical mice.
My boss at the time was pretty dark on Sun, because they sold her a lab full of Sun 3 workstations without telling her the SPARCstations would be released shortly afterwards.
If you think about a lab full of computers doing relatively simple Unix work, the money saved by having just a single shared drive (all other things being equal, which of course they aren't) is not trivial.
To be clear: Plan 9 is not limited to terminal-server setups. It can function just fine as a standalone OS.
> As long as you have a CPU with an MMU and some RAM
Those weren't cheap at the time. The presentation on the Gnot (the early Plan 9 terminal) states that they were cheap enough that a user could have one at home and one at work. It also states that some things, like the text editor, could run locally, while compute-intensive tasks like compiling could be exported to big, expensive CPU servers. These machines had a few megs of RAM, a 68000 CPU, and monochrome graphics. The CPU servers were Sun, DEC, SGI, etc. machines that users could certainly not afford one of, let alone two.
Proving this point, there are VNC client implementations that can run on MS-DOS machines.
It's sort of like the anecdote about an early sysadmin who traced a problem with the new department laser printer locking up for hours to one engineer, who had to be told to knock it off. He explained that he wasn't printing anything, but the printer had, by far, the most powerful CPU in the building, so he had ported all his simulation programs to PostScript and was running them on the printer.
As a one-time uni sysadmin back in the day: in our EE labs we had students running Matlab on a Sun E3500, with the display going up on diskless ~10-year-old Sun SPARCstation 5s we had lying around (originally from the early 1990s).
They really didn't have that. Unix workstations running X largely had a graphics stack that was almost entirely software, with little or no hardware acceleration. What made them workstations compared to PCs was the large "high" resolution monitors. The DEC lab at my university consisted of DECstation 3100s (16 MHz MIPS R2000 with 16 MB RAM and an 8-bit bitmapped display with no hardware acceleration). The engineering department had labs with Sun and RS/6000 machines.
Commodity PCs were 386s with 4-8 MB RAM, monitors that would do 640x480 or 800x600, and video cards that would do 8 or 15/16 bpp. A great day was when someone put a Linux kernel and XFree86 on a 1.2 MB floppy that could use XDMCP to connect to the DECs or Suns, turning any PC in the PC labs into an X terminal.