ch_123
X was originally created on, and ran on, a graphics terminal: the DEC VAXstation 100. The VS100 was quite different from the later X thin-client terminals: it required an adapter card installed in a host system, and the software running on the VS100 could directly access a chunk of shared memory on the host.

Ports to workstations with inbuilt graphics hardware came later.

References:

[1] https://www.youtube.com/watch?v=cj02_UeUnGQ

[2] https://en.wikipedia.org/wiki/VAXstation#VAXstation_100

yjftsjthsd-h
For anyone just reading the title: It's about physical thin-client X11 server machines, not xterm.
amiga386
I was driving myself mad; xterm was released in 1984, and it didn't really matter that there was no XDM, because you merely needed a window manager to tile your xterm windows...

But sure, the definition of "X terminal" here is meant to mean dedicated hardware that runs an X server connecting to a remote X11 display manager, and nothing else. Those were always somewhat niche, in the same way that once terminal emulators existed, general purpose PCs displaced hardware terminals.

In the 1990s, my university used inexpensive diskless x86 PCs running X386 (the predecessor of XFree86) with just a ramdisk, booted via DHCP / BOOTP / TFTP.

lproven
I feel like I aged a decade reading that. You're not wrong but it's an interpretation that didn't even cross my mind. :-(
somat
The tricky thing about justifying an X terminal is that it requires a nice graphics system, and probably a nice CPU to drive that graphics system as well, so really the only thing you don't need is storage. Basically, it is hard to save money because you are buying most of a nice computer anyway.
PaulDavisThe1st
1995: I work 3 days a week for Amazon but need some sort of computing device at home when I'm parenting my kid. I have a nice SPARCstation in the office, and money is actually a little tight, so I'm not getting one of those for home use. I'd already used NCD X terminals in my previous job at UWashington CS&E, so we got one of them, connected it via a 9.6k modem (the NCDs could do this, using SLIP), and I was able to dial into "the office" and have a relatively normal X session in my home.

OTOH ... we had already started using the first Linux system at Amazon by that time, and a few years later, when a 25 MHz 486 running Red Hat became the first computer I actually owned (I resisted for that long!), the idea of an X terminal seemed a bit quaint and limited.

roryirvine
The biggest saving by far was that they needed fewer people to administer them.

At the time, it was typical to assume that each sysadmin could look after a dozen machines on average, maybe twenty at best. So if each of those dozen machines could support 10-20 users on X terminals, then you'd only need a single sysadmin for every 250 users. That was a big cost saving vs having a dedicated workstation on every desk.

But in the end, DOS/Windows PCs had even bigger cost savings, because most users could be expected to do minor admin tasks themselves, supported by cheap IT helpdesk staff rather than expensive Unix greybeards.

smackeyacky
For a brief-ish period, Sun would happily sell you diskless Sun workstations that did everything an X terminal did, plus offered local computing.

Two of the universities in town had labs of them for students, all booted remotely, with all the storage on a bigger Sun down in the server room, ugly coaxial Ethernet everywhere, and those funky blue/silver mouse pads and optical mice.

My boss at the time was pretty dark on Sun, because they sold her a lab full of Sun 3 workstations without telling her the SPARCstations would be released shortly afterwards.

bigfatkitten
They did that for quite a while, right up through the Ultra 5 era. Sun Ray was the successor.
justin66
A cursory look at Byte Magazine from September of 1987: a 233 MB Priam hard drive for $2,888. That's with a crappy RLL interface rather than SCSI.

If you think about a lab full of computers doing relatively simple Unix work, and how much money would be saved by just having a single drive (and all other things being equal, which they of course aren't), it's not trivial.

Plenty of Unix systems could be booted from NFS. Assuming no local storage, and that you need X, the question is: what's the difference in cost and capability between an X terminal and a diskless Unix station that runs X?
siebenmann
In 1989, the costs appear to have been significantly different, although on a casual search I don't see list prices for, e.g., then-older Sun models like the 3/60. A brand-new SPARCstation 1 (also 1989) was far more expensive than an NCD 16 or NCD 19, and a diskless Unix workstation would need more server support and disk space than an X terminal. Today is a different story, but that's because PC prices have dropped so dramatically.
The Sun Ray terminals they used at my university back in the early 2000s were very nice.
rbanffy
Sadly they used a proprietary protocol. I don’t think there is much software to make them usable today, unlike X terminals, which still can be useful daily drivers (if you don’t use browser based apps).
msgodel
It's similar to the issue plan9 terminals have. As long as you have a CPU with an MMU and some RAM (which you need a fair amount of for the graphics anyway) you might as well just run the software locally. All the peripherals are relatively cheap.
MisterTea
> It's similar to the issue plan9 terminals have.

To be clear: Plan 9 is not limited to terminal-server setups. It can function just fine as a stand alone OS.

> As long as you have a CPU with an MMU and some RAM

Those weren't cheap at the time. If you read the Gnot presentation (the Gnot being an early Plan 9 terminal), it states that they were cheap enough that a user could have one at home and one at work. It also states that some things, like the text editor, could run locally, while compute-intensive tasks like compiling could be exported to big, expensive CPU servers. These machines had a few megs of RAM, a 68000 CPU, and monochrome graphics. The CPU servers were Sun, DEC, SGI, etc.: machines that users could certainly not afford one of, let alone two.

zozbot234
You don't need an MMU-capable CPU to render remote graphics. You don't even need much more RAM than a local framebuffer, which for low resolutions/color depths is very little RAM.

Proving this point, there are VNC client implementations that can run on MS-DOS machines.
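For a sense of scale (my arithmetic): a square 1024x1024 monochrome screen, like the NCD-16 mentioned elsewhere in this thread, needs only 1024 * 1024 / 8 = 128 KB of framebuffer, and 640x480 at 8 bits per pixel is about 300 KB. Either figure was modest even by late-80s memory standards.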

msgodel
You probably do need it to run Xorg though. I'm unaware of an X server that can run on DOS.
bitwize
DESQview/X
nothingneko
wouldn’t you just need enough to render a window? i’m not sure if everything is sent pre-rendered or not
somat
Think early-90s computers and everything required to run an X server well: lots of memory, nice graphics, a nice CPU to move those graphics around. Despite being technically thin clients, dedicated X servers were not cheap.

It is sort of like the anecdote about an early sysadmin who traced a problem with the new department laser printer locking up for hours down to one engineer, who had to be told to knock it off when he explained that he wasn't printing anything: the printer had, by far, the most powerful CPU in the building, so he had ported all his simulation programs to PostScript and was running them on the printer.

throw0101c
> Dedicated X servers were not cheap.

As a one-time uni sysadmin back in the day: in our EE lab(s) we had students running Matlab on a Sun E3500, with the display going up on diskless ~10-year-old Sun SPARCstation 5s that we had lying around (originally from the early 1990s).

rbanffy
> so he ported all his simulation programs to postscript

That’s enough punishment in itself.

wang_li
>Think early-90s computers and everything required to run an X server well: lots of memory, nice graphics, a nice CPU to move those graphics around. Despite being technically thin clients, dedicated X servers were not cheap.

They really didn't have that. Unix workstations running X largely had a graphics stack that was almost entirely software, with little or no hardware acceleration. What made them workstations compared to PCs was the large "high" resolution monitors. The DEC lab at my university consisted of DECstation 3100s (16 MHz MIPS R2000 with 16 MB RAM and an 8-bit bitmapped display with no hardware acceleration). The engineering department had labs with Sun and RS/6000 machines.

Commodity PCs were 386s with 4-8 MB RAM, monitors that would do 640x480 or 800x600, and video cards that would do 8 or 15/16 bpp. A great day was when someone put a Linux kernel and XFree86 on a 1.2 MB floppy that could use XDMCP to connect to the DECs or Suns, turning any PC in the PC labs into an X terminal.
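For anyone wondering what that floppy actually had to do: XDMCP support is built into the stock X server, so a single command (hostname invented here for illustration) is enough to put a remote display manager's login box on the local screen:

    X :0 -query bighost.example.edu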

c-linkage
Ah yes, the old ray tracer in PostScript.
I had to both administer and operate early X terminals from several vendors; they were interesting. Labtam made strides developing boxes using the more novel Intel chips, and this may have been what they sold off when they got out of that business and moved to being an ISP in Australia.

I enjoyed using Blits and the early DEC Ultrix workstations.

Thin X terminals were super cool. But they also really stressed your Ethernet, and because we didn't have good audio models in X at the time, they stopped being as useful once multimedia became viable. But for a distraction-free world of multiple terminals and a low-overhead window manager... super good price/performance.

wkat4242
I was surprised how a room of top-notch 1280x1024 terminals was able to function so well on a shared 10 Mbps segment, with pretty bad collision detection to boot. X apps of the day were super optimised for local drawing. Even games were super smooth. Toolkits like Motif were all draw calls. By the way, back then we thought Motif was bloated, lol :)

And then... came the internet. People suddenly started running NCSA Mosaic in droves, which bogged down the single-core server. And those browsers started to push lots of bitmap stuff through the pipe to the terminals. Now that was bad, yes. When Netscape came along with its image backgrounds and its even heavier processes, people started moving away to the PC rooms :( Because all scroll content needed to be bitstreamed then.

PS: video content at that time wasn't even a thing yet. That came a bit later, with RealVideo first.

But there was a time when X terminals were more than sufficient, probably for a decade or so.

bmacho
> Because all scroll content needed to be bitstreamed then.

Is it better now? Can a browser locally scroll an image, without restreaming it?

londons_explore
A modern browser (i.e. Chromium) uses the GPU for all drawing.

Here is an awesome (slightly outdated) talk about the architecture: https://groups.google.com/a/chromium.org/g/blink-dev/c/AK_rw...

The basic idea is that HTML content is drawn into transparent 'tiles' which are layered on top of one another. When the user scrolls, the tiles don't need to be redrawn; they are just re-composited at their new positions. GPUs are super fast at that, and even a 15-year-old GPU can easily do this for tens of layers at 60 FPS.

On Linux with a remote X server, I think the tiles would all end up on the X server, with only a pretty small 'draw tile number 22 at this location' request going across the network. So the answer to your question is 'yes'.
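As a toy model of that tiling idea (just the shape of it, nothing like Chromium's real code): scrolling changes only where the pre-rendered tiles land on screen, never their contents.

    #include <string.h>

    #define TILE 256

    /* One pre-rasterized tile of page content (8-bit grayscale for simplicity). */
    typedef struct { unsigned char px[TILE][TILE]; } Tile;

    /* Composite a vertical strip of tiles onto the screen at the current
       scroll offset. Scrolling only changes scroll_y; the tile bitmaps stay
       untouched, which is why composited scrolling is so cheap. */
    void composite(const Tile *tiles, int n_tiles, int scroll_y,
                   unsigned char *screen, int screen_w, int screen_h)
    {
        for (int t = 0; t < n_tiles; t++) {
            int top = t * TILE - scroll_y;              /* tile's on-screen y */
            for (int row = 0; row < TILE; row++) {
                int y = top + row;
                if (y < 0 || y >= screen_h)
                    continue;                           /* clipped away */
                int w = screen_w < TILE ? screen_w : TILE;
                memcpy(&screen[(size_t)y * screen_w], tiles[t].px[row], w);
            }
        }
    }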

lxgr
Does all of that (i.e. GPU rasterization and GPU compositing) really work over the network with common browsers?

Based on my limited experience, the performance of running Firefox remotely on a local X11 server was very poor, and I assumed that the absence of these types of acceleration was to blame.

I could imagine XRender working, though, which would at least support blitting most of the pixels up/down when scrolling, and would only require pushing new ones over the network for the newly exposed areas.
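That kind of blit is something even core X can do server-side. A minimal Xlib sketch (my illustration of the technique, not what any browser actually does): XCopyArea shifts the pixels inside the X server, so only the small copy request and the redraw of the newly exposed strip cross the network.

    #include <X11/Xlib.h>

    /* Scroll a window's contents up by dy pixels, server-side. The client
       then only has to redraw the dy-pixel strip exposed at the bottom. */
    void scroll_up(Display *dpy, Window win, GC gc,
                   unsigned int width, unsigned int height, unsigned int dy)
    {
        /* Shift the existing pixels up; no image data crosses the wire. */
        XCopyArea(dpy, win, win, gc,
                  0, dy,                /* source x, y */
                  width, height - dy,   /* region being moved */
                  0, 0);                /* destination x, y */
        /* Clear the newly exposed band and request an Expose event so the
           application repaints just that part. */
        XClearArea(dpy, win, 0, height - dy, width, dy, True);
    }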

ianburrell
I think the XRender extension allows scrolling images. XRender allows quick recompositing of text and images.

But this requires the browser to have a special path for remote X, and not just use the GPU. Or even just a path for X that lets the X server do the rendering.

jandrese
In theory you can send the images over as X backing stores and do just that. But I'm not sure it was implemented that way.

I remember GTK 1 was well optimized for X and you could run GTK applications over slow modem lines quite comfortably. GTK 2 went a different direction entirely and became almost unusable over the Internet. I doubt GTK 3 or 4 are any better now that they're designed for compositors.

aidenn0
It's much worse. Now fonts are rendered in the client and then streamed over the network to the server as bitmaps.
ianburrell
The XRender extension can load font glyphs into the X server and render them locally. But that requires the browser (the X client) to know about this and not do the rendering itself.
kristianp
I remember using X terminals to do assignments in Modula-2 in about 1993. They had 1-bit screens; I think they were square, 1024x1024. Very high resolution for the time.
rbanffy
The NCD-16 had a square 1024x1024 1-bpp screen. I want one so much, but the ones that made it to Ireland and Europe seem to all have been responsibly recycled. :-(
loph
It was the interoperability of X Window System devices that enabled this.

Your X server (e.g. an X terminal) could display applications running on a variety of other vendors' hardware and operating systems. The specification enabled this interoperability. Apps running on SunOS could display on VAX workstations, and vice versa (as long as you had TCP/IP installed!)
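The client side of that interoperability is tiny. A hedged sketch (hostname made up for illustration): any Xlib program can point itself at any X server on the network, whatever hardware it runs on.

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        /* Same as running with DISPLAY=xterm1:0; pass NULL to honour $DISPLAY. */
        Display *dpy = XOpenDisplay("xterm1:0");
        if (!dpy) {
            fprintf(stderr, "cannot reach X server\n");
            return 1;
        }
        /* Everything below is just protocol requests over TCP to the terminal. */
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         10, 10, 200, 100, 1,
                                         BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        XMapWindow(dpy, win);
        XFlush(dpy);
        /* ... event loop elided ... */
        XCloseDisplay(dpy);
        return 0;
    }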

The advantage X terminals had was that they were relatively inexpensive to buy and operate. Most did not require management; however, the CPU cost moved into a computer room, and you needed a pretty fast network to get reasonable performance.

hylaride
> The advantage X terminals had was that they were relatively inexpensive to buy and operate.

This was not really true. Those terminals were often extremely expensive compared to "off the shelf" PCs of the time. They required decent CPUs and memory (this was before hardware acceleration), as well as (for the time) decent networking hardware to drive them at scale for larger institutions. On top of that, they usually were connected to, and had to drive, very high-res monitors, which weren't cheap either (anecdotally, the first time I "surfed" the web was at my mom's work at a telco lab, on HP-UX machines, in ~1993-94; when we later got the internet at home, I hated doing it on the 800x600 monitor we had).

As you alluded to, what it did provide was a central way to administer software and reduce licensing costs, which pre-2000 meant almost all commercial software: companies were loath to buy multiple commercial compilers/Matlab/etc. (and the software vendors took a while to update their models, too). In those days sysadmins often did things by hand, and user management alone was far easier on a central mainframe/server. It also allowed some vendor interoperability, as you mentioned.

"Dumb" text terminals were also the way that things were already done, so they just continued on that model with a GUI until the benefits of local computing became so much more economical. In large orgs, this usually was around when windows started to become prevalent over the course of the 1990s (sometimes with X-servers or terminal clients on windows to replace the physical terminals).

loph
They were inexpensive when compared with a VAXstation II/GPX!
dfox
As for NCD X terminals (at least the later ones), a surprising amount of stuff could run directly on the terminal (which ran some weird MMU-less BSD variant): mwm and the Motif session manager, a dtterm-like terminal with telnet and serial port support, some kind of JVM, and two different variants of Mosaic were all part of the SW package (it booted either from a flash PC card or from NFS).
shrubble
I believe some used Intel i860 processors; I'm not sure whether the MMU was integrated into it or not.
The one I still probably have somewhere has a PowerPC and an S3 video card.
Even today, in my day job, I still spend 90% of my time in a NoMachine NX session to a remote Linux desktop, where I do my work.

We do it as an extreme form of access control. Our workstations cannot reach any of our systems. Thus if a laptop is stolen, nothing of real value is lost.

pjmlp
At my university we had a couple of X terminals from IBM connecting into DG/UX, and I can certainly vouch that for the early 1990s they weren't that cheap to acquire.

If memory serves me right, we had four of them in the student lab.

Everyone else could enjoy connecting to DG/UX via a terminal app on Windows for Workgroups, or the older green- and amber-phosphor text terminals.

As an anecdote, those big-screen X terminals were quite often used to run four parallel sessions, a mix of talk and some MUD game.

bluGill
They were cheap compared to the cost of the workstation they connected to. But nobody would call them cheap even if you look at the price today without adjusting for inflation.
pantulis
We had like 4 Tektronix X terminals that could connect to Sun workstations for those fortunate enough to have accounts; the rest used VT terminals to a VAX. And yes, the talk and MUD use cases were popular ;)
beej71
The good old days. We had a bunch of X terminals hooked up over thinnet to some HP 735 servers in college.
HenryBemis
In those good old days my uni was giving away those bulky Unix "manuals" (after every major upgrade they refreshed the documentation/dossiers) and would leave a few dozen of the 'outdated' ones on a table. Everyone would grab one, first-come-first-served, and you could end up with a 'useless' dossier, but they were still amazing reads.
bluGill
I miss the days of useful manuals. They were hard and expensive to write, but they had a wealth of technical information that is often impossible to find today.
arethuza
One of the nice things about getting a new Sun workstation back in the day (say, 1990 or so) was the vast amount of excellent printed documentation, and the folders into which it had to be clipped. Sun even used to supply proper books (e.g. the PostScript books) with OpenWindows to cover NeWS...
cbm-vic-20
I recall in the early 90s, X Terminals were useful for accessing applications that were licensed per-machine, or were only available on expensive hardware. X Terminals let users use those applications from anywhere on campus. Very convenient!
black3r
At my university we were doing this with Matlab in 2015...
ziml77
It's still not a dead concept. Some software remains very expensive to license. Though you're much more likely to see Citrix used for this.
aa-jv
For most of the latter part of the '80s, I used Quarterdeck DESQview as my 'terminal', which allowed me to have 4 independent concurrent MS-DOS sessions running on my 386, each with its own video and network connectivity, so that I could telnet into my MIPS Magnum pizzabox and do some work.

At the beginning of the '90s, I was on the hunt for an alternative to the MS-DOS part when, eventually, I tried Minix instead .. and that led to replacing it with Linux as soon as it was available on funet. Multiple runs to Fry's to get more RAM and some CPU upgrades later, I was soon compiling an X Window System setup on my brand new 486 with 16 megabytes of RAM .. and about a week after that, I replaced my Quarterdeck setup with a functioning Linux workstation, thorns and warts and all. That was a nice kick in the pants for the operators who were threatening to take away my pizzabox, but it was short-lived joy, as not long thereafter I was able to afford an Indy, which served the purpose great all through the '90s - and my Linux systems were relegated off the desktop to function as 'servers', once more.

But I always wondered about Quarterdeck's DESQview/X variant, and whether that would have been an alternative solution to the multi-term problem. It seems to me that this was available in 1987/88, which is odd given the article's claims that X workstations weren't really widespread around that period.

rjsw
I ran my own port of X11 on top of Interactive Systems 386/ix running on a 386 in 1987/88.
aa-jv
Nice. I remember poking at that a few times but never being able to justify a license purchase to my boss, who believed that I had everything I needed in the form of the Magnum pizzabox, and why would anyone need a UI for programming, lol ..
lproven
> But I always wondered about Quarterdeck's DESQview/X variant

Dv/X was remarkable tech, and if it had shipped earlier it could have changed the course of the industry. Sadly, it came too late.

> It seems to me that this was available in 1987/88,

No. That is roughly when I entered the computer industry. Dv/X was rumoured then, but the state of the art was OS/2 1.1, released in late 1988 and the first version of OS/2 with a GUI.

Dv/X was not released until about five years later:

https://winworldpc.com/product/desqview/desqview-x-1x

1992. That's the same year as Windows 3.1, but critically, Windows 3.0 was in 1990, 2 years earlier.

Windows 3.0 was a result of the flop of OS/2 1.x.

OS/2 1.x was a new 16-bit multitasking networking kernel -- but that meant new drivers.

MS discarded the radical new OS, it discarded networking completely (until later), and moved the multitasking into the GUI layer, allowing Win3 to run on top of the single-tasking MS-DOS kernel. That meant excellent compatibility: it ran on almost anything, and it could run almost all DOS apps, and multitask them. And thanks to a brilliant skunkworks project, mostly by one man, David Weise, assisted by Murray Sargent, it combined 3 separate products (Windows 2, Windows/286 and Windows/386) into a single product that ran on all 3 types of PC and took good advantage of all of them. I wrote about its development here: https://www.theregister.com/2025/01/18/how_windows_got_to_v3...

It also brought in some of the GUI design from OS/2 1.1, and mainly from 1.2 and 1.3: the Program Manager and File Manager UI, the proportional fonts, the fake-3D controls, some of the Control Panel, and so on. It kept the best user-facing parts and threw away the fancy invisible stuff underneath, which was problematic.

Result: a smash hit that redefined the PC market, and when Dv/X arrived it was doomed: too late, same as OS/2 2.0, which came out the same year as Dv/X.

If Dv/X had come out in the late 1980s, before Windows 3, it could have changed the way the PC industry went.

Dv/X combined the good bits of DOS, 386 memory management and multitasking, Unix networking and Unix GUIs into an interesting value proposition: network your DOS PCs with Unix boxes over Unix standards, get remote access to powerful Unix apps, and if vendors wanted, it enabled ports of Unix apps to this new multitasking networked DOS.

In the '80s that could have been a contender. Soon afterwards, though, it was followed by Linux and the BSDs, which made that Unix stuff free and ran on the same kit. That would have been a great combination: Dv/X PCs talking to BSD or Linux servers, back when those Unix boxes didn't really have useful GUIs yet.

Windows 3 offered a different deal: it combined the good bits of DOS, OS/2 1.x's GUI, and Windows 2.x into a whole that ran on anything and could run old DOS apps and new GUI apps, side by side.

Networking didn't follow until Windows for Workgroups which followed Windows 3.1. Only businesses wanted that, so MS postponed it. Good move.

TMWNN
I presume that X terminals did not appear at the same time as the X Window System because Project Athena <https://en.wikipedia.org/wiki/Project_Athena>, which created X, had its users on "real" workstations from the start, the IBM RT PC being the first. I don't know if MIT ever deployed any X terminals but, as I understand it, one of the tenets of Athena was that every workstation is a full-fledged, remote-login-capable node of the Athena cluster.
bediger4000
Project Athena is not given enough credit.
bitwize
One of HP's first X terminals ran on a 186. The same beleaguered 16-bit CPU that made the Tandy 2000 go also powered X terminals in the early 90s (albeit at twice the speed).

For all its "bloat", X could support a very sophisticated GUI -- over the network -- on very limited hardware by the standards of 30 years ago, let alone today.

anthk
The Linux Gazette had several articles on that, one of them from Andorra.

Great times.

https://linuxgazette.net/issue45/ward/ward.html
