I don’t have experience with GNU/Linux going back that far, but even since 2014 (when I got started with Ubuntu), I’ve seen things get much, much better.

These days my personal and professional daily drivers are, respectively, a 2021 and a 2022 Framework laptop, both with Fedora/GNOME updated regularly. The software is so much more usable than any MS OS since Windows 7, and even MacOS of the past 2-3 years (many people I talk to have remarked on how MacOS quality and usability have been degrading recently).

The only pain points I notice with my Framework are not strictly software-related (disclaimer: I don’t play much in terms of modern games anymore): power management/suspend (an atrocity in my experience), and the quality of the internal peripherals (the mic, webcam, and trackpad are almost useless, so I just use a dock and external peripherals - and they work great with minimal hassle).

Not to sound like a broken record, but 2024 will probably be the year of the GNU/Linux desktop, especially with Framework’s AMD 16-inch laptop with upgradeable GPU making a huge splash, and MS and Apple continuing to alienate developers and power users. I’m actually glad they’re getting worse because it pushes more people toward FOSS - severe but temporary punishment is actually more effective and less cruel than life in prison.

I keep reading about XYZ being the reason people will finally flock to GNU/Linux - I've been hearing it since Windows XP - yet instead, I and others gave up on the desktop and went back to Microsoft/Apple/Google offerings.

While Valve admitted failure, and has to emulate Windows (yeah, not quite emulation) as the mechanism to have games on GNU/Linux, not even studios developing Android games with the NDK can be bothered to target GNU/Linux.

> While Valve admitted failure, and has to emulate Windows

That's hardly an admission of failure. No different in practice from "emulating" the JVM or .NET CLR for applications to run that way.

The thing that will get Linux over on the desktop is the thing that got Windows over: preinstalls, on almost all models, from all major PC vendors.

It'll never happen, but that's what it'll take.

Thing is, people just don't install an OS to begin with - as Torvalds once remarked himself.

You don't see Steam Deck users installing Windows, though they very well could, and I'm sure some would prefer to do so. But the experience on that device isn't bad enough - or bad at all, for most - for people to bother; likewise, the experience of Windows isn't bad enough for people to bother.

Framework won't suddenly cause a major shift, simply because they just don't sell that many devices. These kinds of things don't happen in a year.

For me the target is “GNU/Linux is viable and practical for the average user”, not “GNU/Linux has 30% market share”. I haven’t tried System76 so I don’t know much about GNU/Linux hardware outside the Framework line, but upgradeable graphics in the upcoming 16-inch models means you could dual-boot: Windows for AAA gaming and maybe a handful of Windows-specific apps, and GNU/Linux for machine learning and just about everything else.

Framework doesn’t even sell their laptops with Linux preinstalled. How are they even that much better than Dell, for instance, in this regard?

Actually, people are indeed installing Windows on the Steam Deck, and all upcoming competing devices are being based on a Windows 11 core.

The netbooks show how well it worked, with every OEM having their own flavour.

Linux still has no way to make stuff just work, reliably, every time, aside from containers. Windows has fewer "Oh look, I updated and now Steam doesn't work, and git cola doesn't either" problems.

It's usable but doesn't seem to be getting better. Linux prizes quality over compatibility and rewrites things whenever the developers feel like it, in non-backwards-compatible ways.

I really wish there was a Canonical-sized company doing exactly what Ubuntu does, but with something like NixOS - or else just exactly what Debian does, sans dynamic linking, so there are fewer dependencies to have conflicts with.

Wow it sounds like you have some barely-usable hunks of crap that just happen to run software and you don't mind if all the peripherals suck and malfunction.

"Other than that, Mrs. Lincoln, how did you enjoy the play?"

Seriously, though, I feel you in terms of power management. My memories of my Thinkpad's Fedora days are painful in terms of sleep and hibernate.

My Chromebook is bad, of course - Linux under the hood, right? It likes to utterly disobey the lid switch. If I shut the lid it stays awake until it otherwise idles out. If the computer is off and I open the lid, it springs to life even before I can hit the Power button. This is Not Supposed to Happen. wcyd?

The peripherals don’t “malfunction” in the strict sense, they’re just not good. To their credit, the mic and cam do have hard kill switches.

Also, the built-in fingerprint reader works great on Ubuntu and Fedora (Fedora even has it integrated with the terminal).

For more context, I rarely use laptops in my lap; I work from home and prefer at least some semblance of ergonomics, so for me they’re really just “easily-movable computers”, i.e. I can work from the basement when needed, and at the dinner table when the house is empty.

I started using desktop Linux again on a spare laptop after a nearly 10-year hiatus, and as far as I can see, GNOME is getting stupider with each release. KDE is fine; its devs don't think they know what you want better than you do, unlike the GNOME folks.

I'm struck by how beautiful that page design was.

We don't really handle printed things that look nice on a daily basis anymore. Certainly web pages don't look/read as well, and as much as I like Feedly/Unread, reading an article there isn't better than reading a printed magazine article - it wasn't then and it still isn't.

What am I missing? It's a three-column layout with an image in the center. It's OK, I guess. I'm not seeing any beauty. In fact, some of the text flow around the image looks quite awkward to me. It was probably done in 5 minutes with PageMaker or whatever publishing software magazines used at the time.
[Author/submitter here]

I did work in PCW's office a few times.

I think it was QuarkXPress on classic Mac OS. So there would be an artfully-designed template into which my edited copy was poured.

I did like PCW's design, though.

I worked for Heise for a while, and got to read and take material from Linux Magazine, both in English and German. I find German typographical design awful myself and I think most German print tech mags look really bad...

But I think that to German eyes, British print material looks so glossy and (to borrow a musical term) over-produced that the instinctive response is to distrust it: it looks like advertising or "advertorial".

What to me looks cheap and amateurish and ugly, I suspect to a German looks trustworthy and honest.

Whereas as a Brit, American advertising looks so plastic and fake that I instinctually distrust it. It's all acres of gleaming white teeth and fake smiles, with a big glob of disclaimer squeezed in somewhere, and it makes me recoil from the product.

Cf. product naming.

The last motorcycle I owned was a Kawasaki ZZR-1100 (a "zed zed are eleven hundred"). To Brits, the numbers sound hi-tech and exciting.

In Japan it was a ZX-11. Maybe that reminds Brits of budget Sinclair computers too much?

In America it's a ZX11 Ninja. Zee ecks eleven Ninja.

To a Brit that makes it sound like a children's toy. I wouldn't want to be seen dead on a bike called a "ninja". I'm not 11 and I do not aspire to be a turtle.

The unfortunate reality of working with responsive layouts and unlimited canvas sizes is that it's rare to see anything on the web that looks as delightful as a page in a magazine or book, where the dimensions/aspect ratio are static. There are occasionally some pages that really demonstrate the best of what can be done, but creating these is significantly more expensive than designing a nice-looking layout for a magazine.

I actually really miss designing webpages without any consideration for accessibility or for different sized screens because I felt like you could be a lot more creative back when you'd just throw a "Best viewed at 800x600 in Internet Explorer" disclaimer on your website.

I honestly don't see the dichotomy here. Can't you do both if you want? Use a design like the one on Liam on Linux' site and make it responsive (where necessary). Actually ... this site is already responsive enough imo.
> Can't you do both if you want? Use a design like the one on Liam on Linux' site and make it responsive (where necessary). Actually ... this site is already responsive enough imo.

You can't both enforce a very specific aspect ratio and dimensions and be responsive with any semblance of practicality. The reason the magazine page can look good is because the person designing it knows exactly the size of all the elements, the aspect ratio and dimensions of the page it's being printed to. All of the graphics are placed intentionally, and all of the type is set intentionally to exactly where they want it to be and how they want it to look, with of course the constraint of the content dictating what that will be.

On a webpage you're just doing a lot of guessing of what the viewport might look like for users. You probably could, through media queries, determine if a person's viewport is exactly right for your one desired layout and only show it if that's the case, but that'd be a waste of resources. You could also enforce the layout you want in your css by setting things by pixels and then you'll be sure that it looks exactly as you intend... except then it isn't responsive.
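
For what it's worth, you could gate the pixel-perfect layout behind an exact-match media query - a rough sketch below, where the viewport size and the "magazine-layout" class are made-up placeholders - but as noted, almost nobody would ever hit it:

    // Only apply the fixed, print-style layout when the viewport is exactly the size it was designed for.
    const magazineQuery = window.matchMedia("(width: 1280px) and (height: 800px)");

    function applyLayout(matches: boolean): void {
      // "magazine-layout" is a hypothetical CSS class carrying the fixed-dimension rules.
      document.body.classList.toggle("magazine-layout", matches);
    }

    applyLayout(magazineQuery.matches);
    magazineQuery.addEventListener("change", (e) => applyLayout(e.matches));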

So hypothetically, yes, you could still have it be responsive, but there's just no point, is there, if only the people that happen to meet your very specific requirements ever see it?

That's why good web design is hard and why it's become, in my opinion, kind of boring in terms of layout design. You have to account for all those different devices looking at it and try to make it look as nice as possible while still being useable.

> a design like the one on Liam on Linux' site

I am that Liam :-) but the design isn't mine. It's a default Dreamwidth theme which I didn't choose or touch.

I think it is intentionally retro and that is why it looks good.

In contrast, compare with the WinAmp website:

https://www.winamp.com/

As soon as I saw the new site, I knew the new version of the program would be junk and wasn't even worth evaluating.

All huge, wasteful of space, nasty animations wasting my time and my CPU cycles. It's plastic sh1t, and also expensive and a lot of work has been wasted on it... and therefore it seems very likely that the program will be too.

Then to rub it in, the screenshot looks like horrendous flat cr4p that reminds me of Spotify, another egregious excrescence of gratuitous Javascript masquerading as a native program.

I'm disappointed with the keming though. "Red H at"

Unmentioned in the review is the disaster of trying to upgrade your RAM in these. I had the model 620, on which the 8 RIMM slots were on two expansion cards. RIMMs were invented by a person who appreciated the user-friendly nature of SCSI. You had to correctly deduce if and where to install "Continuity RIMMs" which shorted the bus to termination networks on the motherboard. And of course you were responsible for not losing these if you ever removed them from the system.

I had completely forgotten about RIMMs. Probably because I went out of my way to avoid them at the time - I can't really remember the details of why now though :)

One thing struck me on that review though - 128MB RAM would've been pretty low specced for a pricy dual PIII 866 workstation by late 2000. I had that much a few years earlier with a dual PII 266 in a company that wasn't exactly flush with cash. I suppose having a TNT card meant it was about as low specced as workstations got.

[Author/submitter here]

> RIMMs were invented by a person who appreciated the user-friendly nature of SCSI.

Excellent! Very nearly a tea | nose > keyboard moment. Well played. :-)

The thing is that, as a reviewer, you only get the machine for hours to days. All you can talk about is what's in front of you, not maintenance, upgrading and so on.

RDRAM was a huge mistake for the computer hardware industry, but I ascribe this more to Hanlon's razor than anything else.

Oh man, I remember attempting to install Mandrake Linux on an HP machine back in '03... It ended up taking a buddy and me all night and into the morning, and even then some drivers were missing and did not work well... Good times though, for sure.
The article has some nice statements which did not age well:

A video card suitable for gamers but not for a workstation - we have since had 2-3 business hype cycles which rely completely on graphics cards.

Or the shocking default setting of DHCP in the TCP/IP settings - as if today 90% of network stacks would work without DHCP, and IMHO a network admin would never rely on a front end for managing IP addresses anyway.

[Author and submitter here]

Yep -- that's why I thought it was funny, and why I bothered rescuing and fixing the text and posting it on my blog.

Does anyone notice how different his writing is than what we see now? I miss when people wrote like this. Everything now is so direct and informal. Internet writing now is like a text message, as abrupt and emotion laden as possible, like it knows the readers have an attention span of less than ten seconds. We’ve lost something.

No, good prose still exists but you have to pay for it.

You are just overwhelmed by free low quality content.

This is a blog though, it's written like a (often paid, yes) newspaper or something article, sure, but it isn't, afaict this is some guy's blog.

I agree with GP - it's a rare treat to be treated as an adult by such things, and I don't think it has anything to do with free; it's just the 'informal' style.

[Author/submitter here]

> This is a blog though

You are both right and totally wrong. Mostly wrong.

> it's written like a (often paid, yes) newspaper or something article

It's written like a magazine article because it is an article that the magazine paid me to write in 2000, and I just found it by accident and put it on my blog.

So it is my blog but that's pro work that has been edited.

> but it isn't,

Wrong.

> afaict this is some guy's blog.

Then you can't tell very well. Sorry, but you can't.

This is the magazine:

https://archive.org/details/PersonalComputerWorldMagazine/PC...

I wrote the original article, 23Y ago, and I wrote the blog post too, and I am writing this.

These days I write for the Register and you can read my stuff every day there:

https://www.theregister.com/Author/Liam-Proven

Bloody hell, sorry, forgive a guy for saying he liked your writing..?
:-(

Apologies. That wasn't how I read it. My bad.

[Author/submitter here]

> I miss when people wrote like this

Good news. We still do.

https://www.theregister.com/Author/Liam-Proven

Oh wow, I bought one of them at the time to run Windows 2000. Nicely put-together machine.

I'm sure they'll figure out the sound stack any moment now. 2001 will surely be The Year Of The Linux Desktop. What an exciting time to be alive!

The old keyboards of 90s and early 2000s laptops were ideal.

Yes they were.

This wasn't about a laptop, but still, you're not wrong.

With that much dough now (inflation adjusted), I just hope that common server-focused Linux distros are supported out of the box. My experience with a Dell workstation (a refurbished one, though) was pretty OK-ish. Not that buttery smooth, but OK.

PCs used to be pretty costly. The long-time editor in chief of PC Magazine coined "Machrone's Law" that the PC you wanted always cost $5,000--and that held pretty well into at least the late 90s.

Supported Linux on one or two PC models is a pretty longstanding sideline of Dell's. They've had some version of Ubuntu on a laptop for a long time.

Right, and that was in 90s dollars!

It's pretty astounding what you get for $2k 2023 dollars now, which is roughly $1k in 1998 dollars.

Plus that $2k 2023 machine will be good enough for probably 5 years even for a pretty serious professional. Versus a $3k 1998 machine that even for a young hobbyist was basically decrepit in under 3 years.

So your modern day computing run rate is like $400 2023 dollars per year, versus late 90s being easily the equivalent of $2k 2023 dollars per year.

Once GUIs became the norm, that shiny new PC you bought/built was basically not fast enough on Day 1. Whereas I still use my 2015 iMac and MacBook and they're really not even that bad for photo and video editing. I did buy an M1 Pro MacBook for multimedia but I'll be the first to admit it was sort of a luxury purchase. (And partly for reasons that may not really play out.)

My first dual floppy PC clone (who could afford a hard disk or genuine IBM?) in 1982 was somewhere in the low 4 digits all-in which was probably close to 10% of my gross salary as an engineer at the time.

But, yeah, you can get a very well equipped Mac for about $3K these days. (Which is obviously not the economy PC option.) Even if you were starting from scratch you'd probably have to really work to get it up to $5K with external monitor, external hard drives, peripherals, printer, etc.

Nice to see XFree86 is much easier to set up these days.

It is.

Until it goes wrong. Then you are totally screwed. It's all automatic and there's nothing to troubleshoot or fix.

There is something that has been seriously bothering me recently when I look back on the last 20 years of computing.

> This supports dual Pentium III processors running at up to 1GHz and up to four RIMMs; the review machine had two 64MB modules for 128MB of dual-channel RDRAM.

I recently dug out from storage an old (from 2002) reasonably high-spec Thinkpad (sorry, I don’t have the exact specs on hand). I was obviously running Linux on it at the time, as I’ve been using Linux as my primary operating system since 1995! Yes, seriously, I have.

Anyway. I decided to boot it up, knowing it was stored in pretty much ideal conditions. It works perfectly, except the battery of course.

It only has 64MB of RAM! But everything runs, even the web browser. And it’s fast.

I only spent about 1 hour playing round. But it left me distraught. What has happened that our web browsers use 2G of memory and are slow! What really is the software progress in last 20 years? Obviously the hardware has progressed massively, but I couldn’t help being concerned at how little improvement was immediately obvious - in fact the GUI was so fast and reactive, faster than today!

I installed a clean Windows 11 on some okay hardware a few days ago (i7-10750H, 32GB RAM, two 5GB/s NVMe SSDs, GTX 2060) and everything is so astonishingly slow; at first I was mad but now I'm just utterly amazed.

I'll sometimes wait three seconds for Explorer to open. Launching any non-trivial app takes on average 10 seconds. Sometimes copying stuff from one disk to another will show the incredible speed of 2 MB/s on stuff that should be literally instant. I have to wait for git, which literally never happens to me on Linux; just opening an MSYS shell lags (it's just a console and bash, ffs!). MsMpEng.exe uses 15-20% CPU constantly even when turning every A/V shit off. And let's not even talk about usability - I got stuck for some reason with keyboard layouts that I didn't install, and apparently in Win 11, by default, no less than THREE SHORTCUTS will switch between layouts - ctrl+shift, alt+shift and win+space - which made coding absolutely insane until I could find my way to the "advanced keyboard settings" where the "alt+shift" shortcut could be turned off (which apparently also automagically turned off the "ctrl+shift" one).

And I could find hundreds of messages on boards complaining about this, with "Microsoft MVPs" answering with the most generic and useless advice like "chkdsk", "do a check from your BIOS", "update your drivers".

Linux on the same hardware absolutely flies even when run from a USB 2 stick, with just as much eye candy with e.g. KDE Plasma.

And let's not talk about C++ compile times which are insane in Windows with the exact same compiler & toolchain (clang 16, libc++, lld...) compared to Linux, even when taking care of installing it on a separate drive, excluding the whole drive from virus scan...

My work laptop runs Windows. A 16-core 3.4GHz i7, pretty respectable. It should be, it's brand new.

C++ compilation takes more than twice as long as my Linux server from 2010. It's got two 16 core Xeon 2600-something processors at 2.6GHz. GCC doesn't even fully load the processors, it's something like 70% average and still runs twice as fast.

Fortunately, the exact details of my job have changed and I can switch to Linux soon. If my work machine still can't compete with my server, I can just slap distcc on it and use my server as a remote compile node. Might do it anyway, the compile is 'only' five minutes on my server, if I can combine these machines it'll likely halve again.

I don't run Windows seriously anymore, but on the small tablet-level devices I've used recently, disabling everything in the "security" menus will significantly speed up Windows.

In particular, disabling MS Defender (or whatever it's called this week) will provide a massive increase in disk speed, as it's no longer scanning every single thing you're reading off the disk.

This isn't so much NTFS as all the filters/etc. that Windows now comes with. In the past you would only get this behavior if you installed Norton/McAfee and let it turn IO problems into CPU-bound ones, but now it's mostly unavoidable on Windows, while your Linux machine probably doesn't even have an always-on virus scanner available.

Last time I ran windows on my personal machine, I saw the same thing. Unfortunately my work machine has workgroup policies that prevent even temporarily disabling Microsoft McAfee

That'll probably be NTFS shafting you. It's terrible at handling small files.

Trick I found was to fire up a Linux VM and do your compilation in there.

Also Windows file handling in general. It's incredibly flexible, allowing an arbitrary number of drivers to log or modify every disk access. That's great if you want real-time antivirus software, transparent file encryption (without encrypting the full partition), data-loss prevention software, advanced file syncing etc. But it also makes workflows that access lots of small files very slow.

Instead of a VM you can also use WSL2, assuming you put the files in the linux file system instead of accessing the host file system. (Yes, technically that's also just a fancy VM running in Hyper-V; but it's a more integrated experience than e.g. VMWare)

Yeah, granted the server is running a ZFS pool on 7 mechanical SAS drives. But the Windows machine has an NVMe drive so I thought it'd even out.

Looking into compiler optimization, I can tell it's a very deep rabbithole that I'd rather stay away from if at all possible

I actually hadn't even thought of that. I'll have to see what performance on a VM is like these days, maybe it'll be enough to do my normal daily work?

Last time I tried that was back in the windows 7 days and VMs were almost unusable.

If you provision the whole disk up front, i.e. reserve the space, then it'll be near native. That applies to Hyper-V, VMware Workstation and VirtualBox VMs.

With WSL this is not necessarily the case. I don't use WSL for a number of reasons, including that.

It’s just economics, really. 1GB of RAM was unfathomably expensive back then so if you wrote software requiring it you’d be dead in the water.

These days RAM is cheaper than development time so instead of writing tightly memory controlled programs in C we write in e.g. garbage collected JavaScript. It's easier to learn and quicker to develop with. Part of me is broken hearted that the “art” of efficient programming is lost but it was an inevitable consequence of software becoming so central to our lives.

(and hey, that same discipline does still exist today if you want to dabble in it, in embedded spaces etc)

I also feel "back then" it was quite easy for an enthusiast machine to be quite well past "top of the line", whereas now we've been in the 2-4GB "standard" for so long that most people have it - it's decently hard to have something surprisingly faster than "top of the line".

E.g, I remember "recommended specs" for computer games that would ask for 64 MB of RAM; I had 128 MB.

Something like 16GB is or exceeds recommended specs for most desktop things... But some people are running 64 or 128GB in their desktops. So, I think overkill computers are still out there. It is pretty hard to get ahead of CPU specs now though: used to be you could get server chips (and a server/workstation board) and have something that was clearly better in every area other than price. Now, desktop sockets get new architectures before servers, and server chips have more cores, but usually lower clocks and not everything will scale to more cores in a way that makes that better.

That’s the main thing I feel - individual core speed is almost always the determining factor for “teh snappy” on modern desktop/single user systems.

> 1GB of RAM was unfathomably expensive back then

Well ... about $6k which wasn't astonishing for workstation budgets. These were machines that many people used to run software that cost $100k/year.

A credit to the incredible expandability of this era of Precision Workstation machines is that you could have expanded the model 620 to 3GiB of main memory, which isn't bad at all. With 2x Pentium III Xeon CPUs and 3GiB you could credibly use that machine today. All that RAM only costs $75 today.

I'll sound like a miserable old bastard here, which I am, but I'm all for bringing back Gopher and banning any non native applications on a platform. Electron I'm looking at you.

All the native apps on my Mac desktop, an ass end M1 mac mini, are really really damn fast.

The moment I put windows, any managed apps or anything web based or wrapped web applications near any computer, a world of pain opens. It's mostly energy sucking buggy crap which serves the developers more than the customers. That's a cancer in the industry which is only going to get worse when we ship towers of excrement on top of WASM.

I have pipe dreams of using Serenity OS as my daily driver.

Also, I advise you to try Haiku OS Live CD. It’s refreshing.

I was a BeOS user for a few months back in the day. That would be perfect but no I need to use things which are on macOS.
My first PC was a Pentium 90 with 16MB RAM in 1994. I bought it specifically to run Linux. I've never really used DOS/Windows as my main OS on my personal machine.

Everything was fast back then. I had X11, Netscape, ethernet in the dorm. In a lot of ways I prefer the simple text heavy web pages of the time.

I've been designing integrated circuits including many chips in popular smartphones, network switches, and more. We make stuff faster but software manages to expand making my 8 core CPU with 64GB of RAM feel about the same when web browsing compared to back then.

I don't want to go back to those days but in many ways my computer with multiple X terminals, XFCE desktop, and Firefox, is extremely similar.

Even years after that Netscape would crash without any error messages and vanish multiple times a day. The past might have been good but we take so many things for granted today. Even if a browser crashes, it usually keeps all your tabs intact when you start it back up.

Something disappears into the ether every couple of years these days, but back in the day losing most of a day's work to a crash or some fat-fingered muscle memory was not an uncommon occurrence at all.

There are a few things: 1) the tradition of tight code writing has been abandoned in favor of fast deployment, to the point of people expecting day-0 bugs in software; 2) webpages themselves are bigger; 3) latency from web loading has trained people to expect it locally, and a lot of apps are just wrappers around what are effectively webapps.

A couple things: your fonts render better, your text is in Unicode (may not be a big deal for Americans, just trust me that I care that I can write reliably in my own language), your kernel has to support many more abstractions.

But the real issue is that the browser does a lot more than it used to and languages with JIT compilers and automatic memory management are just going to have a much higher overhead than C++. This is doubly true when the languages are non-typed and ultra-dynamic like JS.

For a while I browsed the internet without JS enabled and sites that didn't break loaded without any visible delay.

> What has happened that our web browsers use 2G of memory and are slow!

They are much more complex - ridiculously so - both because of the constant evolution of web standards and because of everything they need to do to keep what they execute in its own sandbox.

It doesn't help that the actual content being served just keeps on bloating. Think back to the average web page in the early 2000s: it was probably still browsed over a 56k modem, CSS was starting to get support, and even if it had a few inline images it was still mostly text embedded in plain HTML. Web apps were CGI, JS was used sparingly, and people were still using frames.

Those pages were positively anemic when compared to what we sling around nowadays. 1 MB used to be a lot of data, and it just isn't the case anymore. You have layers of abstractions piled up on each other, isolated from the rest of the system as much as possible, slurping in, parsing and rendering multi-megabyte compressed, encrypted blobs.

But yes - it is a shame. We are doing so much more to achieve pretty much the same thing, and I keep on asking myself if we really need to.

I can also add mandatory TLS to this list. Back in the day you could receive an entire webpage response in the first TCP packet with data. TLS requires more turn-arounds, especially if the webpage uses multiple different domains, and network latency is not going anywhere - it's physics. I had 200 ms latency in 2000, I have 200 ms latency in 2020, and I'll have 200 ms latency in 2050; that's just thousands of kilometers to travel.
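
A rough back-of-envelope sketch of what those turn-arounds cost, assuming a 200 ms round trip, the usual handshake counts for TLS 1.2 and 1.3, and ignoring server processing time and TCP slow start (illustrative numbers, not measurements):

    // Round trips before the first byte of the response arrives, at 200 ms each.
    const rttMs = 200;
    const tcpHandshake = 1;     // SYN / SYN-ACK; the request can ride on the final ACK
    const requestReply = 1;     // GET goes out, first data packet comes back

    const plainHttp = (tcpHandshake + requestReply) * rttMs;        // ~400 ms
    const httpsTls12 = (tcpHandshake + 2 + requestReply) * rttMs;   // ~800 ms (2-RTT TLS 1.2 handshake)
    const httpsTls13 = (tcpHandshake + 1 + requestReply) * rttMs;   // ~600 ms (1-RTT TLS 1.3 handshake)

    console.log(plainHttp, httpsTls12, httpsTls13);

And every additional domain the page pulls from repeats that dance on a fresh connection.
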
Latencies have gotten better, although not tremendously so.

Depending on your last-mile tech in 2000 and what you have now, you might have gone from dialup with first-hop round trips of 50-100ms to something in the neighborhood of 5-25ms with fiber, DSL or cable. Or maybe you were already on DSL or cable and things are about the same.

Most of a 200ms round trip is on a long-distance route, and fiber routes have improved, as has routing equipment in the data path. If you're outside the US and Europe, there have been a lot more local interconnections too, so routes that used to bounce through the US/EU in 2000 don't in 2023.

Saving a few ms here and there adds up, but 160 ms instead of 200 ms doesn't feel that much different.

Well, part of the answer with respect to web browsers specifically is that we keep 100 tabs open and websites are often very bloated with respect to advertising-related components in particular. Security-related features probably also add some degree of overhead.

If I just want to run something simple locally on a stripped down version of Linux it would probably be pretty fast and resource-efficient.

Less resource pressure. No need to fit in 64 MB when you have 2,000 MB.

Less experienced programmers (only know the latest thing; promoted to mediocre project leads... while the Old Guard doesn't pass on their knowledge before retiring)

Poorly documented code resulting in layered workarounds (MS code from the 80's and 90's was hot garbage in terms of comments; source: I worked on Win95 display drivers and didn't comment any of my 16-bit VXDs. See previous gripe about the O.G. being bad at documentation)

Bloated libraries (ironically, reluctance to jettison old code that is not well understood, so patch around it and keep dragging the baggage to the next release.)

Insufficient test suites causing too much bloat (see previous comment).

More complex APIs (spywhere and integrations everywhere).

I haven't walked the Windows source tree in 30 years; it would be fascinating to see a breakdown of the kernel linker map summaries.

One thing that people overlook is just stuff like screen sizes. Back then your laptop was 640x480. Now it's 3840x2160, 27 times as many pixels to push. Just a single 4K framebuffer will take up 32 MiB; you are gonna need at least two (and your system might even want to use 3), so you are looking at 96 MiB of memory right there. And this leaks into everything - just keeping a single display fed at 60 Hz takes up 2 GByte/s of memory bandwidth, more than half of what your dual-channel RDRAM setup could even deliver.
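
A quick sanity check of those figures, assuming 4 bytes per pixel and taking the bandwidth as a simple refresh-rate multiple (a sketch, nothing exact):

    const width = 3840, height = 2160, bytesPerPixel = 4;

    const framebufferBytes = width * height * bytesPerPixel;
    console.log(framebufferBytes / 2 ** 20);              // ~31.6 MiB for one 4K framebuffer

    const refreshHz = 60;
    console.log((framebufferBytes * refreshHz) / 1e9);    // ~1.99 GB/s just to rescan one display
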
While there is some truth to your point, you have to remember that resolutions were already pretty high in the 90s (CRTs, not LCDs); I remember using 1024x768 as the standard, and it only required something like a 2MB VGA card. Part of the reason we had these huge framebuffers and could still work with a minuscule amount of RAM is cultural. Programs (GUIs, games) would just NOT store a bazillion copies of the framebuffer, but rather only one or two (and mostly in the video card itself), and would prefer to scroll or redraw rather than save/restore the contents of the framebuffer whenever possible.

One pretty obvious example happened in the 2000s where the major 3 operating systems switched to desktop compositing almost simultaneously. You suddenly jump from requiring memory for just one framebuffer to requiring memory for one framebuffer + a copy of all the windows you have created.

I've posted about this before, but people forget that we used to worry about bit depth for the framebuffer and available VRAM. And back then, such a low-RAM card was only doing 2D acceleration at most, with a little offscreen buffer for fonts and sprites that would be blitted into the active framebuffer or composed at the DAC by the card itself (like the cursor).

Just for your example, 1024x768 would require a minimum of 3 MB just for a single buffer of pixels if stored with today's typical 4-byte RGBA and updated in immediate mode. A double-buffering technique would take twice as much. Most often, we were using either 8-bit pseudo-color mode or maybe 2-byte 15-bit true color with only 5 bits per RGB component and no transparency. A 2MB card might support 1024x768 in 15-bit mode or 1280x1024 in 8-bit mode.
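
The same arithmetic for the modes mentioned above, with the bit depths as stated (again just illustrative):

    // Bytes needed for a single framebuffer at a given resolution and bits per pixel.
    const fbBytes = (w: number, h: number, bitsPerPixel: number): number => (w * h * bitsPerPixel) / 8;

    console.log(fbBytes(1024, 768, 32) / 2 ** 20);    // 3.0 MiB with today's 4-byte RGBA
    console.log(fbBytes(1024, 768, 16) / 2 ** 20);    // 1.5 MiB in 15-bit colour, stored as 2 bytes/pixel
    console.log(fbBytes(1280, 1024, 8) / 2 ** 20);    // 1.25 MiB in 8-bit pseudo-colour

The last two fit on a 2MB card; the RGBA one doesn't.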

I distinctly remember on my first serious Linux machine when I downloaded a single large JPEG image from an academic site and displayed it with the "xv" command which was the go-to image viewer at the time. It required virtual memory swapping because the image buffer to store the decoded image would be larger than my entire 20MB of system RAM. This was an unusual image, interesting to me mostly for this fact. As I recall, it was a high resolution scan of an ancient manuscript.

I find this line of thinking very interesting. From above:

> And this leaks into everything - just keeping a single display fed at 60 Hz takes up 2 GByte/s of memory bandwidth, more than half of what your dual-channel RDRAM setup could even deliver.

Does anyone know how practical any countermeasures are? For instance, is tinycorelinux tinyx working for anyone?

I’m particularly interested in all the used Chromebooks floating around and what are some ways to trick them out with customized Linux.

The sad part is the younger generation who think a glacially slow multi-core supercomputer is normal as they have no point of reference for the performance capabilities of older hardware that have been lost due to bad dev practices.
Do you also remember that Netscape Navigator was hot garbage? It was so bad that people would brag about how stable their OS was by how well it handled Netscape crashes.

The first computer I felt was fast and responsive is my current MacBook Air M2.

But in all honesty, I haven’t worked with my own personal computer in years. I mostly worked on corporate Windows computers and my last Mac was a latest generation MacBook Pro 16 inch with corporate installed malware.

[Author/submitter here]

> I only spent about 1 hour playing round. But it left me distraught.

I know what you mean.

One of the biggest shocks I've had recently was researching this:

https://www.theregister.com/2023/07/24/dangerous_pleasures_w...

> What really is the software progress in last 20 years?

It has become significantly cheaper and easier to build software.

That has come at the cost of an enormous complexity paid by hardware and reliability.

I question the reliability part. Certainly there is more complexity (in some respects), more dependencies, etc. But I'd probably argue that most of the software I use today is more reliable than equivalent software (if it existed) that I used a couple decades ago.

Browsers are doing insane work very efficiently: rendering pages of TrueType fonts and ultra-res images while executing hundreds of JS calls per second in a VM and transforming the page layout according to dynamically changing CSS rules - for all 10-100 sites opened at once.

It is Linux that is now slow and bloated, not to speak of the layers of GTK lard that cannot render a window with simple controls without consuming hundreds of megabytes.

Truly this is the year of Linux on the desktop!
Its popularity is always growing by an epsilon.

I can't believe companies were totally cool with shipping very expensive family computers in 2000 with "no sound support." Lol.

I remember the intense frustration with my first computer of having to select which sound card you had for certain games. Oh sure, your computer did come with sound, but not for this game, as your card wasn't on the list of supported cards.

My memory is that pretty much every cheap ISA/PCI card could emulate the SoundBlaster 16, though. I kinda loved fiddling with the sound configs.

(I had a very much not cheap Ensoniq SoundScape Elite)

[Author/submitter here]

> very expensive family computers in 2000 with "no sound support."

This was not a family computer. It was an expensive high-end professional machine.

But saying that... yes, you're right, and that is the primary reason why I called it out in the review 23 years ago.

I felt it was inexcusable then (and now it looks even worse, of course).

The secondary reason, which I couldn't spell out so clearly without losing the magazine several £million in advertising per year, was this:

If one of the biggest PC companies selling its own hardware with this OS can't get the sound chip working, this tells you several things:

* They are not very expert at doing this and that means they don't really know what they are doing. So, if you want all your devices to work, smoothly, and keep working, you should choose a specialist vendor instead.

* That this OS is not really ready for prime time yet because even a major vendor can't get its sound support working, so watch out, and don't buy this for any kind of audio work.

In other words, pointing out a minor flaw was my way of putting up a big warning sign.

"Precision Workstation" was not marketed to home users. It was a real workstation. The Dell home systems were Inspiron and Dimension.
