Preferences

Nevermark
I have spent some percentage of my life attempting to rewrite all software from first principles up.

Software is so spectacularly broken. Applications that don’t let me adjust the position of a little button for my work habits. Why is that impossible!?! A global software and commerce system, where you can buy candy or transfer $ billions, both with cute warnings like “Please, oh, please, sir! Please don’t hit the back button!”

I can sum up the results of my quest quite simply: “The rewrites continue…”

Is this chasing windmills? The case for that seems solid on the surface, but…

It is true that every rewrite of a specific set of features, or a platform for enabling better support for efficiently and correctly commingling an open class of features, inevitably runs into trouble. Some early design choice is now evidently crippling. Some aspect can now be seen to have two incompatible implementations colliding and setting off an unnecessary complexity explosion. Etc.

But on the other hand, virtually every major rewrite points to a genuinely much improved sequel, whose dikes against unnecessary complexity hold up longer, with fewer finger holes to plug, for a better return. Before its collapse.

Since there must be a simplest way to do things, at least in any scoped area, we have Lyapunov conditions:

Continual improvement with a guaranteed destination. A casual proof there is a solution.
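The Lyapunov framing can be made slightly more concrete (a casual sketch of my own, not a rigorous claim): treat some measure of a rewrite's unnecessary complexity as the quantity being driven down.

```latex
\text{Let } C_n \ge 0 \text{ measure the unnecessary complexity of the } n\text{-th rewrite.}

C_{n+1} < C_n \;\text{ and }\; C_n \ge 0
\;\Longrightarrow\; C_n \to C^{*} \ge 0 \quad\text{(monotone convergence)}
```

The catch, and perhaps why the rewrites continue: convergence guarantees a destination, not that the destination is the simplest possible design.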

It’s a dangerous phantom to pursue!

——

It would be interesting to compile a list from the heady '90s, when corporations created boondoggles like Pink and Cyberdog, and had higher aspirations for things like “Object Linking and Embedding”.

You just don’t see as many romantic technological catastrophes like those anymore. I miss them!


throwanem
> Is this chasing windmills?

Yes. Well, "tilting at," jousting specifically. The figure relates to the comical pointlessness of such an act; the windmill sail will in every case of course simply remove the lance from the rider and the rider from the saddle, and turn on heedlessly, as only a purblind or romantic fool could omit trivially to predict.

> You just don’t see as many romantic technological catastrophes like those anymore.

The 90s were a period of unparalleled economic surplus in the United States. There was more stupid money than at any other time and place in history, and stupid money always goes somewhere. Once that was tulips. This time it was this.

> I miss them!

I miss the innocence of the time, however amply undeserved. But I was young myself then.

Nevermark OP
> I miss the innocence of the time, however amply undeserved. But I was young myself then.

I see things slightly differently.

Big failures whose practical and theoretical lessons are then put to use, more carefully but with ambitions unabated, teach things and take technology to unexpected places.

But big failures, institutionalized as big failures, become devastating craters of resources, warding off further attempts for years or decades … but only after the fact. That didn’t need to be their legacy.

throwanem
The abatement of American ambitions is something the world has long, not to say desperately, awaited.

Not that Americans should not aspire; indeed, the world has long loved us best when we dream most generously the utopias of which we forever will dream as long as we call ourselves Americans. It's only that generosity, not the reverie, of which we've lately lost the habit.

I don't know if this is post-hoc justification, but I see myself as somebody who wants to know what everything is and how everything works - so to me, re-implementing (and always failing to take it to completion) is the means to an end. I think I spent the first 25 years of my life studying, so learning has become the goal itself. Work is there to provide funds to support me while I learn. Re-implementing the basics of something is a terrific tool for learning how it works.

Nevermark OP
A fellow autodidact!

Yes, implementing things, even those that others have already done, reveals depths that no study of others’ artifacts or solutions ever could.

alexisread
If you're looking at doing it really from first principles, it's hard to beat Forth systems. You can type in PlanckForth (https://github.com/nineties/planckforth) in a hex editor, i.e. it can be built from zero software, by effectively morse-coding it into bare memory.

In terms of accessibility though, I'd recommend Forthkit (https://github.com/tehologist/forthkit), Miniforth (https://compilercrim.es/bootstrap/), Sectorforth (https://github.com/cesarblum/sectorforth), Sectorlisp (https://justine.lol/sectorlisp2/), and FreeForth (https://github.com/dan4thewin/FreeForth2, which contains an inlining cross-compiler for MSP430).

The problem with Forths is that they don't seem as scalable as, say, Lisp, from a social perspective. At a larger level, Project Oberon (https://projectoberon.net/) builds from the base CPU on FPGA, and A2 (https://en.wikipedia.org/wiki/A2_(operating_system)) shows what can be done to scale up.

STEPS (https://github.com/robertpfeiffer/cola/tree/master/function/...) was also supposed to do this, but the available code is rather disjointed and not really easy to follow.

Nevermark OP
Forth has been a great inspiration. A demonstration that great flexibility and low-level control can be had with very little overhead or complexity.

As you note too, Forth is also useful as a counter demonstration of how important abstractions are. Without powerful abstractions (or simple abstractions that can be composed into powerful abstractions), Forth fails to scale, most especially across a team or teams, and for any expectation of general reuse, beyond basic operations.

The first version of Forth I used I wrote myself, which is probably a common event as you point out. Forth language documentation is virtually its own design doc.
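Rolling your own is quite doable because the core of a Forth is tiny: a data stack, a dictionary of words, and a loop that either pushes numbers or executes dictionary entries. A toy sketch in Python (my own illustration, not any of the systems linked above):

```python
def forth(source, stack=None):
    """Interpret a whitespace-separated stream of Forth-style tokens."""
    stack = [] if stack is None else stack
    # The "dictionary": each word is a function acting on the data stack.
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),
        "swap": lambda s: s.extend([s.pop(), s.pop()]),
        "drop": lambda s: s.pop(),
    }
    tokens = iter(source.split())
    for token in tokens:
        if token == ":":                    # compile a new word: ": name body ;"
            name = next(tokens)
            body = []
            for t in tokens:
                if t == ";":
                    break
                body.append(t)
            # Defining a word is just adding a dictionary entry that
            # re-enters the interpreter on the recorded body.
            words[name] = lambda s, b=" ".join(body): forth(b, s)
        elif token.lstrip("-").isdigit():   # numbers are pushed onto the stack
            stack.append(int(token))
        else:                               # everything else is looked up and run
            words[token](stack)
    return stack

forth("2 3 + 4 *")              # leaves [20] on the stack
forth(": square dup * ; 7 square")  # leaves [49]
```

The colon definition is the part that matters: extending the language is just adding a dictionary entry, which is why a recognizable Forth can fit in a boot sector, as Sectorforth shows.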

Lisp is the other language I began using after buying a book and writing my own.

Thanks greatly for the links! I will be following up on those. Any insight from anywhere.
