And if you think stairbuilding is complex, you should look into handrailing [0]

The money quote from the article: "I’ve learned that the correct way to build a house is to design the handrail first, then design the stair, and the rest of the house will follow."

Part of the reason you don't notice the complexity of everyday things is that humans have spent thousands of years perfecting the details and methods that let you take your stairs and handrails for granted. And, like, 90% of everything else.

The McMaster-Carr catalog isn't >2" thick because they're trying to confuse you. It's that thick because everything they sell solves a different problem, one whose details somebody worked out years ago, and we live in a world where the experience and solutions embodied in every part get mass-produced and delivered the next day for the cost of mere money.

And then there are manufactured handrail parts, which are a grotesque and simplified perversion of a properly carved handrail (discussed in the link). But at least regular people can afford a handrail for their stairs for safety too.

[0] https://www.thisiscarpentry.com/2009/07/15/drawing-a-volute/


I spent a bit of time working at a custom stairbuilder. Handrails that smoothly curve in three dimensions, either to connect straight sections or to follow a curved stair, are called "tangent" handrails. Constructing these tangent handrails is challenging. It requires both geometrical rigor in devising the plans and an individual craftsperson with a very high level of manual/visual skill. While we would work out the curvature of the rail in CAD or on paper plans, a huge amount of implementation detail was left to a traditional woodworker, such as carving the profile of the tangent rail to create transitions pleasing to the eye and touch. Additionally, the variable structure of wood grain meant that the choice and orientation of the wood stock often had to be decided piece by piece, often with custom fixtures.

I was the resident computer nerd there, so one of my projects was adapting a 5-axis CNC router to rough out these tangent rails. We were successful to some degree, but the fixturing and programming for individual tangent sections was very complex. It worked out that the CNC approach was only economical when the design of the stair required many multiples (rare in our case). Otherwise the skill and adaptability of an individual craftsman was much more efficient, and the CNC was relegated to constructing jigs or 2D pieces.

My training was in high-end artisan furniture making, and the design challenges I saw in custom stairs were way more complex than almost any other field of woodworking I am familiar with, including the more technically avant-garde furniture makers. The only clear exception is wooden boat building, which appropriately enough was the background of many of my coworkers at that company.

I’ve been fascinated with wooden boat-building for most of my software career, and that interest has driven me towards the marine space. It’s cool to read of other people who have a cross of software/computer skills and woodworking.

I feel that there is a deep similarity in the kind of problem-solving and craftsmanship that fine woodwork and software development both require. I'm convinced that some of the (minimal) skills I developed doing woodworking and metalwork with my father translate into a better ability to visualize complex systems.

Boat building is a great example of the phenomenon the OP talks about. I agree it’s particularly challenging, mostly because nothing is planar. You could spend a lifetime exploring the nuances of stitch-and-glue plywood construction, or bead and cove cedar strip, vacuum-bagging fiberglass, etc. In wooden boats, projects are measured in decades…

The no-code people imagine that we are on the verge of living in a world where that catalog's equivalent in software is just a few months away.

If they were right, it still wouldn't solve the problem-analysis problem, but it would certainly help with the parts problem. It would also help move software into a professional engineering phase.

I've got a phrase for when I see people trying to "abstract" over certain types of problems by writing absurdly complicated code that ultimately saves negative time.

"Data abstraction is impossible."

You can abstract an algorithm. Like, how you sort. Merge sort, bubble sort, etc. Normally it doesn't matter to the rest of the code; you just want a sorted list. You can abstract a type or an object. Hey, given some type T and some function T -> int, you can get an int.
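A minimal sketch of those two kinds of abstraction, in TypeScript (the function names here are illustrative, not from the comment):

    // Abstracting an algorithm: callers only care that the list comes back
    // sorted, not whether it's merge sort or bubble sort underneath.
    function sortBy<T>(items: T[], key: (item: T) => number): T[] {
      return [...items].sort((a, b) => key(a) - key(b));
    }

    // Abstracting a type: given some T and a function T -> int,
    // you can get an int without knowing anything else about T.
    function measure<T>(value: T, toInt: (v: T) => number): number {
      return toInt(value);
    }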

You CAN'T abstract a chunk of data which has semantically distinct members inside of it. If someone sends you a structure that has a field in it called "name", then you have no idea what that actually means. You have to get them to tell you what they mean by "name". The hard part being that inevitably "name" means one thing unless the "aux" field is set to a number that evenly divides the "misc" field at which point "name" has to be filtered by some arcane rules.
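To make that concrete, here is a hedged sketch; the field names come from the comment above, but the interpretation rule is invented purely for illustration:

    // The type tells you the shape of the data, not what it means.
    interface Payload {
      name: string;
      aux: number;
      misc: number;
    }

    // Only the sender can tell you a rule like this; no generic
    // abstraction layer could have inferred it from the structure.
    function interpretName(p: Payload): string {
      if (p.aux !== 0 && p.misc % p.aux === 0) {
        return p.name.replace(/[^a-z]/gi, ""); // some arcane filtering rule
      }
      return p.name;
    }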

This is why we have lists like "falsehoods programmers believe about time|names|etc". Time means different things for different applications. Names mean different things in different contexts and cultures. You have to find someone to tell you what all of the individual pieces of the object's internals mean to them.

No-code will probably see some success in niche domains where things have been fixed for years. Ultimately, many applications will need hand-written code to handle all of the meaningful sub-components of the data being interacted with.

> "Data abstraction is impossible."

That's an amazing point. It explains where the object oriented proponents of decades past went wrong in thinking there would be vendors for "Car" or "Employee" objects.

Yeah, if we look at the real world "car" object, we've certainly over decades established some common requirements that you could call interfaces: turn signals, headlights, a rough common size, etc. We've abstracted it + restricted the interface (through laws and regulations) to the point that you can build a highway or an intersection by following well-understood principles.

But if you want to do something more complicated... it breaks down. Imagine trying to write software to do things with a generic "Transmission" interface. All the complicated logic would still be the responsibility of the vendor so you would be out of luck trying to come up with something more granular than "check_status()" and "repair(parts: List<Object>)" as an interface. Which is so limited as to be pointless.
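A rough sketch of how limited that shared interface ends up being (the method names are from the comment; the types are guesses):

    // About as granular as a cross-vendor "Transmission" interface could get:
    interface Transmission {
      check_status(): string;
      repair(parts: object[]): void;
    }
    // Anything more specific (gear ratios, clutch wear, shift logic) stays
    // vendor-internal, so the shared interface is too limited to be useful.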

No, because you can absolutely abstract over data if it is encapsulated, which is the whole trick of objects.

You can use a Point from any vendor, without caring if it has x and y, or r and theta, or some other representation inside, as long as it correctly implements the specified interface.
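For example, a minimal sketch of that kind of encapsulation, assuming a simple distance-from-origin interface (the interface itself is made up for illustration):

    interface Point {
      distanceFromOrigin(): number;
    }

    // One vendor stores x and y...
    class CartesianPoint implements Point {
      constructor(private x: number, private y: number) {}
      distanceFromOrigin(): number {
        return Math.hypot(this.x, this.y);
      }
    }

    // ...another stores r and theta, and callers can't tell the difference.
    class PolarPoint implements Point {
      constructor(private r: number, private theta: number) {}
      distanceFromOrigin(): number {
        return this.r;
      }
    }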

But very few interesting real world objects can be consistently described across different pieces of software like a Point can, which is where it ties into the article.

Yeah, very much this.

Not to mention, if you want to serialize Point, then you have to know what its internals look like and what they mean. There is no data abstraction.

[Okay, so when you serialize Point, you can also serialize all of the code that will ever do anything with Point. However, we generally don't do this (partially because of security issues). But it doesn't really help that much, because whatever the serialized code produces when run has to be some datatype where you do understand the semantic meanings behind its internals.]

It feels like what you are describing is algorithmic or type abstraction. For your example, it's something like: given some type Point which (I guess) interacts with some type Graph, you should be able to run the Plot method. Or something. And in this case, yeah, you can abstract, but it's not what I'm calling "data abstraction." [Maybe my name is just not quite right.]

Data abstraction would be getting some arbitrary Point object and then being able to look into the internals and do the right thing. Whether it's x and y or r and theta.

Now you might say, "that's just bad programming if you do that," but the point is that oftentimes we don't have a choice in the matter, either because of the constraints of the project OR because the "do the right thing" part interacts with non-code.

So for example, if you want to send your Point over the network, then you either 1) Need to know the internals, what they mean, and how they work or 2) have to send along code for the remote device to execute to handle your custom internals. Case 2 is generally not done (and presents some security issues regardless).
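A hedged sketch of case 1, where the wire format ends up exposing the representation (the JSON layout here is invented):

    // The moment you serialize, the internal representation becomes the contract.
    function serializeCartesian(x: number, y: number): string {
      return JSON.stringify({ kind: "xy", x, y });
    }

    function serializePolar(r: number, theta: number): string {
      return JSON.stringify({ kind: "polar", r, theta });
    }

    // The receiver has to know both layouts and what each field means;
    // the behavioral abstraction over Point doesn't survive the trip.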

But this isn't just a technical issue. Names are notoriously difficult because data abstraction doesn't exist. How many first, middle, and last names does someone have? Is there a canonical ordering for the names? Does someone always have a name? What do the prefixes mean? What do the postfixes mean? These questions and more all depend on what you're trying to use the name for AND what culture the name comes from. There is nothing you can do to abstract away the problem of names. And if you try, you'll just end up with a lot of messy, complex code that breaks all the time.

You may be missing the main point of the parent (no pun intended). Their proposition is that in reality the problem is not how you define a point, but that a "point" means different things to different people and thus cannot be abstracted. I deeply agree with this observation. There are exceptions of course, but my experience is that this holds in the general case.

Are you abstracting over the data or over the behaviour?

> thinking there would be vendors for "Car" or "Employee" objects.

For car object: Tesla, Ford, GM

For employee object: Randstad, Kelly, Trueblue

Much better implementation than the C++ one, too.

...and the corollary to that is turning things into data is often a much simpler alternative to abstraction.

For example suppose you have a program with configuration. You might have a "ConfigurationProvider" and abstract that with an "IConfigurationProvider" and mock implementations of that interface for unit tests.

Or you could simply define a Plain Old Data structure with fields for the configuration data. Then you can construct an instance of that data structure in different ways as appropriate.

The key idea there is: "data" is something that's already automatically decoupled from where the data came from. For code you have to do work to achieve that decoupling, and it will always leak.
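A small sketch of the contrast (the IConfigurationProvider name is from the comment above; the fields are made up):

    // The "abstraction" route: an interface, plus mocks for every test.
    interface IConfigurationProvider {
      getTimeoutMs(): number;
      getBaseUrl(): string;
    }

    // The "plain data" route: the configuration is just a value.
    interface Config {
      timeoutMs: number;
      baseUrl: string;
    }

    // Tests construct the value directly; nothing needs to be mocked.
    const testConfig: Config = { timeoutMs: 100, baseUrl: "http://localhost" };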

> "Data abstraction is impossible."

Data abstraction is hard and many people are no good at it, but it's not impossible; it can't be, because it's required.

Don't think of data abstraction as one-and-done; think of it as a process. Maintaining (as in, doing maintenance on, rather than obeying) your data abstractions as you make modifications to code is the key to clean code; it's where the real refactoring takes place.

If your brain works like mine does, it is paralyzing to work on code without data abstractions - like always thinking of a staircase as all of its component parts rather than as a staircase.

On the other hand some of the most powerful and widely used programs are those that operate on byte arrays with schemas in-band (the web) or otherwise specified at runtime (SQL).

Imagine if you needed to write C structs in Nginx or Postgres corresponding to all the business objects and data types they’re going to touch. Worse, imagine you need to thread them through all the implementation!

Many business applications are written that way. I think there are opportunities on the table to stop doing it.

So. There's json parsers, serde, row polymorphism, anon structs, macros, monads, and dictionaries.

They all do the same type of thing, but that isn't abstracting data. That's abstracting the population of data or the serializing of it.

However none of that is ever going to know that the name field needs to strip out the first three characters, rot 13, and then parse as a guid.
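For instance, a sketch of that kind of rule; the strip/rot13/GUID steps are the comment's joke example, and the helper itself is hypothetical:

    // No generic deserializer is going to discover a rule like this;
    // someone who knows what the data means has to tell you.
    function parseNameField(raw: string): string {
      const rot13 = (s: string) =>
        s.replace(/[a-z]/gi, (c) =>
          String.fromCharCode(c.charCodeAt(0) + (c.toLowerCase() <= "m" ? 13 : -13)));
      const guid = rot13(raw.slice(3)); // strip first three chars, then rot13
      const guidPattern = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
      if (!guidPattern.test(guid)) {
        throw new Error("name field did not contain a valid GUID");
      }
      return guid;
    }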

Someone still needs to know what the data you're getting means. And there's no way to shake that.

Although people do try and then we get lists like: things programmers don't understand about names.

This is why we have ontologies. They are still at an early stage of data abstraction, but it is getting there.

Well, they are going to be disappointed when they realize we haven't even figured out the catalog yet.

Software is half a century old, and we are still using flint to produce it. The only reason it looks fancy is that it makes more sparks, and a lot faster.

We're just starting to get out of the alchemy stage right now. Wise men and magicians everywhere are telling royalty that if they only pay them treasures, they will reveal the future and show them miracles. Meanwhile, it turns out that NSS has a straightforward vulnerability that everyone just somehow missed.

If you look at the history of chemistry, mechanical engineering, or astronomy then you kind of get the impression that we're probably 150-300 years away from software development working the way everyone already imagines it working.

I swear my lord, this Python oil will do miracles with your data ailments.
Lord: I doth smeared mine data with thine Python and lo, it persists in having holes and errors even unto duplicates. Guards! Execute this man!

You remind me of a quote I heard at a conference (paraphrased badly).

The Data Revolution is like going from the bronze age to the iron age.

It will get industrialized eventually, but it will take much longer and will look completely different from what you expect.

I believe we lack a solid common ontology - an agreement on what individual pieces of data mean. Without one, individual pieces of software are either incompatible or inconsistent and must be constantly built anew, resulting in all kinds of bugs.

Heck, chemistry is practically a baby compared to the 1,000+ years of engineering.

> If you look at the history of chemistry, mechanical engineering, or astronomy then you kind of get the impression that we're probably 150-300 years away from software development working the way everyone already imagines it working.

If it is to happen, a serious plateauing of "progress" will need to happen all the way up the layers of the tech stack, to the point where app developers or data engineers are working with tools stable enough that previous generations would at least recognise them, let alone be able to work with them.

Engineering maturity "nirvana" for software will happen not by us getting more experienced/better at it, but by "progress" stalling in all the tools we use.

Software changes a lot basically because it is easy to change. And because we as an industry value constant work towards making it even easier to keep changing - eg cloud, containers, devops, agile etc etc

In the physical world, things have stabilised and reached a kind of local optimum across the vast majority of areas. Sure there are still incremental improvements happening, and occasional revolutionary improvements in different areas. Mostly the main bits being radically changed are the bits that coincidentally touch computers downstream from the constant churn happening to software.

Another factor: when I think of physical engineering (and I used to work in civil/structural engineering in the 90s), for nearly all the work going on, the scale of the problems being solved hasn't changed. Most building sites are the same size they were, most materials are the same, some regulations have changed, but ubiquitous CAD tools etc. seem to be the major change.

So maybe, when we can't make transistors smaller, we can't make CPUs faster, and we can't increase memory or storage, things will slow down. As each layer up the stack plateaus, each layer above it eventually changes from working on new capabilities to making what it already does more efficient, and that will eventually (it will take a while) bubble all the way up. Progress slows down a lot, and we end developers and engineers are working on a stable set of tools very much in a local (or even global) optimum. Also, maybe when/if the world's population stabilises (in decades/centuries/whenever), that might lead to an eventual limit on how much user data can be mined from people, and the scale of the data we deal with might stabilise too, even beyond the limit where we've stopped collecting more because of the capacity limits talked about above.

Heh, I started off trying to disagree with your quoted statement by saying I don't think it would ever stabilise like other disciplines. But by laying out one way it could happen, I think I may have ended up agreeing with you :)

That catalog is here, in the form of libraries.

What no-code people miss is that by the time a piece of functionality is available as a no-code widget, it's been available as a library for years. No-code solutions will never replace developers. They instead free up developer time, allowing them to create more niche or more technically challenging things.

Wix and other website builders have completely replaced developers who make static HTML pages, but those developers have moved on to making more complex things with React and canvas and WebSockets.

Business requirements grow to match these advances. Twenty years ago, an independent store was ahead of its time if it had a website. Today, a store without online shopping - or at least an online catalog with stock information - is at risk. No-code solutions will never be enough, because competitors will use no-code and also pay developers to code better functionality.

Yeah, when most people end up working cutting hair, driving cars, carrying bricks, and so on... I feel that replacing the people copy-pasting Stack Overflow in creative ways (productive developers) is going to remain a pipe dream for quite a while.

> for the cost of mere money

mere money? talk about overlooking a piece of remarkable complexity

Money is fungible. All dollars, regardless of how you got them, are interchangeable.

Knowledge is not. Spending 20 years mastering piano will do little good if you need to build a staircase.

Shit, spend a couple years learning how to build furniture and then go build a chicken coop. There's shockingly little overlap between those two skill sets! That's not a hypothetical. There's a lot of knowledge that your body learns that doesn't come into play doing carpentry.

That money is fungible is part of the design of money; that complex/subtle consideration is one of the things that went into the construction of a monetary system. We could use non-fungible money if it seemed better.

The fungibility of money is similar to the idea that all the steps in a staircase should be the same height.

Giordano Bruno said that whatever has the most value and least cost of storage becomes money.

Some money is more fungible than other money. Crypto is a little less so: look at the DAO that tried to buy a copy of the US Constitution and failed; small-dollar contributors can't get their donations back without losing most or all of them in gas.

> mere money? talk about overlooking a piece of remarkable complexity

Wrong framing. "Mere money" because we virtually all have money, there are many ways to make it, and the possession of it does not encapsulate any skillset or knowledge; exchanging it for access to artifacts generated with a massive knowledge repository is like trading sand for glass.

Stumpy Nubs Woodworking recently put out a video about making handrails in a way very similar to how it would have been done when hand molding planes were still in common use:

https://www.youtube.com/watch?v=P6ScjHsD4mI
