I was a resident computer nerd there, so one of my projects was adapting a 5-axis CNC router to rough out these tangent rails. We were successful to some degree, but the fixturing and programming for individual tangent sections was very complex. It worked out that the CNC approach was only economical when the design of the stair required many multiples (rare in our case). Otherwise the skill and adaptability of individual craftsmen was much more efficient, and the CNC was relegated to constructing jigs or 2D pieces.
My training was in high-end artisan furniture making, and the design challenges I saw in custom stairs were way more complex than almost any other field of woodworking I am familiar with, including the more technically avant-garde furniture makers. The only clear exception is wooden boat building, which appropriately enough was the background of many of my coworkers at that company.
I feel that there is a deep similarity in the kind of problem-solving and craftsmanship that fine woodwork and software development both require. I'm convinced that some of the (minimal) skills I developed working with my father in woodworking and metalwork translate to a better ability to visualize complex systems.
Boat building is a great example of the phenomenon the OP talks about. I agree it’s particularly challenging, mostly because nothing is planar. You could spend a lifetime exploring the nuances of stitch-and-glue plywood construction, or bead and cove cedar strip, vacuum-bagging fiberglass, etc. In wooden boats, projects are measured in decades…
If they were right, it still wouldn't solve the problem-analysis problem, but it would certainly help with the parts problem. It would also help move software into a professional engineering phase.
"Data abstraction is impossible."
You can abstract an algorithm. Like, how you sort. Merge sort, bubble sort, etc. Normally it doesn't matter to the rest of the code, you just want a sorted list. You can abstract a type or an object. Hey, given some type T and some function T -> int, you can get an int.
You CAN'T abstract a chunk of data which has semantically distinct members inside of it. If someone sends you a structure that has a field in it called "name", then you have no idea what that actually means. You have to get them to tell you what they mean by "name". The hard part being that inevitably "name" means one thing unless the "aux" field is set to a number that evenly divides the "misc" field at which point "name" has to be filtered by some arcane rules.
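The contrast could be sketched in Python; the `smallest` and `measure` helpers are made-up names for illustration:

```python
from typing import Callable, TypeVar

T = TypeVar("T")

# Algorithm abstraction: the caller just wants a sorted list and
# doesn't care which sort algorithm runs underneath.
def smallest(items: list[int]) -> int:
    return sorted(items)[0]

# Type abstraction: given any T and any T -> int, we can get an int,
# without knowing anything else about T.
def measure(value: T, to_int: Callable[[T], int]) -> int:
    return to_int(value)

smallest([3, 1, 2])        # 1
measure("hello", len)      # 5

# By contrast, a chunk of data exposes fields whose meanings the
# program cannot discover on its own:
record = {"name": "smith, j.", "aux": 7, "misc": 21}
# Is "name" display-ready? Last-first? What do aux/misc change about it?
# No amount of type machinery answers that; only the sender can.
```

The first two functions genuinely don't need to know what's inside their arguments; the dict at the end is useless without an out-of-band agreement about what each field means.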
This is why we have lists like "falsehoods programmers believe about time|names|etc". Time means different things for different applications. Names mean different things in different contexts and cultures. You have to find someone to tell you what all of the individual pieces of the object's internals mean to them.
no-code will probably see some success in niche domains where things have been fixed for years. Ultimately, many applications will need hand-written code to handle all of the meaningful sub-components of the data being interacted with.
That's an amazing point. It explains where the object oriented proponents of decades past went wrong in thinking there would be vendors for "Car" or "Employee" objects.
But if you want to do something more complicated... it breaks down. Imagine trying to write software to do things with a generic "Transmission" interface. All the complicated logic would still be the responsibility of the vendor so you would be out of luck trying to come up with something more granular than "check_status()" and "repair(parts: List<Object>)" as an interface. Which is so limited as to be pointless.
You can use a Point from any vendor, without caring if it has x and y, or r and theta, or some other representation inside, as long as it correctly implements the specified interface.
Not to mention, if you want to serialize Point, then you have to know what its internals look like and what they mean. There is no data abstraction.
[Okay, so when you serialize Point, you can also serialize all the code that will ever do anything with Point. However, we generally don't do this (partially because of security issues). But it doesn't really help that much, because whatever the serialized code produces when run has to be some datatype where we understand the semantic meanings behind its internals.]
Data abstraction would be getting some arbitrary Point object and then being able to look into the internals and do the right thing. Whether it's x and y or r and theta.
Now you might say, "that's just bad programming if you do that," but the point is that oftentimes we don't have a choice in the matter, either because of the constraints of the project OR because the "do the right thing" part interacts with non-code.
So for example, if you want to send your Point over the network, then you either 1) Need to know the internals, what they mean, and how they work or 2) have to send along code for the remote device to execute to handle your custom internals. Case 2 is generally not done (and presents some security issues regardless).
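The Point example could be sketched in Python, assuming hypothetical `CartesianPoint` and `PolarPoint` classes:

```python
import json
import math
from typing import Protocol

class Point(Protocol):
    def x(self) -> float: ...
    def y(self) -> float: ...

class CartesianPoint:
    def __init__(self, x: float, y: float):
        self._x, self._y = x, y
    def x(self) -> float: return self._x
    def y(self) -> float: return self._y

class PolarPoint:
    def __init__(self, r: float, theta: float):
        self._r, self._theta = r, theta
    def x(self) -> float: return self._r * math.cos(self._theta)
    def y(self) -> float: return self._r * math.sin(self._theta)

# In-process, the interface hides the representation:
def distance_from_origin(p: Point) -> float:
    return math.hypot(p.x(), p.y())

# But to put the point on the wire we must commit to concrete
# internals -- here, arbitrarily, cartesian -- and the receiver must
# know that choice. The abstraction does not cross the boundary.
def serialize(p: Point) -> str:
    return json.dumps({"x": p.x(), "y": p.y()})
```

`distance_from_origin` works for either representation, but `serialize` has to pick one, and whoever reads the bytes on the other end has to know which one was picked.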
But this isn't just a technical issue. Names are notoriously difficult because data abstraction doesn't exist. How many first, middle, and last names does someone have? Is there a canonical ordering for the names? Does someone always have a name? What do the prefixes mean? What do the postfixes mean? These questions and more all have to do with what you're trying to use the name for AND what culture the name comes from. There is nothing you can do to abstract away the problem of names. And if you try you'll just end up with a lot of messy, complex code that breaks all the time.
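A minimal sketch of how a baked-in assumption breaks; the `split_name` helper is hypothetical:

```python
# A naive splitter that bakes in the "first name, space, last name"
# assumption common in Western-centric software:
def split_name(full_name: str) -> tuple[str, str]:
    first, last = full_name.split(" ", 1)
    return first, last

split_name("Ada Lovelace")   # ("Ada", "Lovelace") -- the imagined case

# Everything else breaks or silently mislabels:
# split_name("Björk")        # ValueError: mononyms have no space to split on
# split_name("María José Carreño Quiñones")
#                            # the given name "María José" gets chopped
```

No interface can hide this: the meaning of the parts of a name lives in the culture the name came from, not in the data structure.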
For car object: Tesla, Ford, GM
For employee object: Randstad, Kelly, Trueblue
Much better implementation than the C++ one, too.
For example suppose you have a program with configuration. You might have a "ConfigurationProvider" and abstract that with an "IConfigurationProvider" and mock implementations of that interface for unit tests.
Or you could simply define a Plain Old Data structure with fields for the configuration data. Then you can construct an instance of that data structure in different ways as appropriate.
The key idea there is: "data" is something that's already automatically decoupled from where the data came from. For code you have to do work to achieve that decoupling, and it will always leak.
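A sketch of the Plain Old Data approach in Python, with a made-up `Config` holding just a host and port:

```python
import json
import os
from dataclasses import dataclass

# Plain Old Data: no provider interface, nothing to mock.
@dataclass
class Config:
    host: str
    port: int

# Construct it however is appropriate in context:
def config_from_env() -> Config:
    return Config(host=os.environ.get("APP_HOST", "localhost"),
                  port=int(os.environ.get("APP_PORT", "8080")))

def config_from_json(text: str) -> Config:
    raw = json.loads(text)
    return Config(host=raw["host"], port=raw["port"])

# Tests just build the value directly -- no mock framework required:
test_config = Config(host="example.test", port=1234)
```

The value itself carries no memory of whether it came from the environment, a file, or a test; that decoupling is what an `IConfigurationProvider` interface has to work hard (and leakily) to simulate.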
Data abstraction is hard and many people are no good at it, but it's not impossible; it can't be, because it's required.
don't think of data abstraction as one and done, think of it as a process. Maintaining (as in, maintenance-ing rather than obeying) your data abstractions as you make modifications to code is the key to clean code, it's where the real refactoring takes place.
if your brain works like mine does, it is paralyzing to work on code without data abstractions, like always thinking of a staircase as all of its component parts rather than as a staircase
Imagine if you needed to write C structs in Nginx or Postgres corresponding to all the business objects and data types they’re going to touch. Worse, imagine you need to thread them through all the implementation!
Many business applications are written that way. I think there are opportunities on the table to stop doing it.
They all do the same type of thing, but that isn't abstracting data. That's abstracting the populating of data, or the serializing of it.
However none of that is ever going to know that the name field needs to strip out the first three characters, rot 13, and then parse as a guid.
Someone still needs to know what the data you're getting means. And there's no way to shake that.
Although people do try and then we get lists like: things programmers don't understand about names.
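The arcane rule above could be sketched like this; the rule itself is just the thread's joke example, and `decode_name` is a made-up name:

```python
import codecs
import uuid

def decode_name(raw: str) -> uuid.UUID:
    # Per the (fictional) spec: strip the first three characters,
    # ROT13 the rest, then parse the result as a GUID.
    return uuid.UUID(codecs.encode(raw[3:], "rot13"))
```

No serialization framework can infer this from the field's type (it's just a string); someone has to be told the rule out-of-band and write it down in code.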
Software is half a century old, we are still using flint to produce it. The only reason it looks fancy is because it makes more sparks, and a lot faster.
If you look at the history of chemistry, mechanical engineering, or astronomy then you kind of get the impression that we're probably 150-300 years away from software development working the way everyone already imagines it working.
The Data Revolution is like going from the bronze age to the iron age.
It will get industrialized eventually, but it will take much longer and look completely different from what you expect.
If it is to happen, a serious plateauing of "progress" will need to happen all the way up the layers of the tech stack, to the point where app developers or data engineers are working with tools stable enough that previous generations would at least recognise them, if not be able to work with them.
Engineering maturity "nirvana" for software will happen not by us getting more experienced/better at it, but by "progress" stalling in all the tools we use.
Software changes a lot basically because it is easy to change. And because we as an industry value constant work towards making it even easier to keep changing - eg cloud, containers, devops, agile etc etc
In the physical world, things have stabilised and reached a kind of local optimum across the vast majority of areas. Sure there are still incremental improvements happening, and occasional revolutionary improvements in different areas. Mostly the main bits being radically changed are the bits that coincidentally touch computers downstream from the constant churn happening to software.
Also another factor is when I think of physical engineering (and I used to work in civil/structural engineering in the 90s), for nearly all work going on, the scale of the problems being solved hasn't changed. Most building sites are the same size they were, most materials are the same, some regulations have changed, but ubiquitous CAD tools etc seems to be the major change.
So maybe, when we can't make transistors smaller, and we can't make CPUs faster, and can't increase memory or storage, things will slow down. As each layer of the stack plateaus, each layer above it eventually shifts from working on new capabilities to making what it already does more efficient, and that will eventually (it will take a while) bubble all the way up. Progress slows down a lot, and us end developers and engineers end up working on a stable set of tools very much in a local (or even global) optimum. Also, maybe when/if the world's population stabilises (in decades/centuries/whenever), that might lead to an eventual limit on how much user data can be mined from people, and the scale of the data we deal with might stabilise too, even beyond the limit where we've stopped collecting more because of the capacity limits talked about above.
Heh, I started off trying to disagree with your quoted statement by saying I don't think it would ever stabilise like other disciplines. But by laying out one way it could happen, I think I may have ended up agreeing with you :)
What no-code people miss is that by the time a piece of functionality is available as a no-code widget, it's been available as a library for years. No-code solutions will never replace developers. They instead free up developer time, allowing them to create more niche or more technically challenging things.
Wix and other website builders have completely replaced developers who make static HTML pages, but those developers have moved on to making more complex things with React and canvas and WebSockets.
Business requirements grow to match these advances. Twenty years ago, an independent store was ahead of its time if it had a website. Today, a store without online shopping - or at least an online catalog with stock information - is at risk. No-code solutions will never be enough, because competitors will use no-code and also pay developers to code better functionality.
mere money? talk about overlooking a piece of remarkable complexity
Knowledge is not. Spending 20 years mastering piano will do little good if you need to build a staircase.
Shit, spend a couple years learning how to build furniture and then go build a chicken coop. There's shockingly little overlap between those two skill sets! That's not a hypothetical. There's a lot of knowledge your body learns building furniture that doesn't come into play doing carpentry.
the fungibility of money is similar to the idea that all the steps in a staircase should be the same height.
Wrong framing. "Mere money" because we virtually all have money, there are many ways to make it, and the possession of it does not encapsulate any skillset or knowledge; exchanging it for access to artifacts generated with a massive knowledge repository is like trading sand for glass.
The money quote from the article: "I’ve learned that the correct way to build a house is to design the handrail first, then design the stair, and the rest of the house will follow."
Part of the reason you don't notice the complexity of everyday things is that humans have spent thousands of years perfecting the details and methods that let you take your stairs and handrails for granted. And, like, 90% of everything else.
The McMaster-Carr catalog isn't >2" thick because they're trying to confuse you. It's that thick because everything they sell solves a different problem, one that somebody worked out the details of years ago, and we live in a world where the experience and solutions embodied in every part get mass-produced and delivered the next day for the cost of mere money.
And then there are manufactured handrail parts, which are a grotesque and simplified perversion of a properly carved handrail (discussed in the link). But at least regular people can afford a handrail for their stairs for safety too.
[0] https://www.thisiscarpentry.com/2009/07/15/drawing-a-volute/