- Advances in rocket engine design & tech to enable deep throttling
- Control algorithms for propulsive landing maturing (Google "Lars Blackmore", "GFOLD", "Mars Landing", and work through the references)
- Forward thinking and risk-taking by SpaceX to further develop tech demonstrated by earlier efforts (DC-X, Mars Landing, etc.)
Modern simulation and sensor capabilities helped, but were not the major enabling factors.
Is this basically a technical way of saying "people realized it could be done"? Like the 4-minute mile: once it was done, many people accomplished the same feat soon after. The realization that it was possible changed people's perception.
I'm sure engineers and science-fiction writers have known for a long time that it could be done.
Unfortunately then he stopped taking his dried frog pills and look where he is now...
One major reason for this is the mixing plate at the top of the combustor. Fuel and oxidizer are distributed to tiny nozzles where they mix together. The better the mixing, the more stable the burn. If you get unstable burning (e.g. momentarily better mixing in one area), it will cause a pressure disturbance which will further alter the burn in different areas of the combustion chamber. At low throttle, this can be enough to cause the engine to shut off entirely.
Fluid simulations have made a huge difference. It's now possible to throttle engines down to 5% because mixing is much more stable (manufacturing improvements in the nozzles have also helped) and combustion is more protected from pressure variations.
The extra stability also just makes it easier to control a rocket, period. Less thrust variation to confuse with drag properties, less bouncing, better sensor data.
I guess I’m trying to connect the dots on how a simulation improves the actual vehicle dynamics.
Simulation inside the engine can find resonances, show where shockwaves propagate, and show you how to build injectors (pressure, spray etc) so they are less affected by the path of reflections. Optimizing things like that smoothly along a range of velocities and pressures without a computer is not very feasible, and you need a minimum of computing power before you start converging to accurate results. The unpredictability of turbulence means low-resolution simulations will behave very differently.
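As a toy illustration of the resonances such simulations hunt for: the longitudinal acoustic modes of a combustion chamber can be roughly estimated from its length and the local speed of sound, and injector behavior near those frequencies is a candidate for instability. A minimal sketch; the 1 m chamber length and 1200 m/s hot-gas sound speed are made-up illustrative values, not data for any real engine:

```python
# Estimate longitudinal acoustic mode frequencies of a combustion chamber,
# modeled crudely as a closed-closed pipe: f_n = n * c / (2 * L).
# Chamber length and sound speed below are illustrative guesses only.

def longitudinal_modes(chamber_length_m, sound_speed_m_s, n_modes=3):
    """Return the first n_modes longitudinal resonance frequencies in Hz."""
    return [n * sound_speed_m_s / (2.0 * chamber_length_m)
            for n in range(1, n_modes + 1)]

# ~1 m chamber, ~1200 m/s sound speed in hot combustion gases (assumed)
for n, f in enumerate(longitudinal_modes(1.0, 1200.0), start=1):
    print(f"mode {n}: {f:.0f} Hz")
```

Real engines need full CFD because the modes couple to transverse/tangential directions and to the turbulent flame itself, which is exactly the part a pipe formula cannot capture.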
Modern pressure vessels can reach a 5% empty mass fraction, that's a factor of 20.
Rockets have stages; a good approximation is to stage off half your rocket to get rid of the most empty mass. This also means your first stage has to have double the thrust to lift itself plus the upper stage. Now you're at a factor of 40 just to hover.
Now you actually have to take off, usually at around 1.2 to 1.4 thrust-to-weight.
So a more realistic scenario means your rocket engine has to throttle down to roughly 2% power, while the de Laval nozzle is optimized for takeoff thrust only.
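The arithmetic above can be sketched in a few lines (the 5%, 2x, and 1.3 figures are the round numbers from the comment, not data for any particular rocket):

```python
# Back-of-the-envelope throttle requirement for hovering an empty first stage.
# Assumes: 5% structural mass fraction, staging at roughly half the stack,
# and a liftoff thrust-to-weight ratio of ~1.3.

structural_fraction = 0.05                  # empty mass / full mass
full_to_empty = 1.0 / structural_fraction   # mass ratio of 20
stage_factor = 2.0    # first stage lifts itself plus the upper stage
liftoff_twr = 1.3     # typical 1.2-1.4 thrust-to-weight at takeoff

# Engine thrust at liftoff, in units of the empty first stage's weight:
liftoff_thrust = full_to_empty * stage_factor * liftoff_twr   # 52

# To hover the empty stage you need thrust == 1 in those units:
hover_throttle = 1.0 / liftoff_thrust
print(f"required throttle: {hover_throttle:.1%}")   # ~1.9%, i.e. about 2%
```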
Minimum throttle, roughly:

- SpaceX Merlin 1D: ~40%
- Rocketdyne F-1 (Saturn V): ~70%
- Space Shuttle Main Engine (RS-25): ~67%
- Blue Origin BE-4: ~20–25%
Falcon 9 does the "hover slam", where they have to shut off the engine exactly at touchdown or the rocket starts to go back up again. Even the minimum throttle is too high for the weight of the booster at that point in flight.
So they need to "hoverslam", that is, arrive at the landing pad rapidly decelerating so that their altitude hits zero just as their speed hits zero. This was thought to be very hard, but I don't think SpaceX has lost a stage due to estimation failure there. It helps that there is significant throttle range and fairly rapid throttle response on the engines, so they can have some slack. (Plan to decelerate at 2.5g for the last ~20s or so, with the ability to do anything between ~1.5g to 4g, so you can adjust throttle based on measured landing speed.)
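The hoverslam logic amounts to a simple trigger: stopping from speed v under constant deceleration a takes a distance of v^2/(2a), so you ignite when the deceleration required to stop exactly at zero altitude climbs to your planned value, and you keep throttle margin on either side of the plan. A toy sketch, with all numbers illustrative rather than SpaceX's actual values:

```python
G = 9.81  # m/s^2

def required_decel(speed_m_s, altitude_m):
    """Constant deceleration needed to reach v = 0 exactly at h = 0."""
    return speed_m_s ** 2 / (2.0 * altitude_m)

def should_ignite(speed_m_s, altitude_m, planned_decel=2.5 * G):
    """Start the landing burn once the needed deceleration hits the plan.

    Igniting at the planned 2.5g point, with actual engine capability
    spanning roughly 1.5g-4g, leaves slack to trim throttle as measured
    speed and altitude drift from the plan during the burn.
    """
    return required_decel(speed_m_s, altitude_m) >= planned_decel

# Falling at 500 m/s, the required deceleration reaches 2.5g at an
# altitude of v^2 / (2 * 2.5g) ~ 5100 m, so ignition happens around there.
```

In practice the booster also fights drag and a changing mass, so the real guidance problem is an optimization (see the GFOLD work mentioned upthread), but the stopping-distance trigger is the core intuition.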
Their Super Heavy has more engines, allowing them to bring the TWR below 1 and enabling hovering.
Maybe disposable rocket designs lost the plot and got too overengineered and expensive? Saturn V costs seem absurd to me when the USSR was also making similar rockets, presumably far cheaper. Maybe the US defense spending model is just a poor one for developing a lean product, compared to nations and groups that absolutely must be lean to achieve anything at all.
This means that 3D-printed copper (alloy) is an amazing process and material for them. You can build the kind of structurally integrated cooling channels that the people building rockets in the '60s could only dream about, and it's not a gold-plated part that required a million labor hours to build; it's something you can just print overnight.
https://www.voxelmatters.com/wp-content/uploads/2024/08/Spac...
So now the main problem is building the hardware; there are a lot of solutions for the software part.
Before, there were no general-purpose simulators and barely usable computers (a 2 MHz computer with 2 KB of memory...), so all you could do was hardcode the path and use rather constrained algorithms.
I think there is also a distinction to be made between offline (engineering) and onboard computing resources. While onboard computers have been constrained in the past, control algorithms are typically simple to implement. Most of the heavy lifting (design & optimization of algorithms) is done in the R&D phase using HPC equipment.
Mass-produced hardware drove prices down, and availability way up, in many industries: motors, analog electronics, computers, solar panels, lithium batteries, various sensors, etc. Maybe reusable rockets, enabled by all that, are going to follow a similar trajectory as air transportation.
It would seem to me that Intel and AMD were not friendly to custom designs at that time, and MIPS was not significantly evolving.
A fast, low-power CPU that can access more than 4 GB and is friendly to customization seems to me to be a recent development.
While cool and all, this type of sim is a tiny, tiny slice of the software stack, and not the most difficult by a long shot. For one, you need software to control the actual hardware, that runs on said hardware's specific CPU(s) stack AND in sim (making an off the shelf sim a lot less useful). Orbital/newtonian physics are not trivial to implement, but they are relatively simple compared to the software that handles integration with physical components, telemetry, command, alerting, path optimization, etc. etc. The phrase "reality has a surprising amount of detail" applies here - it takes a lot of software to model complex hardware correctly, and even more to control it safely.
inb4 blue origin / DC-X did it first
This Honda landing neither went to space nor was orbital, so it was a test similar to the DC-X tests.
In the past there were not many reasons to go to space commercially, so who would have paid for it? But today there are many more use cases for sending things to space, with customers willing to pay for the service.
If you know that something can be done, and there is a potential market for such a project, it then becomes easier to get the funding. Chicken or the egg...
One thing we also need to point out is that SpaceX uses something like 80% of their yearly launches for their own communication/satellite service. This gave an incentive for that investment.
It's the same reason why, despite SpaceX throwing those things up constantly, there is really a big lag among competitors with reusable rockets. It's not that they were not able to quickly get the same tech going. They simply have less market than what SpaceX serves non-stop. So the investments are smaller, which in time means slower development.
SpaceX is a bit of a strange company, partially because they used a lot of the public funds to just throw shit at the wall and see what sticks. This resulted in them caring less if a few rockets blew up, as long as they got the data for the next one with fewer flaws. It becomes harder when there is more oversight of that money, or risk-averse investors. Then you really want to be sure that thing goes up and comes back down in one piece on the first go.
A lot of project funding is heavily based on the first or second try of something, and then (sometimes unwisely) funding is pulled if it was not a perfect success story.
Falcon 9 was based on conservative and boring technology, but it was cost-optimized before it was reusable; then reusability crushed the competition.
For that matter, Starship is boring. "Throw at the wall and see what sticks" isn't "trying a bunch of crazy stuff" but trying a bunch of low and medium risk things. For instance, development of the Space Shuttle thermal tiles was outrageously expensive and resulted in a system that was outrageously expensive to maintain. They couldn't change it because lives were at stake. With Starship they can build a thermal protection system which is 90% adequate and make little changes that get it up to 100% adequate and then look at optimizing weight, speed of reuse and all that. If some of them burn up it is just money since there won't be astronauts riding it until it is perfected.
Falcon 9 didn't have three versions of which two were obsolete. Falcon 9 didn't put optional goals on the critical path, which are now delaying and preventing commercial launches.
This is where I think the business acumen came into play. Because the govt is self-insured, it allowed SpaceX to pass the high risk off to the taxpayer. Once the tech matured, the risk was low enough to be palatable for private industry use.
And FWIW, I don’t mean that as disparaging to SpaceX, just an acknowledgment of the risk dynamics.
It cost them more than Falcon 9 development.
Same with Starlink.
This isn't Concorde
SpaceX invested in reusability long before they had any idea about their own satellite service.
> It's not that they were not able to quickly get the same tech going. They simply have less market
Blue Origin has been trying for nearly as long as SpaceX, has effectively infinite money, and doesn't care about market. Apparently having lots of money doesn't make you able to 'quickly get the same tech'.
Rocket Lab was too small and had to first grow the company in other ways. And the CEO initially didn't believe in large rockets. And their own reusability efforts, despite excellent engineers, didn't pan out to 'quickly get the same tech'.
Arianespace had enough market in theory; they just didn't want to invest money. And now that they do, they are completely failing at 'quickly getting the same tech' despite getting lots and lots of money. More money, in fact, than SpaceX used to develop the Falcon 9 initially. And at best they get some demonstrators out of it.
ULA has invested many billions in their next generation rockets, and they were absolutely not confident that they could 'quickly get the same tech'.
Tons of money has flowed into the rocket business, especially if you include Blue Origin. Japan, India, Europe, China, and the US market have all ramped up investment. And nobody has replicated what SpaceX did more than 10 years ago.
So as far as I can tell, there is exactly 0 evidence that people who can invest money can replicate the technology and the operations.
> partially because they used a lot of the public funds to just throw shit at the wall
They used all their customers' rockets to do tests after they had performed the service. Some of those rockets were bought by 'the public'. And the first reflown rockets didn't carry public payloads. Other companies could have done the same without that much investment; they just didn't care to.
What made SpaceX care less is that they were already so good at building rockets that even their non-reusable rockets were cheaper than anybody else's, even with reusable tech like legs attached. Falcon 9 was so much better than anything else that even without reusability they were profitable.
Their business didn't depend on reusability. I don't think the other rocket companies could even imagine something like that being possible.
The Space Shuttle was wrong in so many ways, not least that it was a "pickup truck" as opposed to a dedicated manned vehicle (with appropriate safety features) or a dedicated cargo vehicle. Because they couldn't do unmanned tests they were stuck with the barely reusable thermal tiles and couldn't replace them with something easier to reuse (or safer!)
Attempts at second generation reusable vehicles failed because rather than "solving reuse" they were all about single-stage to orbit (SSTO) [2] and aerospike engines and exotic composite materials that burned up the money/complexity/risk/technology budgets.
There was a report that came out towards the end of the SDI [3] phase that pointed out the path SpaceX followed with Falcon 9: make rather ordinary rockets and reuse the first stage but expend the second, because the first stage is most of the expense. They thought psychology and politics would preclude that, and that people would be seduced by SSTO, aerospikes, composites, etc.
Funny, though: out of all the design studies NASA did for the Shuttle and for heavy-lift vehicles inspired by the O'Neill colony idea, there was a 1979 sketch of a "fly-back booster" based on the Saturn V that would have basically been "Super Heavy" and that, retrospectively, could have given us Starship by 1990 or so. But no, we were committed to the Space Shuttle, because boy, was the Soviet Union intimidated by our willingness and ability to spend on senseless boondoggles!
[1] The first few times the Shuttle went up they were afraid the tiles would get damaged and something like the Columbia accident would happen; they made some minor changes to get them to stick better and stopped worrying, at least in public. It took 100 launches for a failure mode that affects 1% of launches to actually happen.
[2] https://en.wikipedia.org/wiki/Single-stage-to-orbit
[3] https://en.wikipedia.org/wiki/Strategic_Defense_Initiative (which would have required much cheaper launch)
I wonder what the STS system would have been like if the DoD's cross-range requirement hadn't been imposed.
That's a huge engineering difference, roughly like the difference between a car and a helicopter. The Falcon 9 was also 4x taller, meaning 16x more force to correct a lean. A little burp would send the rocket right back up in the air.
Really, what SpaceX did was radically different from the tests in the 90s from the rockets, to the controls, to the reusability goals. Otherwise they wouldn't have built Grasshopper.
Now, New Glenn is kind of a knockoff of the Delta Clipper, but that's a different beast.
The real friction in building a reusable rocket isn't the engineering, it's setting "let's build a reusable rocket" as a design goal, and getting a whole bunch of engineers and a whole bunch of dollars to start on that goal.
You have to start with a whiteboard sketch and board-room presentation that shows it's achievable, and then send the engineers out to refine the sketch into something worth funding, and then work for months or years to build a rocket that would be a disaster if it's not achievable.
This.
What I wanted to emphasize was how, after Bannister finally broke through the 4-minute barrier, many others did it soon after: 3 more in 1954; 4 in 1955; 3 in 1956; 5 in 1957; 4 in 1958.
* Better motors for gimballing
* Launch-thrust engines that throttle down low enough and precisely enough for landing
* Better materials to handle the stress of the flip-over maneuver etc. without added weight
* More accurate position sensors
* Better understanding and simulation of aerodynamics to develop body shape and write control algorithms.
> Launch-thrust engines that throttle down low enough and precisely enough for landing
In large part this is due to improved simulation; SpaceX made their own software: https://www.youtube.com/watch?v=ozrvfRHvYHA&t=119s
Experimentation was also a large factor: pintle injectors have been around for a long time but were not used in production rockets until SpaceX (who moved from a single pintle to an annular ring). Pintle injectors are very good for throttling.
> Better materials to handle the stress of the flip-over maneuver etc. without added weight
We're still using the same materials: good ol' Inconel and aluminum. However, 3D printing has made a pretty big difference in engines.
More rockets use carbon fiber, but that isn't exactly new, and the main parts are still the same varieties of aluminum etc. Titanium has become more common but is still pretty specialized; the increased availability was probably the biggest factor, but improved cutting tooling (alloys and coatings) and tools (bigger, faster, less vibration) have also made a big difference.
Reusable, propulsively landed stages for rockets capable of putting payloads into Earth orbit is stupendously harder. The speeds involved are like 10-100x higher than these little hops. The first stages of Falcon 9 and Starship are still the only rockets that have achieved that. Electron has only re-used a single engine.
There might be more in a year or two (New Glenn, Neutron, Starship, a Chinese one), but for now, I would call it extremely difficult, not easy.
I wouldn't say anything has fundamentally changed in the rocket coordination tech itself; it's just the private sector being able to rationalize the cost of the trials with ROI.
I mean, for example, the Apollo lander was a tail-landing rocket, and lunar landing is way fucking harder because on Earth a thick atmosphere gives you some room for error.
Most launch suppliers just make rockets single-use and write them off, because it's not like you're launching weekly. Who knows how much it costs in labor and parts to refurbish landed rockets; it's probably cheaper to just keep making new ones.
^ you know what to say in response to this; we're all in the process of finding out which one is more correct.