CS Professor at the University of Utah. I study web browsers, floating point, programming languages, and automated reasoning.
Website: pavpanchekha.com Email: me+hn@pavpanchekha.com
- That pretty much is how CSS works! At the most basic level, flow layout is about widths down, heights up. But this basic model doesn't let you do a lot of things some people want to do, like distributing left-over space in a container equally among children (imagine a table). So CSS added more stuff, like Flexbox, which also fundamentally works like this, though it adds a second pass.
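The "widths down, heights up" model can be sketched in a few lines. This is a hypothetical illustration, not code from any real browser engine; the `Box` class and its fields are invented for the example.

```python
# Toy sketch of flow layout: widths flow down the tree, heights flow back up.
# All names here (Box, content_height, layout) are illustrative assumptions.

class Box:
    def __init__(self, children=None, content_height=0):
        self.children = children or []
        self.content_height = content_height  # height of this box's own content
        self.width = 0
        self.height = 0

    def layout(self, available_width):
        # Pass 1 (top-down): the parent hands each child its width.
        self.width = available_width
        for child in self.children:
            child.layout(available_width)
        # Pass 2 (bottom-up): the parent's height is derived from its children.
        self.height = self.content_height + sum(c.height for c in self.children)

root = Box(children=[Box(content_height=20), Box(content_height=30)])
root.layout(800)
print(root.width, root.height)  # 800 50
```

Flexbox's extra pass would go between these two: after widths are known, left-over space is measured and redistributed among the children before heights are finalized.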
- Author here—it is from Tufte CSS. I have a blog post [1] about how floats work. It is a nice example of there being unintuitive and also more-intuitive ways to achieve things in CSS. These days I believe CSS Anchor Positioning provides a simpler way to do this, but I haven't used it yet.
- Author here. You're right that a lot of CSS's edge cases and implicit rules stem from other choices and implicit rules that maybe need to be reconsidered. But take this logic a step further. The way text with mixed font sizes is laid out is kinda weird—should we just get rid of that? Mixed Chinese-Latin text is weird (search "ideographic baseline"); should we get rid of that? In fact, variable-size characters are weird, maybe just stick to all-Chinese? I'm joking, of course, but my point isn't that a simpler system is inconceivable, just that it would be inconvenient.
- Author here. I suppose it depends on what "rely on" means, but... have you ever used CSS to center text? Did you think much at all about what happens if the zoom level is high enough and the screen size small enough that the text doesn't fit? I assume not (I don't think I'd ever thought about that before I read that part of the standard), so in that sense you were relying on this behavior. I do think that in most cases where it activates, the quirk implemented by CSS probably improves the layout.
- Author here. This specific quirk of CSS is minor, and if CSS didn't have it, things would probably be fine. But I'd guess that at least once in your life you've been browsing a website on your phone that used a really long word (or a really long line of code!) in centered text (maybe a heading), and you've scrolled right to read the whole thing. Are you sure your website doesn't have such a thing, if you have centered text somewhere?
So, yes, CSS could have fewer edge cases and workarounds---what I refer to in the post as less implicit knowledge---and then it would be simpler. But the resulting layouts would probably be worse. And a radical simplification like a constraint system would probably be even simpler and the results (I assert) would be even worse. It's fine to want a better life for browser developers, but I don't think it's unthinkable for CSS to create new edge cases and sometimes-surprising behavior if it also results in, typically, better outcomes.
- Fixed
- Author here. The problem isn't the technical challenge of writing a constraint solver. It's making sure that the resulting layout looks good, despite contradictory guidance from the designer.
Yes, a constraint solver can figure out which constraints it's violating. And for under-specification, it can produce a layout that satisfies the constraints. But the layout the constraint-solver chooses might be really bad—if all the text is placed at 0,0 the result is unreadable. And over-specified constraints might occur for some user on some weird device after deployment, when there's no developer to respond to errors.
Determining whether a set of constraints could be over- or under-specified for some set of parameters is computationally very challenging (this is what SAT and SMT solvers do, basically). But besides the computational challenge, I think it is practically very challenging—this is drawing off my experience doing this for four years—to write non-conflicting constraints for real-world designs. How would you write constraints for text wrapping around a figure? For mixed-font text lining up nicely?
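The under- and over-specification problem can be made concrete with a toy example. This is not a real layout engine or constraint solver; the constraints below are invented to illustrate the two failure modes described above.

```python
# Toy illustration: constraints on a box's left edge and width inside a
# 100-unit container. All constraints here are made-up examples.

def satisfies(left, width, constraints):
    return all(c(left, width) for c in constraints)

# Under-specified: "the box fits in the container" admits many layouts,
# including the degenerate one where everything sits at 0 with zero width.
fits = [lambda l, w: 0 <= l, lambda l, w: l + w <= 100]
print(satisfies(0, 0, fits))    # True: a solver may legally pick this
print(satisfies(10, 80, fits))  # True: this is what the designer wanted

# Over-specified: "left edge is 30", "right edge is 50", and "width is 40"
# cannot all hold at once, so no layout exists at all.
over = fits + [lambda l, w: l == 30,
               lambda l, w: l + w == 50,
               lambda l, w: w == 40]
print(any(satisfies(l, w, over)
          for l in range(101) for w in range(101)))  # False
```

Both failures are silent until some input triggers them, which is exactly the problem with shipping a constraint system to arbitrary devices.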
- Your argument can't be so strong as to imply that IPv4 should actually have used 24-bit addresses, though.
- Great reply, much appreciated. I searched a bit for a number like ~70B and didn't find one. Perhaps 36 bits wouldn't have actually worked. I do think we'd have wasted more Class As, but that's where the "minor market mechanisms" would have happened—most of the Class As did get sold and would in this universe too. Again, if the total number of internet-connected devices is now 70B, that wouldn't help; you'd still need NATs.
- It's not an argument for longer roads or can kicking? The thesis is stated at the top: with 9-bit bytes we'd avoid a couple of bad numerological coincidences (like the number of Chinese characters being vague but plausibly around 50k, or the population of the earth being small integer billions). By avoiding the coincidences we'd avoid certain problems entirely. Unicode is growing, but we're not discovering another Chinese! Nor will world population ever hit 130B, at least as it looks now.
If you think there's some equally bad coincidence, go ahead and tell me, but no one has yet. I think I do a good job of covering that in the post. (Also, it's amazing that you, and maybe everyone else, assume I know nothing except what ChatGPT told me. There are no ads on the website, it's got my name, face, and job on it, etc. I stand by what I wrote.)
- Author here. I agree that this would have a similar effect; we'd probably still end up with 36-bit or 48-bit IP addresses (though 30-bit would have been possible and bad). We'd probably end up with a transition from 24-bit to 48-bit addresses. 18-bit Unicode still seems likely. Not sure how big timestamps would end up being; 30-bit is possible and bad, but 48-bit seems more likely.
- Author here, copied from another comment above.
Actually I doubt we'd have picked 27-bit addresses. That's about 134M addresses; that's less than the US population (it's about the number of households today?) and Europe was also relevant when IPv4 was being designed. In any case, if we had chosen 27-bit addresses, we'd have hit exhaustion just a bit before the big telecom boom—a lucky coincidence, since the consumer internet would then largely have been built after another transition anyway. Transitioning from 27-bit to, I don't know, 45-bit or 99-bit or whatever we'd choose next wouldn't be as hard as the IPv6 transition today.
- Author here. The argument was that by numerological coincidence, a couple of very important numbers (world population, written characters, seconds in an epoch, and plausible process memory usage) just happen to lie right near 2^16 / 2^32. I couldn't think of equally important numbers (for a computer) near ~260k or ~64B. We just got unlucky with the choice of 8-bit bytes.
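The coincidence can be checked with quick arithmetic. The population and Unicode figures below are rough round numbers I'm assuming for illustration, not exact counts.

```python
# The "numerological coincidence" in concrete numbers. Both constants are
# rough assumed figures: world population ~8.1B, Unicode assigned code
# points ~150k.

WORLD_POP = 8_100_000_000
UNICODE_ASSIGNED = 150_000

print(2**16)  # 65536: too small for the world's character sets
print(2**18)  # 262144: comfortably holds all assigned code points
print(2**32)  # 4294967296: smaller than the world population
print(2**36)  # 68719476736: roughly 8 addresses per person

assert 2**16 < UNICODE_ASSIGNED < 2**18
assert 2**32 < WORLD_POP < 2**36
```

With 8-bit bytes, the natural 2- and 4-byte sizes land just under the important numbers; with 9-bit bytes, the natural 2- and 4-byte sizes (18 and 36 bits) land comfortably above them.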
- Author here. My argument in the OP was that we maybe would never need to transition. With 36-bit addresses we'd probably get all the people and devices to fit. While there would still be early misallocation (hell, Ford and Mercedes still hold /8s), that could probably be corrected by buying and selling addresses without having to resort to NATs and the like. An even bigger address space might be required in some kind of buzzword-bingo AI IoT VR world, but 36 bits would be about enough even with the whole world online.
- Author here. The point is that with 9-bit bytes we'd have designed IPv4 to have 36-bit addresses from the beginning. There wouldn't have been a transition.
- Author here. I kind of doubt it. Copied from a comment earlier:
I doubt we'd have picked 27-bit addresses. That's about 134M addresses; that's less than the US population (it's about the number of households today?) and Europe was also relevant when IPv4 was being designed. In any case, if we had chosen 27-bit addresses, we'd have hit exhaustion just a bit before the big telecom boom that built out most of the internet infrastructure that holds back transition today. Transitioning from 27-bit to, I don't know, 45-bit or 99-bit or whatever we'd choose next wouldn't be as hard as the IPv6 transition today.
- Author here. Really great comment; I've linked it from the OP. (Could do without the insults!) Most of the changes you point out sound... good? Maybe having fewer arbitrary limits would have sapped a few historically significant coders of their rage against the machine, but maybe it would have pulled in a few more people by being less annoying in general. On colors, I did mention that in the post but losing an alpha channel would be painful.
- Author here. Actually I doubt we'd have picked 27-bit addresses. That's about 134M addresses; that's less than the US population (it's about the number of households today?) and Europe was also relevant when IPv4 was being designed.
In any case, if we had chosen 27-bit addresses, we'd have hit exhaustion just a bit before the big telecom boom that built out most of the internet infrastructure that holds back transition today. Transitioning from 27-bit to, I don't know, 45-bit or 99-bit or whatever we'd choose next wouldn't be as hard as the IPv6 transition today.