
fleabitdev
369 karma

  1. The rods are only active in low-light conditions; they're fully active under the moon and stars, or partially active under a dim street light. Under normal lighting conditions, every rod is fully saturated, so they make no contribution to vision. (Some recent papers have pushed back against this orthodox model of rods and cones, but it's good enough for practical use.)

    This assumption that rods are "the luminance cells" is an easy mistake to make. It's particularly annoying that the rods have a sensitivity peak between the blue and green cones [1], so it feels like they should contribute to colour perception, but they just don't.

    [1]: https://en.wikipedia.org/wiki/Rod_cell#/media/File:Cone-abso...

  2. Protanopia and protanomaly shift luminance perception away from the longest wavelengths of visible light, which causes highly-saturated red colours to appear dark or black. Deuteranopia and deuteranomaly don't have this effect. [1]

    Blue cones make little or no contribution to luminance. Red cones are sensitive across the full spectrum of visible light, but green cones have no sensitivity to the longest wavelengths [2]. Since protans don't have the "hardware" to sense long wavelengths, it's inevitable that they'd have unusual luminance perception.

    I'm not sure why deutans have such a normal luminous efficiency curve (and I can't find anything in a quick literature search), but it must involve the blue cones, because there's no way to produce that curve from the red-cone response alone.

    [1]: https://en.wikipedia.org/wiki/Luminous_efficiency_function#C...

    [2]: https://commons.wikimedia.org/wiki/File:Cone-fundamentals-wi...
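
    For a sense of scale, here are the standard Rec. 709 / sRGB relative-luminance weights. (These describe display primaries under the colour-normal luminous efficiency function, not cone responses directly, but they show the same pattern - the blue contribution is tiny.)

        // Relative luminance of a linear-light sRGB colour (Rec. 709 weights).
        // Note how little the blue channel contributes compared to green.
        function relativeLuminance(r: number, g: number, b: number): number {
            return 0.2126 * r + 0.7152 * g + 0.0722 * b;
        }

        relativeLuminance(0, 0, 1); // pure blue:  ~7% of white's luminance
        relativeLuminance(1, 0, 0); // pure red:   ~21%
        relativeLuminance(0, 1, 0); // pure green: ~72%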

  3. I'd be very interested in hearing alternative estimates, but here's my working:

    The lowest cost I could find to rent a server SSD was US$5 per TB-month, and it's often much higher. If we assume that markets are efficient (or inefficient in a way that disadvantages gaming PCs), we could stop there, halve that figure to be safe, and use US$2.50 as a conservative lower bound.

    I checked the cost of buying a machine with a 2 TB SSD rather than a 1 TB SSD; the premium varied a lot by manufacturer, but it seemed to line up with $2.50 to $5 per TB-month on a two-to-five-year upgrade cycle.

    One reason I halved the number is that some users (say, a teenager who only plays one game) might have lots of unused space in their SSD, so wasting that space doesn't directly cost them anything. However, unused storage still costs money, and the "default" or "safe" size of the SSD in a gaming PC is mostly determined by the size of games - so install size bloat may explain why that "free" space was purchased in the first place.

    > whether "marginal cost of wasted hard drive storage" is even a thing for consumers

    As long as storage has a price, use of storage will have a price :-)
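
    As a rough sketch of that amortisation (the US$120 premium for the extra terabyte is an illustrative figure, not a quoted price):

        // Amortised cost of one extra TB of SSD over an upgrade cycle.
        const extraTerabytePrice = 120;      // US$, illustrative premium for 2 TB over 1 TB
        const upgradeCycleMonths = [24, 60]; // two-to-five-year upgrade cycle

        for (const months of upgradeCycleMonths) {
            console.log(`${months} months: $${(extraTerabytePrice / months).toFixed(2)} per TB-month`);
        }
        // 24 months: $5.00 per TB-month
        // 60 months: $2.00 per TB-month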

  4. Isn't it a little reductive to look at basic infrastructure costs? I used Hetzner as a surrogate for the raw cost of bandwidth, plus overheads. If you need to serve data outside Europe, the budget tier of BunnyCDN is four times more expensive than Hetzner.

    But you might be right - in a market where the price of the same good varies by two orders of magnitude, I could believe that even the nice vendors are charging a 400% markup.

  5. In this case, the bug was 131 GB of wasted disk space after installation. Because the waste came from duplicate files, it should have had little impact on download size (unless there's a separate bug in the installer...)

    This is why the cost of the bug was so easy for the studio to ignore. An extra 131 GB of bandwidth per download would have cost Steam several million dollars over the last two years, so they might have asked the game studio to look into it.

  6. Back of the envelope, in the two years since the game was released, this single bug has wasted at least US$10,000,000 of hardware resources. That's a conservative estimate (20% of people who own the game keep it installed, the marginal cost of wasted SSD storage in a gaming PC is US$2.50 per TB per month, the install base grew linearly over time), so the true number is probably several times higher.

    In other words, the game studio externalised an eight-figure hardware cost onto their users, to avoid a five-to-six-figure engineering cost on their side.

    Data duplication can't just be banned by Steam, because it's a legitimate optimisation in some cases. The only safeguard against this sort of waste is a company culture which values software quality. I'm glad the developers fixed this bug, but it should never have been released to users in the first place.
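
    Here's a minimal sketch of that back-of-the-envelope estimate. I haven't stated the number of copies sold above, so the figure below is a hypothetical placeholder chosen to make the arithmetic concrete; the other inputs are the assumptions listed in the estimate.

        // Hypothetical sketch of the wasted-storage estimate.
        const wastedTB = 0.131;            // 131 GB of duplicate files per install
        const copiesSold = 13_000_000;     // hypothetical placeholder, not a real sales figure
        const installedFraction = 0.20;    // 20% of owners keep the game installed
        const costPerTBMonth = 2.50;       // US$ per TB-month of consumer SSD space
        const months = 24;                 // two years since release

        // Linear growth over two years means the average installed base is half the final one.
        const averageInstalls = copiesSold * installedFraction / 2;
        const wastedDollars = averageInstalls * wastedTB * costPerTBMonth * months;
        console.log(Math.round(wastedDollars)); // 10218000, i.e. roughly US$10,000,000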

  7. Experienced software developer, currently available for freelance work. I'd be happy to sign either a conventional months-long software development contract, or a short consulting contract.

        Location: UK
        Remote: Yes
        Willing to relocate: No
        Resume/CV: By request
        Email: (my username) at protonmail dot com
    
    My preferred coding style is rigorous and thoroughly documented, which tends to be a great fit for contract work. I have a track record of delivering quality results under minimal supervision.

    My specialist skills:

    - The Rust language, which has been my daily driver for more than a decade.

    - Multimedia (2D rendering, vector graphics, video, audio, libav, Web Codecs...)

    - Performance optimisation (parallel programming, SIMD, GPU acceleration, soft-realtime programming...)

    Fields in which I'm highly experienced, but below specialist level:

    - Web development, with a frontend bias (TypeScript, React, JS build systems, the Web platform, WebGL, WebGPU, Node, WebAssembly...)

    - Native development (native build systems, FFIs, low-level Win32, basic fluency in C/C++...)

    - Leadership, communication and technical writing, all learned in a previous career.

    I should also mention a modest level of experience in computer vision, greenfield R&D, game engine development, programming language development, and data compression. I'm comfortable with ubiquitous tools like Bash, Make, Git, GitHub, Docker and Figma. (Sorry for the keyword spam; you know how it is.)

    I'm currently offering a 50% discount for any contract which seems highly educational. My areas of interest include acoustics, DSP, embedded programming, Svelte, Solid, functional languages, Swift, and backend development in general.

    I can be flexible on time zones. No interest in relocating long-term, but I'd be happy to visit your team in person for a few days to break the ice.

    Thanks for reading, and I look forward to hearing from you :-)

  8. If you set out to build a practical UI from first principles using 1995 technology, I think you'd end up with something a lot like Windows 95. It's like a checklist of all the things we should be doing.

    Luminance contrast is used to create a hierarchy of importance. Most backgrounds are medium grey, so that all text and icons are low-importance by default. Text fields, dropdowns, check boxes and radio buttons are black-on-white: a subtle call to action. Window, button and scrollbar edges always include pure white or pure black. Active toggle buttons have a light grey background, sacrificing the "3D shading" metaphor in the name of contrast.

    Most colour is limited to two accents: pale yellow and navy blue. Small splashes of those colours are mixed together in icons to make them recognisable at a glance. Deactivated icons lose all colour. The grey, yellow and blue palette is highly accessible for colour-blind people, and the yellow and blue accents also occupy unique points in the luminance space (the yellow sits between white and grey, the blue sits between grey and black).

    Despite all of this restraint, the designers weren't afraid of high contrast and high saturation; white text on a navy blue background shows up very sparingly, always as a loud "YOU ARE HERE" beacon. The designers understood that navigation is more important than anything else.

    The graphics are strictly utilitarian, with no unnecessary texture or visual noise. The entire UI is typeset using just two weights of 9px MS Sans Serif. The only skeuomorphic elements are some program and folder names, a tiny resizing grip at the corner of each window, and a simple simulation of depth when push buttons are clicked. 3D edges are used to make the scene layout easier to parse, not to make it look physical or familiar.

    Related components are almost always visually grouped together, using borders, filled rectangles and negative space. (I suspect the designers would have used fewer borders and more fills if the palette of background colours had been a little larger.) Dark and light backgrounds are freely mixed in the same UI, which requires both white and black text to be present. The depth of recursion (boxes in boxes in boxes...) is fairly shallow. Homogeneous collections of components are always enclosed in a strong border and background, which enables sibling components to be displayed with no borders at all between them.

    All of these tasteful design choices were fragile, because you can only preserve them if you understand them. Windows XP made the background colours lighter, which reduced the available dynamic range of luminance cues; it tinted many backgrounds and components yellow or blue, which made chrominance more noisy; it introduced gradient fills and gradient strokes, which were less effective at grouping components; it added soft shading to icons, which made their shapes less distinct; and so on. Almost every change broke something, and so after just one major revision of the UI, most of the magic was already gone.

  9. That could be made to work by stacking a transparent OLED panel in front of a transparent LCD panel. The LCD would absorb light, and the OLED would emit light.

    I just tried to search for some examples, but I can't find any. Maybe the displays can't be made thin enough to eliminate parallax between the two images?

  10. Interesting approach. It doesn't even introduce an extra rounding error, because converting from 32-bit XYB to RGB should be similar to converting from 8-bit YUV to RGB.

    However, when decoding an 8-bit-quality image as 10-bit or 12-bit, won't this strategy just fill the two least significant bits with noise?

  11. Emulators also struggle to faithfully reproduce artwork for the Game Boy Color and the original Game Boy Advance. Those consoles used non-backlit LCD displays with a low contrast ratio, a small colour gamut, and some ghosting between frames. Many emulators just scale the console's RGB colour values to the full range of the user's monitor, which produces far too much saturation and contrast.

    It's a shame, because I really like the original, muted colour palette. Some artists produced great results within the limitations of the LCD screen. Similar to the CRT spritework in this article, it feels like a lost art.

    However, there's an interesting complication: quite a lot of Game Boy Color and Game Boy Advance developers seem to have created their game art on sRGB monitors, and then shipped those graphics without properly considering how they would appear on the LCD. Those games appear muddy and colourless on a real console - they actually look much better when emulated "incorrectly", because the two errors exactly cancel out!
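
    For illustration only, here's a toy version of the kind of correction I mean - mixing the channels into one another and compressing the value range. The coefficients below are made up for demonstration, not measurements of a real LCD.

        // Toy "LCD-style" colour correction: mix channels to cut saturation,
        // and compress the value range to cut contrast. Illustrative only.
        function toyLcdCorrection(r: number, g: number, b: number): [number, number, number] {
            const mix = (main: number, other1: number, other2: number) =>
                0.8 * main + 0.1 * other1 + 0.1 * other2;
            const compress = (v: number) => 0.05 + 0.9 * v; // raise black, lower white
            return [
                compress(mix(r, g, b)),
                compress(mix(g, r, b)),
                compress(mix(b, r, g)),
            ];
        }

        toyLcdCorrection(1, 0, 0); // saturated red becomes a duller, lower-contrast red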

  12. I'd like to see a fantasy console which comes with built-in CRT emulation. Some of the features of CRT displays seem artistically interesting, especially the steeper gamma, soft pixel edges, and lightness-dependent bloom; it would be great to see modern pixel artists explore these old techniques again.
  13. try-finally is leaky in JavaScript, in any case. If your `try` block contains an `await` point, its finaliser may never run. The browser also has the right to stop running your tab’s process partway through a JavaScript callback without running any finalisers (for example, because the computer running the browser has been struck by lightning).

    For this reason, try-finally is at best a tool for enforcing local invariants in your code. When a function like process.exit() completely retires the current JavaScript environment, there’s no harm in skipping `finally` blocks.
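
    A small sketch of the `await` case: if the awaited promise never settles, the `finally` block is simply never reached.

        // "releasing" is only logged if run() eventually settles.
        async function withCleanup(run: () => Promise<void>) {
            console.log("acquiring");
            try {
                await run();              // may suspend here forever
            } finally {
                console.log("releasing"); // skipped if run() never settles
            }
        }

        withCleanup(() => new Promise(() => {})); // a promise that never settles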

  14. I'm not an expert, but in the worst case, you might need to decode dense 4x4-pixel blocks which each depend on fully-decoded neighbouring blocks to their west, northwest, north and northeast. This would limit you to processing `frame_height * 4` pixels in parallel, which seems bad, especially for memory-intensive work. (GPUs rely on massive parallelism to hide the latency of memory accesses.)

    Motion vectors can be large (for example, 256 pixels for VP8), so you wouldn't get much extra parallelism by decoding multiple frames together.

    However, even if the worst-case performance is bad, you might see good performance in the average case. For example, you might be able to decode all of a frame's inter blocks in parallel, and that might unlock better parallel processing for intra blocks. It looks like deblocking might be highly parallel. VP9, H.265 and AV1 can optionally split each frame into independently-coded tiles, although I don't know how common that is in practice.
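
    To make the wavefront argument concrete, here's a small sketch: with west, northwest, north and northeast dependencies, decoding block (r, c) at step 2r + c satisfies every dependency, and no two blocks in the same block-row share a step - hence the ceiling of one block per block-row, or frame_height * 4 pixels in flight.

        // Wavefront schedule for 4x4 blocks with W, NW, N and NE dependencies.
        function step(r: number, c: number): number {
            return 2 * r + c;
        }

        // Sanity check: every dependency of block (5, 40) is scheduled strictly earlier.
        const deps: Array<[number, number]> = [[0, -1], [-1, -1], [-1, 0], [-1, 1]]; // W, NW, N, NE
        for (const [dr, dc] of deps) {
            console.assert(step(5 + dr, 40 + dc) < step(5, 40));
        }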

  15. Happy to hear that they've introduced video encoders and decoders based on compute shaders. The only video codecs widely supported in hardware are H.264, H.265 and AV1, so cross-platform acceleration for other codecs will be very nice to have, even if it's less efficient than fixed-function hardware. The new ProRes encoder already looks useful for a project I'm working on.

    > Only codecs specifically designed for parallelised decoding can be implemented in such a way, with more mainstream codecs not being planned for support.

    It makes sense that most video codecs aren't amenable to compute shader decoding. You need tens of thousands of threads to keep a GPU busy, and you'll struggle to get that much parallelism when you have data dependencies between frames and between tiles in the same frame.

    I wonder whether encoders might have more flexibility than decoders. Using compute shaders to encode something like VP9 (https://blogs.gnome.org/rbultje/2016/12/13/overview-of-the-v...) would be an interesting challenge.

  16. Reactivity works by replaying code when its inputs have changed. Events can make this very expensive and impractical, because to properly replay event-driven code, you'd need to replay every event it's ever received.

    When we replace an event stream with an observable variable, it's like a performance optimisation: "you can ignore all of the events which came before; here's an accumulated value which summarises the entire event stream". For example, a mouse movement event listener can often be reduced to an "is hovered" flag.

    Serialising program state to plain data isn't always easy or convenient, but it's flexible enough. Reducing all events to state almost solves the problem of impure inputs to reactive functions.

    Unfortunately, reactive functions usually have impure outputs, not just impure inputs. UI components might need to play a sound, write to a file, start an animation, perform an HTTP request, or notify a parent component that the "close" button has been clicked. It's really difficult to produce instantaneous side effects if you don't have instantaneous inputs to build on.

    I can't see an obvious solution, but until we come up with one, reactive UI toolkits will continue to be ill-formed. For example, a React component <ClickCounter mouseButton> would be broken by default: clicks are delivered by events, which are invisible to React, so the component will display an incorrect click count when the mouseButton prop changes.
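
    Here's a hypothetical sketch of that failure (component and prop names invented for illustration): the click count accumulates outside React's data flow, so when the mouseButton prop changes, React has no way to recompute the count for the new button - it can only keep whatever number has built up so far.

        // Hypothetical broken-by-default component: counts clicks of one mouse button.
        // React can re-run the render when `mouseButton` changes, but it can't replay
        // the click events that arrived earlier, so the displayed count is wrong.
        import { useEffect, useState } from "react";

        function ClickCounter({ mouseButton }: { mouseButton: number }) {
            const [count, setCount] = useState(0);
            useEffect(() => {
                const onClick = (event: MouseEvent) => {
                    if (event.button === mouseButton) setCount(c => c + 1);
                };
                window.addEventListener("mousedown", onClick);
                return () => window.removeEventListener("mousedown", onClick);
            }, [mouseButton]);
            return <span>{count} clicks</span>;
        }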

  17. Interesting fact: the 50ms to 100ms grace period only works at the very beginning of a user interaction. You get that grace period when the user clicks a button, but when they're typing in text, continually scrolling, clicking to interrupt an animation, or moving the mouse to trigger a hover event, it's better to provide a next-frame response.

    This means that it's safe for background work to block a web browser's main thread for up to 50ms, as long as you use CSS for all of your animations and hover effects, and stop launching new background tasks while the user is interacting with the document. https://web.dev/articles/optimize-long-tasks
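
    A minimal sketch of that pattern (the 45ms budget and the task-queue shape are my own assumptions): do background work in sub-50ms slices, and yield to the event loop between slices so a new interaction is never kept waiting for long.

        // Drain queued background work in short slices, yielding between slices
        // so that pending input events can be handled first.
        async function drainInSlices(tasks: Array<() => void>, budgetMs = 45) {
            let deadline = performance.now() + budgetMs;
            for (const task of tasks) {
                task();
                if (performance.now() >= deadline) {
                    await new Promise(resolve => setTimeout(resolve, 0)); // yield
                    deadline = performance.now() + budgetMs;
                }
            }
        }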

  18. I was also surprised to read this, because Linear has always felt a little sluggish to me.

    I just profiled it to double-check. On an M4 MacBook Pro, clicking between the "Inbox" and "My issues" tabs takes about 100ms to 150ms. Opening an issue, or navigating from an issue back to the list of issues, takes about 80ms. Each navigation includes one function call which blocks the main thread for 50ms - perhaps a React rendering function?

    Linear has done very good work to optimise away network activity, but their performance bottleneck has now moved elsewhere. They've already made impressive improvements over the status quo (about 500ms to 1500ms for most dynamic content), so it would be great to see them close that last gap and achieve single-frame responsiveness.

  19. I can see some real advantages to this layout. There are only two key shapes rather than twelve, so transposing at sight would become much easier. A printed stave would span sixteen semitones rather than thirteen. The hand positions for chords and scales look about as comfortable as a normal piano.

    I thought this keyboard layout might make the pianist's hand-span one tone wider, but unfortunately, that wouldn't be the case. A normal piano spaces its black keys further apart than its white keys. On my digital piano, an isomorphic layout would bring the raised keys about 4mm closer together, which seems unplayable - but leaving the octave span unchanged would win those 4mm back.

    It would be much more difficult to reposition your hands without looking at them, but changing the texture of the white keys and black keys might help.

  20. I've wondered whether photorealism creates its own demand. Players spend hours in high-realism game worlds, their eyes adjust, and game worlds from ten years ago suddenly feel wrong; not just old-fashioned, but fake.

    This is also true for non-photorealistic 3D games. They benefit from high-tech effects like outline shaders, sharp shadows, anti-aliasing and LoD blending - but all of that tech is improving over time, so older efforts don't look quite right any more, and today's efforts won't look quite right in 2045.

    When a game developer decides to step off this treadmill, they usually make a retro game. I'd like to see more deliberately low-tech games which aren't retro games. If modern players think your game looks good on downlevel hardware, then it will continue to look good as hardware continues to improve - I think this is one reason why Nintendo games have so much staying power.

    This has been the norm in 2D game development for ages, but it's much more difficult in 3D. For example, if the player is ever allowed to step outdoors, you'll struggle to meet modern expectations for draw distance and pop-in - and even if your game manages to have cutting-edge draw distance for 2025, who can say whether future players will still find it convincing? The solution is to only put things in the camera frustum when you know you can draw them with full fidelity; everything in the game needs to look as good as it's ever going to look.

  21. Getting rid of subpixel AA would be a huge simplification, but quite a lot of desktop users are still on low-DPI monitors. The Firefox hardware survey [1] reports that 16% of users have a display resolution of 1366x768.

    This isn't just legacy hardware; 96dpi monitors and notebooks are still being produced today.

    [1]: https://data.firefox.com/dashboard/hardware

  22. Yes - I think it caught my attention because it was such a mystery. It was a welcome thing to hear from one of the most powerful people in the world, but it came like a bolt from the blue. As far as I know, he never revisited the topic.
  23. Last year, an interviewer asked Francis how he envisages hell. His response stayed with me: “It’s difficult to imagine it. What I would say is not a dogma of faith, but my personal thought: I like to think hell is empty; I hope it is.”
  24. Sorry, I wasn't very clear - I think that using an object pool in a GCed language is like writing code in a dialect of that language which has no allocator.
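
    A minimal sketch of what I mean (illustrative only): once every hot-path object comes from a pool like this, the code reads as if the language had no allocator at all.

        // Minimal object pool: objects are recycled by hand instead of being
        // left for the garbage collector, so the hot path never allocates.
        class Pool<T> {
            private free: T[] = [];
            constructor(private make: () => T, private reset: (value: T) => void) {}

            acquire(): T {
                return this.free.pop() ?? this.make();
            }

            release(value: T): void {
                this.reset(value);
                this.free.push(value);
            }
        }

        // Example: reusing particle objects across frames in a game loop.
        const particles = new Pool(
            () => ({ x: 0, y: 0, alive: false }),
            p => { p.x = 0; p.y = 0; p.alive = false; }
        );
        const spark = particles.acquire();
        particles.release(spark);
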
  25. It's difficult to design a language which has good usability both with and without a GC. Can users create a reference which points to the interior of an object? Does the standard library allocate? Can the language implement useful features like move semantics and destructors, when GCed objects have an indefinite lifetime?

    You'd almost end up with two languages in one. It would be interesting to see a language fully embrace that, with fast/slow language dialects which have very good interoperability. The complexity cost would be high, but if the alternative is learning two languages rather than one...

  26. When half of a team are lone wolves because they see no other way to get things done, and the other half are rules lawyers who don't care whether things get done, I think that's a clear sign of management failure. Both of these personality types show up when team members have no faith that they can efficiently work together with their teammates: the "drones" (the rules lawyers) reject the idea of working efficiently, and the "cowboys" (the lone wolves) reject the idea of working together.

    There are lots of ways for management to go wrong, but if you feel like this article describes your small business, here are some low-hanging fruit:

    - Have you sought out management coaching, or are you trying to become a good manager by trial and error?

    - Do you treat your employees as trusted, respected professionals (meaning: people who might know better than you)? Do your employees treat one another that way? Do they give you that same respect?

    - When you've made a mistake, big or small, how do you discover it? Do your employees and co-founders feel safe and secure enough to give you frequent negative feedback? Do they trust that you'll act on it?

    - Do you have the time and resources to provide good management for all of the employees that you're directly responsible for? Have you been properly hiring, promoting and delegating to spread out that workload as the team grows?

    - Leaders are just team members whose job is to produce decisions, in the same way that a software engineer's job is to produce code. Are you actually doing your job, by consistently producing high-quality decisions? Who's keeping track of that?

    - Your most impactful responsibilities are hiring, firing, promotions, setting salaries, and choosing how to balance quality against speed. Are you giving all of those decisions the care and effort which they deserve?

  27. To provide fonts for N different scripts would multiply the file size by roughly N, and there are a lot of scripts in common use:

    https://en.wikipedia.org/wiki/List_of_writing_systems#/media...

    Variable fonts would help, but I don't think they would be enough, especially if the goal is to provide a wide selection of fonts.

  28. It's a nice thought, but internationalisation would make that difficult. On my system, the Noto international font weighs about 170 MiB compressed (gzip-compressed TTF, sans and serif styles, all weights, including symbols and colour emoji).

    Even just focusing on Latin scripts, a few hundred fonts would roughly double the compressed download size for browsers and Electron apps.

    It might be possible to come up with a better compression scheme which exploits redundancy between different typefaces, but as far as I know, that tech doesn't exist yet.

