
dgroshev
Joined 1,489 karma

  1. Indeed, I understand your reasoning; you talk about it in the podcast and in the RFD. This is why I wasn't talking about the lack of feedback, but the lack of human interaction. While there is nothing constructive to be done about the disappointment of rejection, this part is very much in your power to change, and that's why I think it's constructive feedback and not just venting.

    That said, the RFD does say this:

    > Candidates may well respond to a rejection by asking for more specific feedback; to the degree that feedback can be constructive, it should be provided.

    Even just replying with a refusal to provide feedback would still be more humane and decent.

  2. Hey fellow failed applicant!

    I had a very similar experience, except I got the automated email after two months, not three — you sound like a stronger candidate, so maybe that's why I got rejected sooner, which'd be fair enough. Still, spending about a week's worth of evenings between the suggested materials, reflecting, writing, and editing 15 pages for one job application and having zero human interaction feels uniquely degrading.

    I disagree with your point about that being fine. I think it's not good enough to replicate the bare minimum of what the rest of the industry does while asking for so much more from candidates.

    A standard, well-researched custom cover letter takes an order of magnitude less effort. When it's rejected with a cookie-cutter email by someone who spent a few seconds on the CV, that's at least understandable: the effort of writing a rejection (or replying) would be higher than the effort they spent evaluating the application.

    With Oxide, however, Bryan made a point that they "definitely read everyone's materials" [1]. That means reading, at the very least, five pages per candidate. If that's still the case, having an actual human on the other side of the rejection would add very little time to the whole process, yet the company decided to do the absolute least possible. It's a choice, and I think that choice goes against their own principle of decency:

    "We treat others with dignity, be they colleague, customer, community or competitor."

    I wish Oxide the best of luck. They have lots of very smart, very driven people I'd love to work with, and I love what they're doing. I hope this feedback helps them get better.

    [1]: https://youtu.be/wN8lcIUKZAU?t=1400

    P.S. Don't you dare, dear reader, consider the em-dash above an LLM smell.

  3. Hundreds of us!

    I adore VyOS

  4. As it happens, the commit I linked fixes a segfault, which shouldn't normally happen in memory-safe code.
  5. I'm not sure about "exquisite" and "small".

    Bun genuinely made me doubt my understanding of what good software engineering is. Just take a look at their code, here are a few examples:

    - this hand-rolled JS parser of 24k dense, memory-unsafe lines: https://github.com/oven-sh/bun/blob/c42539b0bf5c067e3d085646... (this is a version from quite a while ago to exclude LLM impact)

    - hand-rolled re-implementation of S3 directory listing that includes "parsing" XML via hard-coded substrings https://github.com/oven-sh/bun/blob/main/src/s3/list_objects...

    - MIME parsing https://github.com/oven-sh/bun/blob/main/src/http/MimeType.z...
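    To make the second example concrete, here is a hypothetical sketch (my own toy code, not Bun's) of what "parsing" XML via hard-coded substrings looks like, and why it's fragile: the moment a key contains an XML-escaped character, the raw entity leaks through unescaped.

```python
def naive_keys(xml: str) -> list[str]:
    # Extract <Key>...</Key> contents by plain substring search --
    # a stand-in for hard-coded-substring "parsing" of an XML response.
    keys, pos = [], 0
    while (start := xml.find("<Key>", pos)) != -1:
        end = xml.find("</Key>", start)
        keys.append(xml[start + len("<Key>"):end])
        pos = end
    return keys

print(naive_keys("<Contents><Key>a.txt</Key></Contents>"))
# ['a.txt'] -- fine on the happy path

# The real key is 'a&b.txt', but the entity comes through raw:
print(naive_keys("<Contents><Key>a&amp;b.txt</Key></Contents>"))
# ['a&amp;b.txt'] -- wrong without entity unescaping
```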

    It goes completely contrary to a lot of what I think is good software engineering. There is very little reuse, everything is ad-hoc, NIH-heavy, verbose, seemingly fragile (there's a lot of memory manipulation interwoven with business logic!), with relatively few tests or assurances.

    And yet it works on many levels: as a piece of software, as a project, as a business. Therefore, how can it be anything but good engineering? It fulfils its purpose.

    I can also see why it's a very good fit for LLM-heavy workflows.

  6. I mostly agree, but it also depends on the size and the shape of the fillet. Large sweeping curves that stay close to horizontal for a long distance are bad, but a tight corner can still look better in G2/G3 than just G1. On the top at least, because fillets on the bottom create sharp overhangs that don't print well.

    Also, if you have that option, filler + sanding + paint can hide the layers completely, but preserve the overall shape.
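    For anyone fuzzy on the G1/G2 distinction above: a straight segment meeting a circular arc tangentially is G1 (shared tangent direction) but not G2, because curvature jumps from 0 to 1/r at the joint, which the eye reads as a shading break. A minimal numeric sketch (hypothetical code, just illustrating the definition):

```python
import math

# Line y=0 for t<=0, then a unit-radius arc tangent to it at the origin
# (circle centred at (0, 1)). Tangents match at t=0 (G1), but curvature
# jumps from 0 to 1 there, so the joint is not G2.
def point(t):
    if t <= 0:
        return (t, 0.0)
    return (math.sin(t), 1.0 - math.cos(t))

def curvature(t, h=1e-4):
    # Discrete curvature from three nearby samples: the reciprocal of the
    # circumradius of the triangle they form, k = 4*Area / (a*b*c).
    (x0, y0), (x1, y1), (x2, y2) = point(t - h), point(t), point(t + h)
    a = math.hypot(x1 - x0, y1 - y0)
    b = math.hypot(x2 - x1, y2 - y1)
    c = math.hypot(x2 - x0, y2 - y0)
    cross = abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))  # 2*Area
    return 2 * cross / (a * b * c)

print(curvature(-0.1))  # 0.0 on the straight side
print(curvature(+0.1))  # ~1.0 on the arc side: the curvature jump
```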

  7. It seems small in absolute terms, but it's surprisingly visible, even to "normal" people, which was the entire point of making a physical object!

    I gave that object to a dozen people without explanation. Only one of them was a designer. All of them preferred G3 after comparing corners by look and touch for a few seconds. Honestly, I was surprised that it was this unanimous; I deliberately made the difference small.

  8. It's a lovely video! I linked it in the description, and I strongly recommend the other videos too.
  9. Another alternative is Rhino+Grasshopper with direct g-code generation, which allows for some wild tricks, including full colour printing: https://www.instagram.com/medium_things/

    You can read more here:

    https://controlmad.com/en/training/10h-grasshopper-g-code-fo...

    https://interactivetextbooks.tudelft.nl/rhino-grasshopper/Gr...
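    For a taste of what direct g-code generation means in practice, here's a hypothetical minimal sketch (my own toy code, not the Grasshopper definitions from the links above): emitting a spiral-vase toolpath as raw G1 moves, with Z climbing continuously instead of in discrete layers.

```python
import math

# Emit a single-wall spiral "vase mode" path: one continuous extrusion
# where Z rises smoothly each revolution. Feed rate and segment count
# are arbitrary illustration values.
def spiral_vase(radius=20.0, layer=0.2, turns=3, seg_per_turn=8):
    lines = ["G21 ; millimetres", "G90 ; absolute coordinates"]
    steps = turns * seg_per_turn
    for i in range(steps + 1):
        a = 2 * math.pi * i / seg_per_turn   # angle around the vase
        z = layer * i / seg_per_turn         # continuous Z ramp
        x, y = radius * math.cos(a), radius * math.sin(a)
        lines.append(f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f} F1800")
    return "\n".join(lines)

print(spiral_vase().splitlines()[2])  # first move: G1 X20.000 Y0.000 Z0.000 F1800
```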

  10. There is nothing stopping the industry from standardising on an alternative form of expressing consent, for example at browser installation. GDPR is agnostic about the form consent takes, as long as it's informed and freely given.

    However, by far the biggest browser is funded by a corporation that wants tracking data across the web. I'm not very surprised that the corporation hasn't made it easy to refuse just once.

    Thanks Google.

  11. No pop-ups on apple.com!
  12. The way modern CAD systems work is by having a tree of features/actions that is then used to construct an analytical representation of a 3D object. The features/actions can rely on "sketches" (2D drawings coupled with a real-time geometric constraint solver) and can be "projected" into sketches, creating new reference lines that the sketch constraint solver can then use, producing a sketch that can drive more 3D features.

    This is already complex and fiddly enough. Just having a stable 2D drawing environment that uses a constraint solver but also behaves predictably and doesn't run into numerical instability issues is already an achievement. You don't want a spline blowing up while the user is applying constraints one by one! And yet it's trivial compared to the rest of the problem.
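    The feature-tree replay described above can be caricatured in a few lines (all names made up, not any real CAD API): each feature consumes the model built so far, so editing an early feature forces a rebuild of everything downstream.

```python
from dataclasses import dataclass, field
from typing import Callable

# Toy model of a parametric feature tree: features are applied in order,
# and any edit upstream means replaying the whole chain.
@dataclass
class Feature:
    name: str
    apply: Callable[[dict], dict]  # model so far -> new model

@dataclass
class FeatureTree:
    features: list = field(default_factory=list)

    def rebuild(self) -> dict:
        model = {}
        for f in self.features:
            model = f.apply(model)  # each step sees all upstream results
        return model

tree = FeatureTree([
    Feature("sketch",  lambda m: {**m, "profile": "rect 10x20"}),
    Feature("extrude", lambda m: {**m, "solid": f"extrude({m['profile']}, 5)"}),
    Feature("fillet",  lambda m: {**m, "solid": f"fillet({m['solid']}, r=2)"}),
])
print(tree.rebuild()["solid"])  # fillet(extrude(rect 10x20, 5), r=2)
```

    In a real kernel, of course, each `apply` is analytical surface geometry rather than string building, which is where the difficulty lives.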

    Having 3D features analytically (not numerically!) interacting with each other means someone needs to write code that handles the interactions. When I click on a corner and apply a G2 fillet to it, it means that there's now a new 3D surface where every section is a spline with at least 4 control points. When I then intersect that corner with a sphere, the geometric kernel must be able to analytically represent the resulting surface (intersecting that spline-profiled surface with a sphere). If I project that surface into a sketch, the kernel needs to represent its outline from an arbitrary angle — again, analytically. Naturally, there is an explosion of special cases: that sphere might either intersect the fillet, just touch it (with a single contact point), or not touch it at all, maybe after I made some edits to the earlier features.

    Blender at its core is comparatively trivial. Polygons are just clumps of points, they can be operated on numerically. CAD is hell.

  13. I'm very skeptical that one person can make a dent. Paging through the releases, they seem to focus on constructive solid geometry and code-driven shape generation, which I believe is a dead end.

    The tricky bit is having a G2 (or even G3) fillet that intersects a complex shape built from surface patches and thickened, with both projected into a new sketch, and keeping the workflow sane if I go and adjust the original fillet. I hope one day we'll see a free (as in speech) kernel that can enable that, until then it's just Parasolid, sadly.

  14. Same, but I don't think it's possible without a large and sustained investment into a free geometric modelling kernel, which can probably be only done by a government.

    Parasolid powers practically every major CAD system. Its development started in 1986 and it's still actively developed. The amount of effort that goes into these things is immense (39 years of commercial development!) and I don't believe it can be done pro bono in someone's spare time. What's worse, with this kind of software there is no "graceful degradation": while something like a MIP solver can be useful even if it's quite a bit slower than Gurobi, a kernel that can't model complex lofts and fillets is not particularly useful.

    3D CAD is much harder than Blender and less amenable to open source development.
