
dekhn
31,815 karma

  1. Blender also has a steep learning curve, but you typically don't need a PhD to understand the errors (instead you just watch YouTube videos and copy what they do).

    Removing faces from an STL and adding other objects is quite straightforward. Previously, Autodesk had Meshmixer and 123D; I believe Meshmixer is still available: https://meshmixer.org/ and I found it to be great for quick editing of the type you're describing.

  2. We will never fix health care in the US. It will eventually bankrupt the nation.
  3. Prove is a strong word. There are few cases in real-world programming where you can prove anything.

    I prefer to make this probabilistic: use testing to reduce the probability that your code is incorrect, for the situations in which it is expected to be deployed. In this sense, coding and testing is much like doing experimental physics: we never really prove a theory; we just invalidate the clearly wrong ones.
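
    This view maps naturally onto randomized testing: each passing trial shrinks the probability of a latent bug for that input distribution, without ever proving correctness. A minimal sketch in Python, with a hypothetical `my_sort` standing in for the code under test:

```python
import random

def my_sort(xs):
    # Hypothetical function under test; stands in for any implementation.
    return sorted(xs)

def randomized_check(trials=1000, seed=0):
    """Each passing trial lowers the probability that my_sort is wrong
    for inputs drawn from this distribution; it proves nothing."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        out = my_sort(xs)
        assert out == sorted(xs)      # correct ordering
        assert len(out) == len(xs)    # no elements lost or added
    return trials

print(randomized_check())  # → 1000
```

    Property-based testing libraries such as Hypothesis industrialize exactly this loop, adding input shrinking and smarter generation.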

  4. You don't work for Google in SRE, do you?
  5. I read pretty much all of Varley's stuff when I was a young teen. I ended up dedicating my career to biotech and machine learning, with the hope that I could achieve some fraction of what was possible in Varley's worlds, only to learn that even basic genetic engineering was taboo at the time (early 1990s to early 2000s) and that hasn't really changed much since.
  6. Ah yes: "Congratulations! You have just completed the cycle of recapitulating the collection of processes which have brought us the present!"
  7. In general, cross compilers can do dynamic linking.
  8. Morningstar Vegan Breakfast Sausage Patties. Great with eggs.
  9. It's not just risky, it's hard to know if it really "worked" for many reasons. This is why we run double-blind, randomized controlled trials- to be convinced that the treatment "worked".
  10. Yes, I took an existing vision model that could run at realtime on my laptop, and fine-tuned it with a few hundred manually labelled images of tardigrades.

    I don't like the openflexure design at all. I mean... obviously it works for a lot of people, but I just don't want a flexure based stage. I like real 2-axis stages based on rolling bearings, basically cloning the X and Y parts of this: https://www.asiimaging.com/products/stages/xy-inverted-stage...

    UC2 is another cool project: https://openuc2.com/ but I found their approach constraining.

    Frankly I think you could just buy an inexpensive 3D printer that had open firmware, replace the extruder with an objective, a tube, and a camera, and you'd have something up and running for less money and in less time.

  11. Uh, OK. So a few decades ago a scientist I respect built his own scientific tool from parts (https://www.nature.com/articles/35073680) and I was really blown away by that idea, especially because most scientific tools are very expensive and have lots of proprietary components. I asked around at the time (~2001) and there wasn't a lot of knowledge on how to control stepper motors, assemble rigid frames, etc.

    Although my day job is running compute infra, I have a background in biophysics and I figured I could probably do something similar to Joe Derisi, but lacked the knowledge, time, and money to do this either in the lab, or at home. So the project was mostly on the backburner. I got lucky and joined a team at Google a decade ago that did Maker stuff. At some point we set up a CNC machine to automate some wood cutting projects and I realized that the machine could be adapted to be a microscope that can scan large areas (much larger than the field of view of the objective). I took a Shapeoko and replaced the cutting tool with a microscope head (using cheap objectives, cheap lens tube, and cheap camera) and demonstrated it and got some good images and lots of technical feedback.

    As I now had more time, money, and knowledge (thanks, Google!) I thought about what I could do to make scientific grade microscopes using 3d printer parts, 3d printing and inexpensive components. There are a lot of challenges, and so I've spent the past decade slowly designing and building my scope, and using it to do "interesting" things.

    At the current point, what I have is: an aluminum frame structure using inexpensive extrusion, some 3d printed junction pieces, some JLCPCB-machined aluminum parts for the 2D XY stage, inexpensive off-the-shelf lenses and an industrial vision camera, along with a few more adapter pieces, and an LED illuminator. It's about $1000 in materials, plus far more time in terms of assembly and learning process.

    What I can do: the scope easily handles scanning large fields of view (50mm x 50mm) at 10X magnification and assembles the scans into coherent full-size images (often 100,000x100,000 pixels). It can also integrate a computer vision model trained to identify animalcules (specifically tardigrades) and center the sample, allowing for tracking as the tardigrade moves about in a large petri dish. This is of interest to tardigrade scientists who want to build models of tardigrade behavior and turn them into model organisms.
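
    The centering step reduces to a small controller: convert the detection's pixel offset from the image center into a stage move. A sketch with made-up numbers (the pixel scale, deadband, and axis convention here are illustrative assumptions, not an actual calibration):

```python
def stage_correction(bbox, image_size, um_per_px, deadband_px=10):
    """Given a detector bounding box (x0, y0, x1, y1) in pixels, return the
    (dx, dy) stage move in microns that re-centers the target.
    Assumed convention: stage axes aligned with image axes; a real scope
    needs a sign/rotation calibration step."""
    x0, y0, x1, y1 = bbox
    w, h = image_size
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    ex, ey = cx - w / 2, cy - h / 2   # pixel error from image center
    if abs(ex) < deadband_px and abs(ey) < deadband_px:
        return (0.0, 0.0)             # close enough; avoid jittering the stage
    return (ex * um_per_px, ey * um_per_px)

# Target detected up and to the right of center in a 1920x1080 frame:
print(stage_correction((1100, 400, 1180, 480), (1920, 1080), um_per_px=0.5))
# → (90.0, -50.0)
```

    The deadband keeps the stage still for sub-threshold errors, which matters when the detector's bounding box wobbles a few pixels frame to frame.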

    Right now I'm working on a sub-sub-sub-project which is to replace the LED illuminator with a new design that is capable of extremely bright pulses for extremely short durations, which allows me to acquire scans much faster. I am revelling in low-level electronic design and learning the tricks of the trade, much of which is "5 minutes of soldering can save $10,000".
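
    The reason short bright pulses help: motion blur during a continuous scan is just stage speed times exposure time. A back-of-the-envelope calculation (the numbers are illustrative, not actual scan parameters):

```python
def max_pulse_s(stage_speed_um_s, um_per_px, blur_budget_px=1.0):
    """Longest illumination pulse that keeps motion blur under
    blur_budget_px while the stage moves at stage_speed_um_s.
    blur(px) = speed(um/s) * pulse(s) / scale(um/px)."""
    return blur_budget_px * um_per_px / stage_speed_um_s

# Scanning at 10 mm/s with 0.5 um pixels: the pulse must stay under 50 us.
print(max_pulse_s(10_000, 0.5))  # → 5e-05
```

    Shorter pulses need proportionally more LED current to deliver the same photon count per frame, which is what pushes the design into nontrivial driver electronics.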

    I had hoped to make this project my full-time job, but the reality is that there is not much demand for stuff like this, and if it does become your job, the focus typically shifts to convincing your leadership to fund an already existing scope designed by experts, and using that to make important discoveries (I work in pharma, which does not care about tardigrades).

    Eventually- I hope- I will retire and move on to the more challenging nanoscale projects- it turns out that while building microscopes accurate to microns with off-the-shelf hardware is fairly straightforward, getting to nanoscale involves understanding a lot of what was learned between the 1950s and now about ultra-high-precision engineering, which is much more subtle and expensive.

    Here's a sample video of tardigrade tracking- you can see the scope moving the stage to keep the "snout" centered. https://www.youtube.com/watch?v=LYaMFDjC1DQ And another, this is an empty tardigrade shell filled with eggs that are about to hatch, https://www.youtube.com/watch?v=snUQTOCHito with the first baby exiting the old shell at around 10 minutes.

  12. One of my favorite sci-fi authors. He was never really appreciated for how far ahead of his time he was.

    My personal favorite: humans are kicked off Earth by superpowerful aliens. They colonize the rest of the solar system. Genetic modification allows people to live in space as giant leaf-like organisms. Two religious cults are fighting to paint Saturn's rings completely red.

  13. Flensing trampolines are out of my budget, so it's just giblet wiggling for me.
  14. Really, all this happened before: https://www.nytimes.com/1993/01/17/us/new-presidency-boomers... The moment the Dead went from counterculture to the unofficial band of the Clinton White House was the real shift IMHO.
  15. History is made by people who reflabor the exahenge.

    I build microscopes instead of telescopes (as a hobby). I can't tell you how many times I've taken a mostly working system and stripped it down to make some important change that affects most of the design to get only a tiny incremental improvement. Sometimes that improvement makes all the difference (for example, being smart when 3d printing a piece that carries something heavy so it doesn't deflect) and sometimes it's just an itch I need to scratch. Eventually, I learned to make two: a microscope that gets built and used, and then a microscope that is a prototype. Then I'm not tempted to take the daily driver and pull the engine.

  16. How complex are you talking about? I've done animations with hundreds of elements and it's fine.
  17. No, this feature already exists.
  18. Maybe you could take the stat timings, the read timings (both from strace) and somehow instrument Python to output timing for unmarshalling code (or just instrument everything in python).

    Either way, at least on my system with cached file attributes, Python can start up in 10ms, so it's not clear whether you truly need to optimize much more than that (by identifying remaining bits to optimize), versus solving the problem another way (not statting 500 files, most of which don't exist, every time you start up).
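
    A quick way to put a number on that startup cost (the 10ms figure can be measured roughly like this; for a per-module breakdown, CPython's stock `-X importtime` flag prints cumulative microseconds per import):

```python
import subprocess
import sys
import time

def startup_time_s(runs=5):
    """Best-of-N wall-clock cost of 'python -c pass' with a warm cache.
    Taking the minimum filters out scheduler noise."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run([sys.executable, "-c", "pass"], check=True)
        best = min(best, time.perf_counter() - t0)
    return best

print(f"{startup_time_s() * 1000:.1f} ms")
```

    Running the same measurement with the working set evicted (or over NFS) is what exposes the stat-dominated case discussed above.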

  19. It's not interpreting- Python is loading the already byte compiled version. But it's also statting several files (various extensions).

    I believe in the past people have looked at putting the standard library in a zip file instead of splatted out into a bunch of files in a dirtree. In that case, I think Python would just do a few stats, find the zipfile, load the whole thing into RAM, and then index into the file.
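
    This is roughly what the standard library's zipimport machinery already enables: any zip file on sys.path is treated as an import root, so module lookup becomes one stat plus in-memory directory reads. A self-contained demonstration with a throwaway module:

```python
import os
import sys
import tempfile
import zipfile

# Pack a module into a zip and import it: one stat+read for the archive,
# then in-memory lookups, instead of one or more stats per module file.
tmp = tempfile.mkdtemp()
zip_path = os.path.join(tmp, "lib.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("greet.py", "def hello():\n    return 'hi'\n")

sys.path.insert(0, zip_path)   # zipimport handles zip entries on sys.path
import greet
print(greet.hello())  # → hi
```

    Frozen-executable tools like PyInstaller and py2exe lean on the same mechanism to ship the stdlib as a single archive.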

  20. Exactly this. The time to start Python is roughly a function of timeof(stat) * numberof(stat calls), and on a network filesystem timeof(stat) can be orders of magnitude larger than on a local filesystem.
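
    A sketch of that cost model, using a measured local stat() latency (the 500-stat count is illustrative; substitute the count from `strace -c` on a real startup):

```python
import os
import time

def stat_cost_model(path=".", per_startup_stats=500, trials=2000):
    """startup_stat_cost ≈ timeof(stat) * numberof(stat calls).
    Measures local stat latency on a warm cache; on NFS each stat can be
    a network round trip, easily 100-1000x slower."""
    t0 = time.perf_counter()
    for _ in range(trials):
        os.stat(path)
    per_stat = (time.perf_counter() - t0) / trials
    return per_stat * per_startup_stats

print(f"{stat_cost_model() * 1e3:.3f} ms of stat() per interpreter start")
```

    Plugging in a typical NFS round trip (hundreds of microseconds per stat) instead of the warm-cache local figure makes the multi-second startups people report on network homedirs unsurprising.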
