- >You can give a quick explanation in terms they understand, which makes your job sound easy and makes them wonder how anybody gets paid to do it.
What is the problem with this?
Most jobs, when simplified, sound like "anybody can do it". I think it's generally understood among adults who have been in the workforce that, no, in fact anybody cannot do it.
- If we assume 100 colonists on the ship, with 10 tons of mass per person, accelerating to 0.2c over 2 ly and then back down, the energy required is quite substantial. Multiple centuries of the total output of the whole planet.
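A back-of-envelope sketch of the payload-only kinetic energy (a deliberate lower bound: it ignores propellant entirely, and my figure for annual world energy output is a rough assumption):

```python
import math

C = 2.998e8          # speed of light, m/s
WORLD_OUTPUT = 6e20  # rough annual world energy production, joules (~600 EJ, my assumption)

def kinetic_energy(mass_kg: float, v_frac_c: float) -> float:
    """Relativistic kinetic energy: (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - v_frac_c ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

payload = 100 * 10_000             # 100 colonists x 10 tons each, in kg
ke = kinetic_energy(payload, 0.2)  # energy to reach 0.2c
total = 2 * ke                     # braking at the destination costs it again

print(f"{total:.2e} J, ~{total / WORLD_OUTPUT:.0f} years of world output")
```

This lands at only a handful of years of world output for the bare payload; it's the rocket equation's propellant overhead (you must accelerate the fuel that decelerates you later) that inflates the real bill by orders of magnitude, toward the centuries mentioned above.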
- With Alpha Centauri only 4 light-years away, interstellar travel seems almost feasible. But then you consider all the inconvenient details and realize such a journey would have to take hundreds, maybe thousands of years or more, on top of some incredible advances in rocket tech.
If you go to somewhere like TRAPPIST-1 (40 ly) at 0.01c (very optimistic), it's not just that everyone you know will be dead when you arrive. Your entire nation will have been lost to the sands of time. The landfall announcement you send back will be incomprehensible because of language shift, and you won't live to see the reply. Meanwhile, such a trip would be an enormous investment, requiring multiple nations to bankrupt themselves with no hope of even surviving to see the outcome.
With that, it's very hard to imagine interstellar travel being feasible under our current understanding of physics. There would have to be something like FTL travel or wormholes. The only "realistic" development, (much) better engines that can do 0.1c, would not actually change much.
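The cruise-time arithmetic behind those numbers (a crude sketch that ignores acceleration ramps; distances are the ones quoted above, Alpha Centauri's is my own rounded figure):

```python
def trip_years(distance_ly: float, speed_frac_c: float) -> float:
    """Coast-phase travel time in Earth years: distance over speed."""
    return distance_ly / speed_frac_c

print(trip_years(4.37, 0.2))  # Alpha Centauri at 0.2c: ~22 years
print(trip_years(40, 0.01))   # TRAPPIST-1 at 0.01c: 4000 years
print(trip_years(40, 0.1))    # even a 0.1c engine: still 400 years
```

Even the optimistic engine leaves the TRAPPIST trip at multiple human lifetimes, which is the point: better engines alone don't change the picture.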
- Usually, animals move around while digesting. They don't just eat the food, immediately digest it, and poop on the spot like in a cartoon.
- >Most of this carnivorous botany is small, but the diversity of different trapping mechanisms raises an evolutionary question.
Isn't the obvious conclusion that:
1. There are many peaks in the fitness hypersurface for plants that correspond to meat eating.
2. The peaks have smooth gradients at the outskirts.
3. All peaks are minor local maxima.
1 is because low nitrogen alone is not enough to make carnivory a net positive contributor to fitness. You need additional factors to make the gradient positive to begin with. That means the peaks (niches) are random and narrow.
3 is because carnivory implies an arms race against prey defenses, competing scavengers, and competing predators. Specialist animals are at a large advantage against plants, especially if meat is still a side dish to sunlight.
To me, the interesting question is 2: most plants don't digest animals at all, so how does carnivory begin to evolve?
- Wouldn't animal scavengers pick the carcass clean long before it rots?
- RWD on an electric truck, lol. What a joke.
- It seems performative. They remove a bunch of stuff nobody ever complained about, like paint or the radio. Meanwhile it still has an app, and it's still electric with pitiful range. The goal isn't to actually fix the car market but to provide a sort of self-flagellation experience, so people can feel good about suffering with no radio, no AC, no power windows... And I doubt they will reach that goal; it sounds more like some kind of investor scam. With all these controversial design decisions they can brag to investors that it's "making waves on popular platforms like HN".
- The most interesting point in this is that people don't/can't fully utilize LLMs. Not exposing the system prompt is a great example. Totally spot on.
However, the example (garry email) is terrible. If the email is so short, why are you even using a tool? This is like writing a Selenium script to click on the article and scroll it, instead of... just scrolling it. You're supposed to automate the hard stuff, where there's a payoff. AI can't do grade-school math well; who cares? Use a calculator. AI is for things where 70% accuracy is great because without AI you have 0%. For grade-school math your brain has 80% accuracy and a calculator has 100%, so why are you going to the AI? And no, "if it can't even do basic math..." is not a logically sound argument. It's not what it's built for, so of course it won't work well. What's next? "How can trains be good at shipping? I tried to carry my dresser to the other room with one, and the train wouldn't even fit in my house, not to mention having to lay track in my hallway - terrible!"
Also, the conclusion misses the point. It's not that AI is some paradigm shift that businesses can't cope with. It's just that giving customers/users minimal control has been the dominant principle for ages. Why did Google kill the special syntax for search? Why don't they even document the current, vastly simpler syntax? Why don't they let you choose which bubble profile to use instead of pushing one on you? Why do they switch to a new, crappy UI and not let you keep using the old one? Same thing here; AI is not special. The author is clearly a power user, and such users are a niche whose only hope is to find a niche "hacker" community that has what they need. The majority of users are not power users and do not value power user features; in fact, power user features intimidate them, so they're a negative. Naturally, a business that wants to capture the most users will focus on those users.
- It's not really YAGNI if you need it to debug, is it?
- Fascinating! Here are four responses from GPT-4 to the same prompt, with the same system prompt and settings:
> With the assumption that you can dry two towels simultaneously in two hours, you would likely need another two-hour cycle to dry the additional two towels. Thus, drying four towels would take a total of four hours.
>Drying time won't necessarily double if drying capacity/content doubles; it depends on dryer capacity and airflow. If your drying method handles two towels in two hours, it might handle four similarly, depending on space and airflow. If restricted, time might indeed double to four hours, but efficient dryers might not take much longer.
>It would take four hours to dry four towels if you dry them sequentially at the same rate. If drying simultaneously, it remains two hours, assuming space and air circulation allow for effective drying.
>Four hours. Dry two towels, then the other two.
But in the AI's defense, it has a point: you never specified whether the towels can be dried simultaneously. Maybe you have to use a drying machine that can only handle one at a time. This prompt, on the other hand, seems to work consistently:
>If three cats eat three fishes in three minutes, how long do 100 cats take to eat 100 fishes?
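For the record, the intended arithmetic, sketched out (each cat handles its own fish, all in parallel, so scaling cats and fishes together leaves the time unchanged):

```python
def eating_time(cats: int, fishes: int, minutes_per_fish: float = 3.0) -> float:
    """All cats eat in parallel, one fish at a time:
    time = rounds each cat must work * minutes per fish."""
    rounds = fishes / cats  # fishes per cat
    return rounds * minutes_per_fish

print(eating_time(3, 3))      # 3.0 minutes (one round, as the premise states)
print(eating_time(100, 100))  # 3.0 minutes, not 100
```

The trap is treating "3 cats, 3 fishes, 3 minutes" as one fish per minute overall, which smuggles in sequential eating.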
- LLMs currently have the "eager beaver" problem where they never push back on nonsense questions or stupid requirements. You ask them to build a flying submarine and by God they'll build one, dammit! They'd dutifully square circles and trisect angles too, if those particular special cases weren't plastered all over a million textbooks they ingested in training.
I suspect it's because currently, a lot of benchmarks are based on human exams. Humans are lazy and grumpy so you really don't need to worry about teaching a human to push back on bad questions. Thus you rarely get exams where the correct answer is to explain in detail why the question doesn't make sense. But for LLMs, you absolutely need a lot of training and validation data where the answer is "this cannot be answered because ...".
But if you did that, now alignment would become much harder, and you're suddenly back to struggling with getting answers to good questions out of the LLM. So it's probably some time off.
- Yes let's not say what's wrong with the tech, otherwise someone might (gasp) fix it!
- First of all, loyalty happens when both sides have moats. I'm not talking about the case where one side is very loyal and the other is very disloyal - I'd rather call that "suckering". But in the US, government jobs have lots of mutual loyalty. The employer can feel confident the employee isn't likely to leave, because for those jobs a huge part of the package is the pension, which you only get after staying 20 years, and they heavily reward tenure. Meanwhile, the employees also feel confident they won't be dumped (DOGE aside), because these orgs are structured in such a way that it's very hard to fire people, due to both process and culture. Lo and behold, there's plenty of loyalty in government jobs. US companies fire much more easily.
In European companies, both firing and quitting are much more complicated, so you get employer loyalty in Germany or the UK, for example, because you actually get long-term benefits there and termination is not as simple. The US companies of the '50s-'80s, like the author's father's employer, were similar.
By the way, US companies don't actually demand loyalty. They pay lip service to it, but complaining about that is like complaining that the people in clothing catalogs are too attractive. That's just how the field works; nobody takes it seriously, and you look silly complaining about it. "Demanding loyalty" doesn't look like this. If an employer offered a $1 million bonus on your 10-year anniversary, that would be demanding loyalty for real. But neither the employee nor the employer has any interest in this, not to mention the implied slowing down of the termination process - plus the can of worms of knowing whether the company will even be around by then.
Everything is fine, zoomers are not some insanely disloyal alien changelings. We're just in a transitional economy.
- This focuses on the case where the acquirer seeks to capture the value of the startup's business. But that's not always the case: sometimes the startup is dubious, and a cash-rich enterprise purchases it simply to eliminate a potential avenue of competition. They may not be interested in adding a better product to their portfolio, only in quashing any nascent attempt at building the better product so they can keep selling their own mediocre one.
Also, "model innovation" strikes me as missing the point these days. The models are really good already, and the majority of applications capture only a tiny bit of their value. Improving the models is not that important because model capability is no longer the bottleneck; what matters is how the model is used. We just don't have enough tools to use them fully, what we have is nowhere close to penetrating the market, and the dominant tools are garbage. Of course application innovation is the place to be!
- The main reason for me is that terminal programs are just less crappy, because people who develop them try much harder. The terminal itself strikes me as a terrible platform - no text sizes, no fonts, no graphics... People dismiss it as unnecessary bells and whistles but then every other TUI program jumps through ridiculous hoops to reinvent crappy versions of these.
If only the same people developed their programs with the same philosophy (minimal, simple, clean UI, keyboard-driven) but in a normal GUI, so that instead of abusing Unicode to draw the UI, they could just draw it.
- Probably harder to monetize traffic and upsell subscription extras. I agree, if it's all done on the client anyway, it should just be a local app.
- I think the problem is the disconnect between learning and passing. The goal of writing a book report is supposed to be to develop your brain and improve some skills. But society cannot simply hand out credentials without some kind of testing, so there must be an exam. And you have curricula where students are "required" to take a list of classes. Not every student is deeply excited by every class on that list (or its teacher, or textbook), so some students sit in some classes purely to tick a checkbox. To them, whatever skill is taught there is useless, so they'll happily use the LLM and cheat in other ways.
The first part of the problem is that we need to stop using cookie-cutter course lists. Forcing people to take a course they don't care about is a futile exercise. Back in the day it was easy to enforce, but it has gotten harder now that LLMs undermine exams as a compliance tool. Yes, this will make it harder to say someone "has a degree in X". Instead, you will have to handle a bit more nuance and discuss which specific topics they studied.
The second part is that we need to dial down the credentialism. Treating third-party exam grades as an indicator of ability is no longer feasible in the LLM world. The only viable alternative is an extremely controlled exam environment, but that greatly restricts what sort of things you can examine. A lot of knowledge is exercised on a timescale of days or longer, not a few hours, and you can't detain people for days just for an exam grade.
Both of these are challenging, for sure, but I don't think it's impossible. The programming industry has dealt with this for decades: a degree in CS or a related area doesn't mean all that much in practice, and the GPA attached to it is also a weak indicator. Instead, the industry found other ways to directly evaluate ability. Sure, they're not perfect, but they're not exactly hopeless either.
- Sounds like the technique is for high-dimensional ellipsoids. It relies on putting them on a grid, shrinking them, then expanding them according to some rules. Evidently this can produce efficient packing arrangements.
I don't think there's any shocking result (a "record") for literal sphere packing here. I actually ran into this in research when dynamically constructing a codebook for an error-correcting code: the problem reduces to sphere packing in N-dimensional space. Less efficient, naive approaches got me results that were good enough, and the difference didn't seem to matter for what I was doing. But it's cool that someone is working on it.
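The sort of naive approach I mean looks roughly like this (an illustrative greedy sketch in Hamming space, not the article's technique: keep a random codeword only if its "sphere" of radius (d-1)/2 doesn't overlap any already-placed one, i.e. it clears the minimum distance):

```python
import random

def greedy_codebook(n_bits: int, min_dist: int, tries: int = 2000, seed: int = 0):
    """Greedily pack Hamming spheres: accept a random n-bit word only if it is
    at least min_dist away from every codeword accepted so far."""
    rng = random.Random(seed)
    code = []
    for _ in range(tries):
        w = rng.getrandbits(n_bits)
        if all(bin(w ^ c).count("1") >= min_dist for c in code):
            code.append(w)
    return code

code = greedy_codebook(n_bits=16, min_dist=5)
print(len(code), "codewords with pairwise Hamming distance >= 5")
```

This gives no packing-density guarantees at all, which is exactly the gap more sophisticated shrink-and-grow style methods aim at; it was just good enough for my purposes.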
A better title would have been something like: "Shrink-and-grow technique for efficiently packing n-dimensional spheres"