strgcmc

  1. This is why (flawed though the process may be in other ways) a company like Amazon asks "customer obsession" questions in engineering interviews: to gather data about whether the candidate appreciates this point about needing to understand user problems, and about what steps the candidate takes to learn the users' POV, to walk a mile in their shoes so to speak.

    Of course interview processes can be gamed, and the signal-to-noise ratio deserves skepticism, so nothing is perfect. But the core reason WHY that question exists as part of the interview process (at Amazon and many, many other companies too) is exactly the one you say is your "favorite".

    Also IIRC, there was some internal research done in the late 2010s or so, finding that out of hiring-assessment data gathered across thousands of interviews, the single best predictor of positive on-the-job performance for software engineers was NOT how well candidates did on coding or system-design rounds, but rather how well they did in the Customer Obsession round.

  2. I think it comes down to having some insight about the customer need and how you would solve it. Having prior experience in the same domain is helpful, but it is neither a guarantee of nor a blocker to having a customer insight (lots of people might work in a domain but have no idea how to improve it; alternatively, an outsider might see something that the "domain experts" have been overlooking).

    I just randomly happened to read the story of some surgeons asking a Formula 1 team to help improve their surgical processes, with spectacular long-term results... The F1 team had zero medical background, but they assessed the surgical processes and found huge issues: poor communication and lack of clarity, people reaching over each other to get to tools, too many people jumping to fix something like a hose coming loose (when you just need 1 person to do that 1 thing). F1 teams are very good at designing hyper-efficient, reliable processes to get complex pit stops done extremely quickly, and the surgeons benefited a lot from those process-engineering insights, even though none of it had anything specifically to do with medical/surgical domain knowledge.

    Reference: https://www.thetimes.com/sport/formula-one/article/professor...

    Anyways, back to your main question -- I find that it helps to start small... Are you someone who is good at using analogies to explain concepts in one domain to a layperson outside that domain? Or even better, at using analogies that help a domain expert from domain A instantly recognize an analogous situation or opportunity in domain B (where they are not an expert)? I personally have found a lot of benefit from being naturally curious about learning/teaching through analogies, from finding the act of making analogies a fun hobby just because, and from honing it professionally to make myself useful in cross-domain contexts.

    I think you don't need to blow this up in your head as some grand mystery with a secret cheat code that unlocks how to be a founder in an unfamiliar domain -- you can start very small, and just practice making analogies with your friends or peers. See if you can find fun ways of explaining things across domains with them (either you explain something to them with an analogy, or they explain something to you and you try to analogize it from your POV).

  3. I'm sure nobody cares, but the data I can check shows a couple of interesting observations (I won't call them conclusions; that's too strong):

    - Yes, you can find certain slices of 1.8% of customers that would represent 50%+ of revenue... But such a slice is usually pretty close to simply the top 1.8% of all accounts by spend.

    - Therefore, to support the original claim, one would essentially have to accept by definition that nearly all of the top revenue accounts are "problem gamblers" and almost no one else is... But this doesn't pass a basic smell test: population-wise there are more "poor" problem gamblers than "rich" ones (because there are a lot more poor people in general than rich ones), so it's very unlikely that the 1.8% of the total population who are problem gamblers overlap so heavily with the top 1.8% of customer accounts by revenue.
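
    To make that smell test concrete, here's a minimal simulation sketch (every number -- customer count, spend distribution, the 10x spend multiplier -- is a made-up assumption for illustration, not real operator data). Even with problem gamblers spending 10x the average, their revenue share lands well below 51%:

      # Toy smell test: heavy-tailed spend, with 1.8% of customers flagged as
      # problem gamblers who spend k times more. All inputs are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 1_000_000
      spend = rng.lognormal(mean=3.0, sigma=1.5, size=n)

      pg = rng.random(n) < 0.018        # assumed 1.8% prevalence
      k = 10                            # assumed spend multiplier
      spend[pg] *= k

      print(f"problem-gambler revenue share: {spend[pg].sum() / spend.sum():.1%}")

      # Overlap between problem gamblers and the top 1.8% of accounts by spend:
      top = np.argsort(spend)[-int(0.018 * n):]
      print(f"share of top accounts who are problem gamblers: {pg[top].mean():.1%}")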

  4. I got curious and checked your source [1] to pull the exact quote:

    "The proportion of Connecticut gambling revenue from the 1.8% of people with gambling problems ranges from 12.4% for lottery products to 51.0% for sports betting, and is 21.5% for all legalized gambling."

    Without going into details, I do have some ability to check whether these numbers actually "make sense" against real operator data. I will try to sense-check whether the data I have access to roughly aligns with this or not.

    - the "1.8% of people" being problem gamblers does seem roughly correct, per my own experience

    - But those same 1.8% being responsible for 51% of sportsbook revenue does not align with my intuition (which could be wrong! hence why I want to check further...).

    - It is absolutely true that sportsbooks have whales/VIPs/whatever-you-call-them, and the general business model is indeed one of those shapes where <10% of the customers account for >50% of the revenue (using very round, imprecise numbers). But I still don't think you can attribute 51% purely to "problem gamblers" (unless maybe you're using a non-standard definition of problem gambler?).

  5. In such scenarios (data engineering / DS / analytics is my personal background), I have learned not to underestimate the value of explicitly declaring, within Team X, that person X1 is dedicated to line L1, person X2 is dedicated to line L2, etc. (similar to your last line about embedding a person with that line of business).

    In theory, it doesn't actually "change" anything, because Team X is still stuck supporting exactly the same number of dependencies + the same volume and types of requests.

    But the benefits of explicit >>> implicit -- the clarity/certainty of knowing who-to-go-to-for-what; the avoidance of context switching, plus the ability to develop expertise/comfort in a particular domain (as opposed to the team upholding a fantasy of fungibility, where anyone can take up any piece of work at any time...); and the specificity with which you can eventually say, "hey, I need to hire more people on Team X, because you need my team for 4 projects but I only have 3 people..." -- all of that has turned out to be surprisingly valuable.

    Another way to say it: for Team X to be stretched like that initial state is probably dysfunctional, even terminally so, but it's a slow kind of decay/death. Rather than pretending it can work, pretending you can virtualize the work across people (as if people were hyper-threads in a CPU core, effortlessly switching tasks), making the allocation discrete/concrete/explicit -- nominating who-is-going-to-work-on-what-for-whom -- is actually a form of escalation. It forces the dysfunction to the surface, and forces the organization to confront a sink-or-swim moment sooner than it otherwise would have (versus just limping on, pretending you can stay on top of the muddled mess of incoming requests while treading water and drowning slowly).

    ---

    Of course, taking an accelerationist stance is itself risky, and those risks need to be managed. But for example, if the reaction to such a plan is something like, "okay, you've created clarity, but what happens if person X1 goes on vacation/gets-hit-by-bus, then L1 will get no support, right?"... That is the entire purpose/benefit of escalating/accelerating!

    In other words, Team X always had problems, but they were hidden beneath a layer of obfuscation due to the way work was being spread around implicitly... It's actually a huge improvement if you've transformed a murky/unnameable problem into something as crisp and quantifiable as a bus-factor=1 problem (which almost everyone understands more easily/intuitively).

    ---

    Maybe someday Team X could turn itself into a self-service platform, or an "X-as-a-service" offering, where the dependent teams do not need you to work with or for them, but rather just consume your outputs/services/products at arm's length. So you probably don't want to stay in this embedded or explicit "allocation" model forever.

  6. The most apt way I've read to reason about AI is to treat it like an extremely foreign, totally alien form of intelligence. Not necessarily that the models of today behave like this, but we're talking about the future, aren't we?

    Just framing your question against a backdrop of "human benevolence", and implying this is a single dimension (a scalar value that could be higher or lower), is already too biased. You assume that logic which applies to humans can be extrapolated to AI. There is not much basis for this assumption, in much the same way that there is not much basis to assume an alien sentient gas cloud from Andromeda would operate on the same morals or concept of benevolence as us.

  7. A purely technology-minded compromise on this question (i.e., how to support both the "good" and "bad" kinds of recording) is probably something along the lines of expiry -- enforcing a lack of permanence as the default (a digital-age, recording-centric version of "innocent until proven guilty", which honestly is one of the greatest inventions in the history of human legal systems). Of course, one should never make societal decisions purely from a standpoint of technological practicality.

    Since you can't be sure what is "bad"/illegal, and people will record many things anyway without thinking too much about it, the default should be auto-expiry/auto-deletion after X hours/days, unless some reason or confirmation is provided to justify persistence.

    For example, imagine we lived in a near future where AI assistants were commonplace. Imagine that recording was ubiquitous but legally mandated to default to "disappearing videos" like Snapchat, across all the major platforms (YouTube, TikTok, X, Twitch, Kick, etc.). Imagine that every day, you as a regular person doing regular things get maybe 10000 notifications of "you have been recorded in video X on platform Y, do you consent to this being persisted?". Law enforcement has to go through a judge (as with a search warrant) to file "persistence warrants", and maybe there is another channel for concerned citizens who want to persist video of a "bad guy" doing "bad things" (perhaps like an injunction against auto-deletion until a review body can look at the request).

    Obviously this would be a ton of administrative overhead, a ton of micro-decisions to be made -- which is why I mentioned the AI-assistant angle: I could tell my personal AI helper, "here are my preferences, here is when I consent to recording and when I don't... knowing my personal rules, please go and deal with the 10000 notifications I get every day, thanks".

    Of course, where there is disagreement or lack of consensus, some rules have to be developed for combining different parties' wishes. Take a recording of a child's soccer game, where 8 parents consent to persistence and 3 don't: perhaps majority rule means the persistence side wins, but the majority then has to pay the cost of API tokens for a blurring/anonymization service that protects the 3 who didn't want to be persisted. That could be a framework for handling disputed outcomes.
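
    As a toy sketch of that dispute-resolution rule (the majority-rule policy, the cost model, and all names here are my own illustrative assumptions, not a real protocol):

      # Hypothetical majority-rule adjudication: persistence wins only with a
      # strict majority, and the winners split the cost of blurring dissenters.
      from dataclasses import dataclass

      @dataclass
      class Vote:
          party: str
          consent_to_persist: bool

      def adjudicate(votes: list[Vote], blur_cost_per_party: float):
          yes = [v for v in votes if v.consent_to_persist]
          no = [v for v in votes if not v.consent_to_persist]
          if len(yes) <= len(no):
              return ("auto-delete", 0.0, [])        # default bias toward expiry
          cost_per_winner = blur_cost_per_party * len(no) / len(yes)
          return ("persist-with-blurring", cost_per_winner, [v.party for v in no])

      # The soccer-game example: 8 parents consent to persistence, 3 do not.
      votes = [Vote(f"parent{i}", i < 8) for i in range(11)]
      print(adjudicate(votes, blur_cost_per_party=2.0))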

    I'm also purposefully ignoring the edge case of a bad actor who persists the videos anyway; in short, I think the best we can do is impose civil legal penalties if an unwilling participant later finds out you kept their videos without permission.

    Anyways, I know that's all super fanciful and unrealistic in many ways, but it's the kind of compromise world-building I can imagine: it retains familiar elements of how people think about consent and legal process, while acknowledging that recording is ubiquitous and that we need sane defaults plus follow-up processes to review or adjudicate disputes later. Disputes might arise over trivial things or serious criminal matters -- a criminal won't consent to their recording being persisted, but society needs a sane way to override that, which is what judges and warrants are for: protecting rights by requiring a bar of justification to be cleared.

  8. I think they just read my first sentence and decided to take offense immediately. Shrug.

    All I meant was, I didn't want to go down the path of talking about Trump... that's a very, very dead horse to beat. I thought there were interesting elements to this person's ideas that were worth further discussion, and that could be split off from the Trump lightning rod, so I tried to do that. I generally agreed with their original ideas, and wanted to build on or respond to them without getting sucked into wasting breath on Trump (nobody benefits, regardless of whether your views lean left or right).

    I'm sure I could fix some gaps in the way I explained myself, but oh well, just another day on the internet.

  9. True of course that dollars are the end goal, but frankly it'd be better if they just took the dollars out of my pocket directly, instead of poisoning my brain first so they can trick me into handing some dollars over...

    Obviously I'm being hyperbolic, but if society survives past this phase, I think our descendants will look back and judge us for letting psychological manipulation be a valid economic process for generating dollars, in much the same way we might judge our ancestors for building up a whole industry hunting whales for oil. They might acknowledge that fuel is important and necessary to power an industrializing society, but they would mock us for not figuring out how to refine petroleum sooner, and for how silly it is to go through the tech tree of fucking whale hunting just to get some fuel.

    It is fucking silly/absurd/dangerous, that we go through the tech tree branch of psychological manipulation, just to be able to sell some ads or whatever.

  10. I think you're veering too far into politics on what was originally not a very political OP/thread, but I'll indulge you a tiny bit and also try to bring the thread back to the original theme.

    You said a lot of words that I'd boil down to this thesis: the value of "truth" is being diluted in real time across our society (via flood-the-zone strategies), and there are powerful vested interests who benefit from that dilution. When I say powerful interests, I don't mean to imply the Illuminati and Freemasons and massive conspiracies -- Trump is just some angry, senile fool with a nuclear football, who as you said has learned to reflexively use "AI" as the new "fake news" retort to information he doesn't like / wishes weren't true. But corporations benefit too.

    Google benefited tremendously from inserting itself into everyone's search habits, and squeezed some (a lot of) ad money out of being your gatekeeper to information. The new crop of AI companies (and Google and Meta and the old generation too) want to do the same thing again, but this time there's a twist: the search+ads business could spam you with low-quality results (in proto-form, starting with the popup ads of yesteryear), but it didn't necessarily attack your view of "truth" directly. In the future, you may search for a product you want to buy, and instead of being served ads related to that product, you may be served disinformation to sway your view of what is "true".

    And sure, negative advertising has always existed (one company bad-mouthing a competitor's products), but those things took time and effort/resources; once upon a time we also had such things as truth-in-advertising laws and libel laws, but those concepts seem quaint and unlikely to be enforced/supported by this administration in the US. What AI enables is "zero marginal cost" scaling of disinformation and reality distortion. And in a world where "truth" erodes, instead of a market incentive for someone to profit by being more truth-y than other market participants, I would expect the oligopolistic world we live in to conclude that devaluing truth is more profitable for all parties (a sort of implicit collusion or cartel-like effect, with companies controlling the flow of truth like OPEC controlling the flow of oil).

  11. As a manager, I am considering enforcing a rule on my team that no README in any repo should ever go stale again. It should be near-trivial for every dev to ask Claude Code to read the existing README, read/interpret the code as it currently stands, read what's changed in the PR, then update the README as necessary. This does not mean Claude will be perfect, or that engineers don't need to check that its summaries make sense (they do, and the human is always accountable for the changes at the end of the day); but it does mean that the typical laziness we are all often guilty of should be eliminated as a reason why READMEs go stale.
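
    For instance, a minimal sketch of how this could be wired into a pre-push hook or CI step (the prompt wording is my own assumption; "claude -p" is Claude Code's non-interactive print mode, but adapt the invocation to your setup):

      # Hypothetical hook: ask Claude Code (headless mode) to refresh README.md
      # against the current state of the repo. The prompt text is an assumption.
      import subprocess

      PROMPT = (
          "Read README.md, read the code as it currently stands, and read the "
          "latest diff. Update README.md wherever it has gone stale. Do not "
          "invent features; only describe what the code actually does."
      )

      subprocess.run(["claude", "-p", PROMPT], check=True)
      print("README refreshed -- now a human reviews the diff and stays accountable.")
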
  12. As a thought exercise -- assume models continue to improve, and that "using claude-code daily" is currently something you choose to do because it's useful, but not yet at the level of "absolute necessity, can't imagine work without it". What if it does reach that level of absolute necessity?

    - Is your demand inelastic at that point, if having claude-code becomes effectively required to sustain your livelihood? Does pricing keep increasing until it's 1%/5%/20%/50% of your salary (because hey, what's the alternative? if you don't pay, you won't keep up with other engineers and will just lose your job completely)?

    - But if tools like claude-code become such a necessity, wouldn't enterprises be the ones paying? Maybe, but maybe like health insurance in America (a uniquely dystopian thing), your employer pays some portion of the premiums but passes some costs to you as the employee... Tech salaries have been cushy for a while now, but we might be entering a "K-shaped" inflection point: if you are an elite OpenAI researcher, you might get a $100M+ offer from Meta; but if you are an average dev doing average enterprise CRUD, maybe your wages get suppressed, because the small cabal of LLM providers can raise prices and your company HAS to pay, which means you HAVE to bear the cost (or else what? you can quit and look for another job, but who's hiring?).

    This is a pessimistic take of course (and vastly oversimplified / too cynical). A more positive outcome might be that the increasing quality of AI/LLM options leads to a democratization of talent, or a blossoming of "solo unicorns"... Personally I have toyed with calling this something like a "techno-Amish utopia", in the sense that Amish people believe in self-sufficiency but are not wholly resistant to technology (it's actually quite clever what sorts of technology they allow themselves or not), so what if we could take that further?

    If there was a version of that Amish mentality of loosely federated, self-sufficient communities (they have newsletters! they travel to each other! but they largely feed themselves, build their own tools, fix their own fences, etc.!), where engineers plus their chosen LLM partner could launch companies from home, manage their home automation / security tech, run a high-tech small farm, live off-grid on cheap solar, use excess electricity to mine Bitcoin if they choose to, etc., then maybe there is actually a libertarian world that can arise, where we are no longer as dependent on large institutions to marshal resources, deploy capital, scale production, etc., because some of those things become more in reach for regular people in smaller communities, assisted by AI. This of course assumes the cabal of LLM model creators can be broken -- that you don't need to pay for Claude if a cheaper open-source-ish Llama-like alternative is good enough.

  13. To your last point -- I didn't say a large number of junior contractors would write good code or whatever. The change happening in the startup scene now, compared to say 10 years ago, is about lowering the barrier to an MVP and making it easier/cheaper for startups to experiment with finding product-market fit, not about "productivity" or code quality or whatever.

    We're probably just talking past each other, because the thing you care about is not the thing I care about. I am saying that it used to cost some reference benchmark of $X/idea to iterate as a startup and experiment with ideas; then it became 0.5X because gig workers or overseas contractors became more accessible and easier to work with; and now it's becoming 0.1X because of LLMs and coding agents. I am not making any argument about quality being better/good/equal, nor am I offering a conversion chart where 10 interns or 100 LLM agents equal 1 senior engineer or something... Quality is rarely (never?) the deciding factor in early pre-seed iteration, as a startup gasps and claws for something resembling traction. The cost to iterate, and the benefit of having more iterations, can both improve even while each individual iteration's quality declines.

    I'm simply saying: if I were a founder with $10k to spend on testing new ideas, I could test a helluva lot more ideas today (leveraging AI) vs what I could have done 5 years ago (using contractors), vs what I could have done 10-20 years ago (hiring FTEs just to test ideas is frankly kind of absurd, when you think about how expensive that is). I am not saying that $10k worth of Claude Code will buy me a production-grade, super fantastic, amazing, robust, scalable, elegant architecture or whatever, but it sure as heck can buy me a good-enough working prototype and help me secure a seed round. Reducing that cost of experimentation is the real revolution (and whether interns can learn or will pay off over time is a wholly orthogonal topic with no bearing on this cost-of-experimentation revolution).

  14. I do agree that "unlimited interns who don't improve much" is less practically useful than it might seem at first. OTOH, "never improve much" seems unrealistic, given the insane progress of the field in the last ~3 years (or think back 5 years and tell me who was realistically predicting tools like Claude Code to even exist by 2025).

    Also, there's a decently large subset of small startups with 1 technical founder and a team of contract labor, trying to build that first MVP or cranking out early features in a huge rush to stay alive, where yeah, cheap unlimited interns might actually be meaningfully useful, or economically more attractive than whatever they're doing now. Founders have a somewhat perverse incentive here: a CTO doesn't need to solo-code the first MVP, and also doesn't need to hand out equity or make early hires quite as early, if unlimited interns can scale that CTO's solo productivity a bit longer than in the before-times.

  15. Your comment somehow reminded me of this quote: "In a society that profits from your self doubt, liking yourself is a rebellious act." (https://www.goodreads.com/quotes/9371890-in-a-society-that-p...)

    That's not a direct response to your concern, but I think the quote applies in a parallel manner -- I've seen it used as a statement about what it means to be "punk": simply being content with yourself (meaning you don't fall victim to all the ways society preys on insecurities, or tries to sell you drugs or makeup or clothes or surgery or whatever to change yourself) is actually incredibly "punk". You don't have to dress up weird, or go out and do graffiti, or get into fights... just being content with yourself is "punk" within a capitalist/post-capitalist world.

    So, in a similar vein, I think this author is saying that "caring" is also a form of being "punk", in a world where not-caring is seemingly mainstream now. The thing is, being "punk" doesn't need an external "why" to justify it... the whole point of "punk" culture is authenticity: just being yourself is what's important, and you don't need a special reason to reject capitalist consumerism, or mainstream opiate-of-the-masses media, or to dress how you feel instead of how society thinks you should look. In that way, being "punk" is quite Buddhism-aligned actually, centering on existence and enlightenment through self-realization instead of the pursuit of external "why" reasons for doing X or Y.

    Caring is the punk thing to do, because it is who you actually are. You don't need a special reason to care, if you subscribe to any kind of "punk" mindset/philosophy about life. Don't care because it will yield better material rewards, get you laid, or whatever. Care, just because.

    At least, that's the argument... up to you if you buy it or not.

  16. But this thread here has either misinterpreted or willingly ballooned the problem up into a strawman of an unfixable culture, or a terrible company that no one engineer could possibly fix...

    The OP here basically has a simple (and common!) 3-way collaboration/communication problem:

    - OP did not get along with 1 single fellow coworker that he was assigned to work with; this coworker reportedly does not listen to reason, does not read the research or background info that OP shared, etc.

    - OP tried to seek help from a manager/lead type person, but that person was also not useful (i.e. not able to force a course-correction towards better collaboration).

    Note: OP did not actually indict his entire team, or the entire eng organization, as hopelessly useless. OP said he had a problem with 2 specific people, and asked for tips to deal with that (small!) scenario. But instead of giving "small" advice for a "small" (and again, common and usually fixable, or at least improvable) problem, both the toxic hive-mind and the HN commentators here have completely avoided the actual root issue (which isn't nearly the impossibly-large turnaround effort everyone's making it out to be)... What we have here is fundamentally an XY problem (https://xyproblem.info/): OP asked for help with X, but got advice about Y.

    EDIT: Okay, so I guess I should offer some concrete advice to OP for what I'm calling his "small" original problem -- there are usually 2 categories of options from this point: either escalate again, or try to resolve things interpersonally without escalation.

    - Escalation route: OP tried the 1st manager/tech-lead, who couldn't bring a resolution... that's... pretty common actually! So escalate 1 more level, calmly and professionally. Whether it's a skip-level director/VP, a project manager, or whichever stakeholder is appropriate in OP's context -- explain politely what steps you have tried so far, why the counter-proposal / alternative is bad or won't work, and emphasize that you are still happy to collaborate further but are currently at an impasse and need a more senior person to weigh in. Then OP needs to be prepared to "disagree and commit" if the decision doesn't go his way. NOTE: if the decision doesn't go his way, it could mean 1 of 2 things: either a more senior person brought in extra context or expertise that OP did not know about, and hence made a better decision that OP can learn to appreciate; OR everyone is an idiot and OP is the only sane person in the company. There's no reason to jump to the most negative conclusion as the only one, though I acknowledge it's possible (I just don't think it's good advice to assume the worst without even trying a simple +1 extra round of escalation... OP could at least try 1 more time).

    - Non-escalation interpersonal route: OP can find a professional way to say to the problematic coworker, "frankly, I still disagree with your approach, and it's my job to document my disagreement with our manager(s), but at the end of the day, if you insist on doing it your way, then go ahead". Sometimes the only/best way for someone to learn is to let them try and fail. This isn't callousness or retribution; it's actually a common lesson for mentors, who might otherwise struggle trying to protect their mentees from ever making a mistake or being wrong about something... an overbearing/overprotective mentor has to learn how and when to take a step back, to let a mentee try, fail, and learn-how-to-learn from their failures. Of course, OP is not this coworker's mentor, and does not need to feel obligated to assume that role; I am simply pointing out that letting someone go off and do something you disagree with can actually be an act of caring (rather than a form of not-giving-a-fuck).

  17. Probably not mistaken at the time, assuming that was a few years ago.

    Now, today? Yes, it would be a mistake to continue believing the US is somehow immune to "this kind of bullshit".

    Decisions can only be judged relative to specific points in time and the relevant context. Situations change. Looking up to America made more sense in the 80s/90s/00s, even the 10s, but by now in the 20s? There is nothing noble or pure left -- not that we were ever all that noble or pure to begin with; still, the corruption is naked and explicit now. Case in point: the $400M gift plane from Qatar.

  18. All good, v1.2.2 works fine, thank you!

  19. This is probably stupid, but in the hope of helping others by exposing my own ignorance -- I'm having trouble actually installing and running the preview... I've downloaded the preview-release duckdb binary itself, and when I try to run "duckdb -ui", I get this error:

    Extension Autoloading Error: An error occurred while trying to automatically install the required extension 'ui': Failed to download extension "ui" at URL "http://extensions.duckdb.org/0069af20ab/osx_arm64/ui.duckdb_..." (HTTP 403) Extension "ui" is an existing extension.

    Is it looking to download the preview version of the extension but getting blocked/unauthorized (hence the 403 Forbidden response)? Or is there something about the autoloading behavior that I'm supposed to disable, maybe?
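
    (In case it helps anyone else hitting this: my guess -- and it is only a guess -- is that the 'ui' extension simply hadn't been published yet for that preview build's version hash, so autoloading 403s. One thing to try is installing the extension explicitly instead of relying on autoload; a sketch via the Python client, where the core_nightly repository name is my assumption about where preview extensions would live:)

      # Hedged workaround sketch: install the 'ui' extension explicitly from the
      # nightly repository, then launch the UI (same effect as `duckdb -ui`).
      import duckdb

      con = duckdb.connect()
      con.sql("FORCE INSTALL ui FROM core_nightly")
      con.sql("LOAD ui")
      con.sql("CALL start_ui()")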

  20. Years ago, back when there was no AI biasing your interactions with other children... The entire argument is that the new generations don't have the same opportunity to form friendships in an unfettered way like you did in your childhood, because of the pervasive influence of an AI that didn't exist during your own childhood.

  21. That's very naive. AI replaces any and all "less fun" forms of interaction for children, who are too undeveloped to appreciate any goal/metric besides "fun". Meeting new, unfamiliar kids is not fun; ergo, kids will fall back to the known-safe, comfortable, fun companion that is AI.

    If you believe the argument makes enough sense to justify replacing adults, then why do you think children will still want to play with each other instead of just playing with AI? If anything, other children are MORE likely to display shitty behaviors than adults, given that, you know, they're fucking children who don't know any better.

    Have you spent any time around children?

  22. I'm sorry but your take is just too stupid to ignore, and I apologize in advance because ad hominem is not my goal.

    I'll boil this down to the simplest possible explanation of why your statement is idiotic -- children who are trained to prefer AI will never learn how to form friendships with their peers (aka other children, who won't always give perfectly timed emotional responses, or always be calm, or always say what they need to hear). Other children are not able to "rethink their behavior" yet, because they are still children.

    A world in which I ask my child, "do you want to play with Timmy next door, or stay inside and play with Alexa/Siri/etc.", and my child always prefers Alexa, is one of the most dystopian outcomes I could possibly imagine for childrearing.

    Forming friendships and human connections is a skill. Learning new skills is hard and not always fun. A soothing AI companion that always says what you want to hear is going to trap children in a dopamine loop that prevents that kind of social-skill development from ever happening.

  23. Perhaps this is closer to Vanguard's ownership model, i.e., Vanguard being an investment management company owned by its funds' shareholders?

    I definitely think there is a way to make this viable at small scale in a tech/SaaS context. But to survive and grow to larger scale, I think you basically have to ensure your business following this model is not "too profitable" or else someone will want to crush and replace you, and hoard the profits for themselves.

    It's kind of forgotten now, but it's a bit of a minor miracle that Vanguard's unusual structure survived its early days and then grew into an investing behemoth. I suppose nobody tried to kill Vanguard and steal its customers because the business model was pretty boring and the profits unsexy enough that others just let them do their thing (I mean, low-cost index funds, and the boring type of customer attracted to them? Talk about a ceiling on profits, compared to what a more adventurous fund manager could make elsewhere selling a typical 2-and-20 deal to greedier customers...).

  24. I mean, let's go further than that even... Imagine somebody invented commercially viable nuclear fusion and achieved effectively free and infinite electricity. By your logic, either of two things could happen:

    - Bitcoin becomes immediately worthless, because suddenly many actors could afford to tap infinite energy to mount a 51% attack and destroy the security of the network

    - Or, self-interested miners would try to "corner" the market, nearly infinite though it may be... Industrial players might race each other to bring more fusion power plants online dedicated to mining -- "white hats" trying to preserve Bitcoin vs "black hats" trying to destroy it, in a pure arms race... Maybe the end result is that humanity only has productive need for 5% of the total energy fusion generates (hey, it's my thought experiment, just go with it), so what do you do with the other 95% of pure excess? ... Maybe it all goes back into Bitcoin mining anyway, because maybe God loves a recursive joke.

  25. I wouldn't argue about usefulness, but Bitcoin mining is certainly more directly economically rewarding than an indirect contribution towards protein-folding progress for the field at large.

    Every 1 hr of mining at X hash rate will yield a fairly predictable (though fluctuating) $Y of output, but every 1 hr of protein folding will not necessarily yield any direct or predictable income. From the perspective of profit-maximizing energy producers, I suspect they will prefer the former, unless someone can quantify or monetize the latter more easily and directly.
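
    That predictability is easy to sketch (the inputs below are illustrative placeholders, not current network values; 3.125 BTC is the post-2024-halving block reward):

      # Expected BTC per hour = (your hashrate / network hashrate)
      #                         * blocks per hour * block reward.
      def expected_btc_per_hour(my_ths: float,
                                network_ths: float,
                                block_reward_btc: float = 3.125,
                                blocks_per_hour: float = 6.0) -> float:
          return (my_ths / network_ths) * blocks_per_hour * block_reward_btc

      # e.g. 1,000 TH/s against an assumed 600,000,000 TH/s network:
      print(f"{expected_btc_per_hour(1_000, 600_000_000):.8f} BTC/hour")
      # multiply by BTC price to get the $Y-per-hour figure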

  26. Comments in other threads explained why there will always be a gap between generation and storage capacity, and why it's economically unwise to build enough storage to cover peak generation.

    For my purposes, I assumed a reasonable allocation towards storage was already going to happen, and that there would still be excess unused capacity beyond that level.

  27. There seems to be a blindingly obvious solution for capturing some economic value from the otherwise-excess solar power generated -- Bitcoin mining. In principle it's very similar to this application, which uses otherwise-wasted gas flares from oil drilling: https://www.cnbc.com/2022/02/12/23-year-old-texans-made-4-mi...

    Please note, I am not advocating for Bitcoin mining as a productive means of consuming electricity that could have gone to something else (industrial, commercial, or residential) -- I'm specifically talking about what the article covers, aka EXCESS generation that truly has nowhere else to go... Might as well convert that electricity into Bitcoin, to capture something (better than nothing) from it.

  28. You probably would enjoy this book "Subtract", about the science of less: https://leidyklotz.com/media/
  29. I dunno, this seems like a pretty good understanding to me. What more would you ask for, to represent "understanding" that satisfies you?

    ---

    (I posed the Wikipedia version of this puzzle to Claude Sonnet 3.5, free tier)

    "A farmer with a wolf, a goat, and a cabbage must cross a river by boat. The boat can carry only the farmer and a single item. If left unattended together, the wolf would eat the goat, or the goat would eat the cabbage. How can they cross the river without anything being eaten? "

    Let me think through this step by step, analyzing the constraints from your message:

    1. Items to transport: wolf, goat, cabbage

    2. Constraints:

       - The boat can only carry the farmer plus one item
       - Wolf will eat goat if left alone together
       - Goat will eat cabbage if left alone together
       - The farmer must control the boat
    
    Let me solve this:

    1. Initial state: Everyone on starting bank (wolf, goat, cabbage, farmer)

    2. First crossing:

       - The farmer must take the goat first
       - (If they took the wolf, the goat would eat the cabbage)
       - (If they took the cabbage, the wolf would eat the goat)
       - Result: Goat is on far bank; wolf and cabbage on starting bank
    
    3. Second crossing:

       - Farmer returns alone
       - Takes the wolf across
       - Returns with the goat
       - Result: Wolf on far bank; goat and cabbage on starting bank
    
    4. Third crossing:

       - Leaves goat, takes cabbage across
       - Result: Wolf and cabbage on far bank; goat on starting bank
    
    5. Final crossing:

       - Returns empty to get goat
       - Takes goat across
       - Done! All items safely across
    
    This solution ensures nothing gets eaten because the wolf and goat, or goat and cabbage, are never left alone together.

    Would you like me to explain any part of the solution in more detail?
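
    (Postscript, mine and not part of the transcript: the puzzle is small enough to verify mechanically. A quick BFS over the state space confirms the 7-trip plan Claude describes:)

      # Brute-force check of the river-crossing puzzle: BFS over (farmer, wolf,
      # goat, cabbage) bank positions, skipping states where something gets eaten.
      from collections import deque

      ITEMS = ("wolf", "goat", "cabbage")

      def safe(state):
          farmer, wolf, goat, cabbage = state
          if wolf == goat and farmer != goat:       # wolf left alone with goat
              return False
          if goat == cabbage and farmer != goat:    # goat left alone with cabbage
              return False
          return True

      def solve():
          start, goal = (0, 0, 0, 0), (1, 1, 1, 1)
          queue, seen = deque([(start, [])]), {start}
          while queue:
              state, path = queue.popleft()
              if state == goal:
                  return path
              for i in (None, 1, 2, 3):             # cross alone, or with item i
                  if i is not None and state[i] != state[0]:
                      continue                      # item must be on farmer's bank
                  nxt = list(state)
                  nxt[0] ^= 1
                  if i is not None:
                      nxt[i] ^= 1
                  nxt = tuple(nxt)
                  if nxt not in seen and safe(nxt):
                      seen.add(nxt)
                      cargo = ITEMS[i - 1] if i is not None else "nothing"
                      queue.append((nxt, path + [f"take {cargo}"]))

      print(solve())
      # -> ['take goat', 'take nothing', 'take wolf', 'take goat',
      #     'take cabbage', 'take nothing', 'take goat']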
