
w10-1
3,198 karma

  1. OMG I'm so happy you never had to write documentation in the 1990s!
  2. This is a very significant result for high-traffic, real-world situations -- work to be proud of.

    To summarize:

    When a prosthetic lower leg was attached, they connected antagonist muscles to the leg, with sensors from those muscles in a control loop to the leg (ankle), mimicking how proprioception works. (The sensors are the new interface technology.)

    The patient knew the position of the foot and could move it even when he couldn't see it. He walked up stairs with the usual natural coordinated movements. And he felt like the leg was part of him.

    It's one thing to (cortically) plan and execute and track prosthetics visually; it's another for the cerebellum to autonomously monitor and control them, and not to feel cut off.

    This seems workable as a standard of care for arms and hands as well. And in this case, it was installed years after the leg was lost, so it works for retrofits (granting this is N=1 young patient in otherwise excellent condition).

  3. This is good and detailed, but misses a broader trend: how "worse is better" started to win - first Java over C++, then Python and JavaScript over Java, and here Markdown over Word and DocBook.

    When Markdown emerged, DocBook was getting even more elaborate, and vendors everywhere had for decades been locking people into frameworks and languages with fantastic features that were hard to use -- and then the internet bubble popped. People realized they'd thrown away years building complex systems, and had little tolerance for promises.

    Markdown is something you can use in its native form. It's both source and destination, with a touch of future-proofing: if the opportunity arrives, you can polish it into anything, and mostly parse it yourself.

    (What's surprising to me is that pandoc barely registers when compared with Markdown on Google Trends since 2004; pandoc is the reason I switched completely to Markdown in ~2010.)

  4. > Mathematics is the language of science

    So, biology and medicine are not sciences? Or are only sciences to the extent they can be mathematically described?

    The scientific method and models are much more than math. Equating reality with the math has led to myriad misconceptions, like vanishing cats.

    And silly is good for a title -- descriptive and enticing -- since it elicits the attention without which the content would be pointless.

  5. It's very helpful to see the big picture of pull vs push-pull and caching; often capabilities just get tacked on as needed, to eventually build a mess.

    Before LSP, an earlier generation of Java compilers circa 2001 (Eclipse, then javac) supported incremental compilation and model queries. This effort extended into runtime hot-reload of compatible code (which was ambitious, but has mostly been limited to changing function bodies).

    Here's a Reddit post with a nice video on point and an excellent series of references (the author may have seen them, but didn't post references to what they read):

    https://www.reddit.com/r/ProgrammingLanguages/comments/ge0s3...

    The database framing and input caching you mention suggest a new compiler might benefit from using a database instead of in-memory trees and such. In particular, I wonder if a datalog-style database, with declared code as rules with type consequences, would help (i.e., the general type rules stay, while declared instances show up as new relations with consequences per the general rules).

    Often those datalog systems built on relation models (A -> B via R) have an extra revision field and update just by issuing new revisions (i.e., without actually deleting the old), resulting in systems where you can backtrack in time (and don't pay for deletions/memory management until needed). Such revision history might be helpful for calculating proposed fixes (by unwinding to a last-known-good state, and then replaying the subsequent edits/declarations with changes).

    However, all such databases I'm familiar with now force you to copy data to get results, and few have robust query caching, planning, or extensible query functions. It would be interesting if someone purpose-built a relation-and-rules database for compilers.
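
    To make the revision idea concrete, here's a minimal sketch (hypothetical names, not any existing datalog engine's API) of an append-only fact store where a retraction is just a tombstone at a new revision, so unwinding to a last-known-good state is a query parameter:

        from collections import namedtuple

        # A fact is (subject, relation, object). Every write gets a fresh
        # revision; "deletes" are tombstones at a new revision, so nothing
        # is physically removed and every old state stays queryable.
        Fact = namedtuple("Fact", "subject relation object revision alive")

        class RevisionedStore:
            def __init__(self):
                self.facts = []      # append-only log, in revision order
                self.revision = 0

            def assert_fact(self, s, r, o):
                self.revision += 1
                self.facts.append(Fact(s, r, o, self.revision, True))
                return self.revision

            def retract_fact(self, s, r, o):
                self.revision += 1
                self.facts.append(Fact(s, r, o, self.revision, False))
                return self.revision

            def query(self, s=None, r=None, o=None, as_of=None):
                """Facts alive at revision `as_of` (default: latest)."""
                as_of = self.revision if as_of is None else as_of
                latest = {}
                for f in self.facts:
                    if f.revision > as_of:
                        break            # log is ordered by revision
                    latest[(f.subject, f.relation, f.object)] = f.alive
                return [k for k, alive in latest.items() if alive
                        and s in (None, k[0])
                        and r in (None, k[1])
                        and o in (None, k[2])]

        # Declared instances as relations; unwind to a last-known-good state.
        db = RevisionedStore()
        good = db.assert_fact("f", "has_type", "int -> int")
        db.retract_fact("f", "has_type", "int -> int")  # a bad edit arrives
        print(db.query(s="f"))              # []: the head of the log
        print(db.query(s="f", as_of=good))  # the backtracked view

    Query caching and incremental re-derivation of consequences are the hard parts this toy skips -- roughly where a purpose-built engine would earn its keep.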

  6. As others have noted:

    - Whether this is needed or helps depends not on the percentage of owners or buyers/sellers, but on the effect such players have on the market. REITs have an outsize effect because they are repeat players, and thus lucrative clients for brokers, who qualify themselves by skewing local markets accordingly.

    - Policy-wise, it's hard to distinguish by size: second homes, mom-and-pops with a few rentals, REITs, private equity. (This is how corporations get free-speech rights.)

    - Politically, it's a shame that a real problem is addressed via scapegoating.

    - Practically, it will have little effect, since REITs and home builders are sitting on a lot of inventory they can't sell, so they've stopped accumulating (and are resorting to secondary offerings to pay off the original investors). Indeed, to the extent this stimulates buying, they're all for it.

    - Ethically, the US has been a magnet for money laundering, much of it via real estate, which has pushed up asset prices and de-conditioned professionals. Scapegoating only delays reform.

  7. dead simple: (AI) automation gains can't be modeled via linear task-savings due to "the structure of bottlenecks and how automation reshapes worker time around them"

    Nothing in the article about targeting the rate-limiting factors.

    And on the first line of the first page, this gem of gratitude: "We thank Refine.ink, ChatGPT 5.2 Pro, and Claude Opus 4.5 for research assistance"

    And on this government-sponsored paper, a warning that copying ANY portion of the text REQUIRES that I accompany it with full credit, including the copyright notice -- so the quote above puts me into noncompliance.

  8. The story in the patterns book about architecture: the architect got tired of arguing with the bishops about a church, so they put a hot tub in the corner on the next design. The bishops agreed that the hot tub must go, and the design was approved.
  9. This is a stunning catalog of UI hypocrisy, a call to action for fans and haters.

    But what's interesting is why such hypocrisy persists - in particular, why is it so bad now?

    Yes, designers might need to make work for themselves; yes, a new OS has to seem new, to justify upgrades and convince younglings that this isn't the oldsters' ride; but haven't these always been true?

    What's different is Apple's slide into disorganization, as it spreads work around the globe in exchange for market access, and internal leaders coast in their mutual non-aggression pacts.

    What remains of the center can issue global orders (adopt the liquid glass aesthetic; put icons on every action) and the periphery can comply - nominally, imperfectly, and inconsistently. Quality issues come to be tolerated like chronic inflammation, and even deployed in passive-aggressive turf battles.

    "Back in the day" everyone would be pulled into a rock-tumbler room and grind it out. That's neither possible nor wanted today (as game theory effaced the requisite obliviousness).

    What to do? Many YC companies have bonding time, where scattered teams join up for intense periods to restore alignment. Otherwise, Apple might be ripe for a round of organizational consolidation.

    Personally, I think internal competition, with some misses and inconsistencies, is a good thing long term. Inflammation is not cancer, and there are better ways to tamp it down.

  10. The threshold question is crossover: what Android development experience is required for Swift developers, and what Swift experience is required for Android/Kotlin developers? By saying "without touching XML, Java, or Kotlin", are you implying that Swift developers without Android experience could be successful?

    Then the question is: roughly what percentage of Kotlin or Flutter apps could be written in Swift? Today and next year?

  11. While AI might have amplified the end, the drop-off preceded significant AI usage for coding.

    So some possible reasons:

    - Success: all the basic questions were answered, and the complex questions are hard to ask.

    - Ownership: In its heyday, projects used SoF as their support channel because it meant they didn't have to answer twice. Now projects prefer to isolate dependencies to GitHub and not lose control over messaging to over-eager users.

    - Incentives: Good SoF karma was a distinguishing feature in employment searches. Now it wouldn't make a difference, and is viewed as too easy to scam.

    - Demand: Fewer new projects. We're past the days of Javascript and devops churn.

    - Community: tight job markets make people less community-oriented

    Some non-reasons:

    - Competition (aside from AI at the end): SoF pretty much killed the competition in that niche (kind of like Craigslist).

  12. Writing for publication is a ridiculous amount of work, smoothing and digesting to the point of pablum, because it's just hard to please everybody. Now that LLMs can tailor to chapter-level discussions, why write?

    Still, that's what it takes to reach N > friends+students.

    It's beyond ironic that AI empowerment is leading actual creators to stop creating. Books don't make sense anymore, and your pet open source project will be delivered mainly via LLMs that conceal your authorship and voice and bastardize the code.

    Ideas form through packaging insight for others. Where's the incentive otherwise?

  13. > all compete with each other

    It's common business practice to set up internal innovation competitions, and blend the best.

  14. TIL the scale of Bitcoin derivatives in 2020 (hence the volatility): ~$2T of derivatives on ~$2B of spot activity -- roughly 1000:1. Jeepers!

    > Starting in late 2020, as shown in The Economist's graphic, the spot market in Bitcoin became dwarfed by the derivatives markets. In the last month $1.7T of Bitcoin futures traded on unregulated exchanges, and $6.4B on regulated exchanges. Compare this with the $1.8B of the spot market in the same month.

  15. This is the 2002 law article, preceding the 2006 book on networks that reflected early interest in network effects; it argues that open source is an emergent mode of production. The same analysis could arguably be applied to the creator or influencer economy. It has aged well.

    https://en.wikipedia.org/wiki/Yochai_Benkler

    But he adopts the techniques of transaction cost economics (TCE) while at the same time posing straw-man TCE claims (e.g., that TCE says there are only integrated firms and markets). TCE says transaction costs matter most in determining the economies of transactions at high scale, and its methods can show how a broad variety of costs end up shaping activity and institutions. It also explains which innovations are disruptive (surprise: they change the transaction costs) and thus how digitalization has had such a huge impact so quickly.

    The TCE analysis has become second nature in business strategy, but surprisingly rare in policy circles, its intended audience.

    And in hindsight his analysis is a bit wishful. Roughly speaking, while open source reduces coordination costs, it doesn't reduce the underlying complexity. The big open-source projects get that way through major corporate sponsorship, and they run with very clear dictatorial/oligarchic or bureaucratic decision-making as they evolve.

    One of the principles of TCE methodology is to compare not ideal with real, but two actual and viable forms of organization. In this case he's projected ideal benefits without any of the real costs. That was forgivable in 2002 or even 2006, but it would be malpractice now.

  16. I like the implication that we can have an alternative to uv speed-wise, but I think reliability and understandability are more important in this context (so this comment is a bit off-topic).

    What I want from a package manager is that it just works.

    That's what I mostly like about uv.

    Many of the changes that made speed possible were to reduce the complexity and thus the likelihood of things not working.

    What I don't like about uv (or pip or many other package managers) is that the programmer isn't given a clear mental model of what's happening, and thus how to fix the inevitable problems. Better (PubGrub) error messages are good, but it's rare that they can provide specific fixes. So even if you get 99% speed, you end up with 1% perplexity and diagnostic black boxes.

    To me the time that matters most is time to fix problems that arise.
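
    For illustration, here's a toy of the mental model I mean (invented index and packages, not uv's or pip's actual machinery): resolution is a backtracking search over version constraints, and a usable diagnostic is the trail of decisions that led to the dead end:

        # Hypothetical index: package -> {version: {dependency: allowed versions}}
        INDEX = {
            "app":  {"1.0": {"web": {"2.0"}, "db": {"1.0", "2.0"}}},
            "web":  {"2.0": {"json": {"1.0"}}},
            "db":   {"1.0": {"json": {"1.0"}}, "2.0": {"json": {"2.0"}}},
            "json": {"1.0": {}, "2.0": {}},
        }

        def resolve(pending, chosen, trail):
            """Backtracking resolution, recording every decision in `trail`."""
            if not pending:
                return chosen
            (name, allowed), rest = pending[0], pending[1:]
            if name in chosen:               # already pinned: just check it
                if chosen[name] in allowed:
                    return resolve(rest, chosen, trail)
                trail.append(f"conflict: {name} pinned at {chosen[name]}, "
                             f"but {sorted(allowed)} required")
                return None
            for version in sorted(allowed, reverse=True):   # prefer newest
                trail.append(f"try {name}=={version}")
                deps = list(INDEX[name][version].items())
                result = resolve(rest + deps, {**chosen, name: version}, trail)
                if result is not None:
                    return result
                trail.append(f"backtrack from {name}=={version}")
            return None

        trail = []
        print(resolve([("app", {"1.0"})], {}, trail))
        print("\n".join(trail))   # every try, conflict, and backtrack

    Real resolvers summarize this far better, but the point stands: time-to-fix is mostly the time it takes to reconstruct this search in your head.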

  17. Yes, and the terms are much more protective for enterprise clients, so it pays to pay. Similar to a protection racket, they (Z.ai et al) raise a threat and then offer to relieve the same threat.

    The real guarantee comes from their having (enterprise) clients who would punish them severely for violating their interests -- punishment in the form of becoming persona non grata in investment circles, applied to both the company and its principals. So it's safe for a little company to slide under the same roof, using the same service as a big company (counting on the technical consistency of the same service?) - a kind of free-riding protection. The difficulty is that this still opens a peephole for security services (Z.ai expressly says it will comply with any such orders), and security services seem to be used for technological competition nowadays.

    In fairness, it's not clear the TOS from other providers are any better, and other bigger providers might be more likely to have established cooperation with security services - if that's a concern.

  18. The finding is that older diesel engines and renewable diesel produce measurable adverse effects in microglial stem cells, but new diesel formulations in new engines do not. The implication is that policy-makers should accelerate the transition to newer diesel and abandon renewable diesel. Since Europe has been gung-ho for diesel for decades, this finding could have significant regulatory and market effects.
  19. Appears to be cheap and effective, though under suspicion.

    But the personal and policy issues are about as daunting as the technology is promising.

    Some of the terms, possibly similar to those of many such services:

        - The use of Z.ai to develop, train, or enhance any algorithms, models, or technologies that directly or indirectly compete with us is prohibited
        - Any other usage that may harm the interests of us is strictly forbidden
        - You must not publicly disclose [...] defects through the internet or other channels.
        - [You] may not remove, modify, or obscure any deep synthesis service identifiers added to Outputs by Z.ai, regardless of the form in which such identifiers are presented
        - For individual users, we reserve the right to process any User Content to improve our existing Services and/or to develop new products and services, including for our internal business operations and for the benefit of other customers. 
        - You hereby explicitly authorize and consent to our: [...] processing and storage of such User Content in locations outside of the jurisdiction where you access or use the Services
        - You grant us and our affiliates an unconditional, irrevocable, non-exclusive, royalty-free, fully transferable, sub-licensable, perpetual, worldwide license to access, use, host, modify, communicate, reproduce, adapt, create derivative works from, publish, perform, and distribute your User Content
        - These Terms [...] shall be governed by the laws of Singapore
    
    To state the obvious competition issues: if/since Anthropic, OpenAI, Google, X.AI, et al. are spending billions on data centers, research, and services, they'll need to make some revenue. Z.ai could dump services out of a strategic interest in destroying competition. This dumping is good for the consumer in the short term but, if it destroys competition, bad in the long term. Still, customers need to compete with each other, and would be at a disadvantage if they didn't take advantage of the dumping.

    Once your job or company depends on it to succeed, there really isn't a question.

  20. To emphasize the dynamics:

    (1) No person will migrate until most of their connections migrate, and their connections cannot migrate until everyone does. It's deadlock, for every thread you care about.

    (2) Automation in job applications and a declining job market have both made networking more essential, so there's no tolerance for lost connections; you'd have to solve those problems too before everyone would switch.

    (3) Even if users dislike it and could surmount the coordination costs of switching, switching would be a career-limiting move as long as companies continue to rely on it; and because companies cannot signal their recruitment strategies without triggering a stampede to game their systems, they tend to keep quiet, so no company will lead an exodus.
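
    A toy threshold model (numbers and network invented for illustration) shows the deadlock in (1): nobody moves from a cold start, and even a scattered seed stalls:

        import random
        random.seed(0)

        # Hypothetical network: 200 users, 10 contacts each; a user migrates
        # once at least half their contacts have migrated.
        N, DEGREE, THRESHOLD = 200, 10, 0.5
        users = list(range(N))
        contacts = {u: random.sample([v for v in users if v != u], DEGREE)
                    for u in users}

        def migrated_after(seed_group):
            migrated = set(seed_group)
            while True:
                movers = {u for u in users
                          if u not in migrated
                          and sum(c in migrated for c in contacts[u]) / DEGREE
                              >= THRESHOLD}
                if not movers:
                    return len(migrated)
                migrated |= movers

        print(migrated_after([]))                        # 0: pure deadlock
        print(migrated_after(random.sample(users, 20)))  # stalls near the seed
        # A cascade needs a dense seed whose members mostly know each other --
        # which is why a well-defined, high-value subgroup could tip first.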

    Still, no one (outside influencers) likes how work networking and recruitment happens today, so users might do both LinkedIn and some new system, if one created a more effective networking and recruitment mode (e.g., for some well-defined, high-value subset, like recent Stanford MBAs, YC alumni, FinTech, ...).

  21. Any theory of how people behave works so long as (key) people follow it.

    It's not really game theory but economics: the supply curve for nicely contested markets, and transaction costs for everything. Game theory only addresses the information aspects of transaction costs, and mostly translates only where power and information are equal (markets).

    The more enduring theory is the roof; i.e., it mostly reduces to what team you're on: which mafia don, or cold-war side, or technology you're leveraging for advantage. In this context, signaling matters most: identifying where you stand. As an influencer, the signal is that you're the leading edge, so people should follow you. The betas vie to grow the alpha, and the alpha boosts or cuts betas to retain their role as decider. The roof creates the roles and empowers creatures, not vice-versa.

    The character of the roof depends on resources available: what military, economic, spiritual or social threat is wielded (in the cold war, capitalism, religion or culture wars).

    The roof itself - the political franchise of the protection racket - is the origin of "civilization". The few escapes from such oppression are legendary and worth emulating, but rare. Still, that's our responsibility: to temper or escape.

  22. old guy gave all his money and energy to start a school to keep civilization from going bonkers. i never knew him and he never knew me but we still are related.
  23. VSCode, IntelliJ, Eclipse...
  24. Kudos to Cloudflare for clarity and diligence.

    When talking of their earlier Lua code:

    > we have never before applied a killswitch to a rule with an action of “execute”.

    I was surprised that a rules-based system was not tested completely, perhaps because the Lua code is legacy relative to the newer Rust implementation?

    It tracks what I've seen elsewhere: quality engineering can't keep up with production engineering. It's just that I think of Cloudflare as an infrastructure place, where that shouldn't be true.

    I had a manager who came from defense electronics in the 1980's. He said in that context, the quality engineering team was always in charge, and always more skilled. For him, software is backwards.

  25. > It might be the case that real revenue is worse than hypothetical revenue.

    Because Altman is eyeing an IPO, and controlling the valuation narrative.

    It's a bit like keeping rents high and apartments empty to prop up average rents, while hiding the vacancy rate to project a good multiple (and avoid rent control from user-facing businesses).

    They'll never earn or borrow enough for their current spend; it has to come from equity sales.

  26. > changing the habits of 800 million+ people who use ChatGPT every week, however, is a battle that can only be fought individual by individual

    That's the basis for his conclusions about both OpenAI and Google, but is it true?

    It's precisely because uptake has been so rapid that I believe it can change rapidly.

    I also think worldwide consumers no longer view US tech as some savior of humanity that they need to join or be left behind. They're likely to jump to any local viable competitor.

    Still, the adtech/advertiser customers who pay the bills are likely to stay even if users wander, so we're back to the battle of business models.

  27. Underlying this seems to be a hard engineering problem: how to run a SaaS that can store or ferry enough context to tailor to individual users, within UI timeframes and with privacy.

    While Eddy Cue seems to be Apple's SaaS man, I can't say I'm confident that separating AI development and implementation is a good idea, or that Apple's implementation will stay within UI timeframes, given their other availability issues.

    Unstated really is how good local models will be as an alternative to SaaS. That's been the gambit, and perhaps the prospective hardware-engineering CEO signals some breakthrough in the pipeline.

  28. The title is misleading, and HN comments don't seem to relate to the article.

    The misleading part: the actual finding is that organoid cells fire in patterns that are "like" the patterns in the brain's default mode network. That says nothing about whether there's any relationship between the phenomena of a few hundred organoid cells and those of millions of neurons in the brain.

    As a reminder, heart pacemaker cells are automatically firing long before anything like a heart actually forms. It's silly to call that a heartbeat because they're not actually driving anything like a heart.

    So this is not evidence of "firmware" or "prewired" or "preconfigured" or any instructions whatsoever.

    This is evidence that a bunch of neurons will fall into patterns when interacting with each other -- no surprise since they have dendrites and firing thresholds and axons connected via neural junctions.

    The real claim is that organoids are a viable model since they exhibit emergent phenomena, but whether any experiments can lead to applicable science is an open question.
