- > all compete with each other
It's common business practice to set up internal innovation competitions, and blend the best.
- TIL the scale of bitcoin derivatives in 2020 (hence volatility): ~2T on 2B market activity. Jeepers!
> Starting in late 2020, as shown in The Economist's graphic, the spot market in Bitcoin became dwarfed by the derivatives markets. In the last month $1.7T of Bitcoin futures traded on unregulated exchanges, and $6.4B on regulated exchanges. Compare this with the $1.8B of the spot market in the same month.
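As a back-of-envelope check on the quoted figures (numbers taken from the excerpt, not independently verified), derivatives volume dwarfs spot by roughly three orders of magnitude:

```python
# Figures quoted above (USD, for the month cited)
unregulated_futures = 1.7e12   # futures traded on unregulated exchanges
regulated_futures = 6.4e9      # futures traded on regulated exchanges
spot = 1.8e9                   # spot market volume

derivatives = unregulated_futures + regulated_futures
ratio = derivatives / spot
print(f"derivatives/spot ≈ {ratio:.0f}x")  # roughly 950x
```

Hence the "~2T on 2B" shorthand: nearly all the price action is leveraged bets, not actual coins changing hands.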
- This is the 2002 law article, before the 2006 book on networks that reflected early interest in network effects, arguing that open-source is an emergent mode of production. The same analysis could arguably be applied to the creator or influencer economy. It aged well.
But he adopts the techniques of transaction cost economics (TCE) while posing straw-man TCE claims (e.g., that TCE recognizes only integrated firms and markets). TCE says transaction costs matter most in determining how transactions are organized at scale, and its methods can show how a broad variety of costs end up shaping activity and institutions. It also explains which innovations are disruptive (surprise: they change the transaction costs) and thus how digitalization has had such a huge impact so quickly.
https://en.wikipedia.org/wiki/Yochai_Benkler
The TCE analysis has become second nature in business strategy, but it remains surprisingly rare in policy circles, its intended audience.
And his analysis in hindsight is a bit wishful. Roughly speaking, while open-source reduces coordination costs, it doesn't reduce the underlying complexity. The big open-source projects get that way through major corporate sponsorship, and they run with very clear dictatorial/oligarchic or bureaucratic decision-making if they evolve.
One of the principles of TCE methodology is to compare not ideal with real, but two actual and viable forms of organization. In this case he's projected ideal benefits without any of the real costs. That was forgivable in 2002 or even 2006, but it would be malpractice now.
- I like the implication that we can have an alternative to uv speed-wise, but I think reliability and understandability are more important in this context (so this comment is a bit off-topic).
What I want from a package manager is that it just works.
That's what I mostly like about uv.
Many of the changes that made speed possible were to reduce the complexity and thus the likelihood of things not working.
What I don't like about uv (or pip or many other package managers) is that the programmer isn't given a clear mental model of what's happening, and thus of how to fix the inevitable problems. Better (pubhub) error messages are good, but it's rare that they can provide specific fixes. So even if you get 99% speed, you end up with 1% perplexity and diagnostic black boxes.
To me the time that matters most is time to fix problems that arise.
- Yes, and the terms are much more protective for enterprise clients, so it pays to pay. Similar to a protection racket, they (Z.ai et al.) raise a threat and then offer to relieve it.
The real guarantee comes from their having (enterprise) clients who would punish them severely for violating their interests, with smaller clients sliding under the same roof (because the shared service must stay technically consistent?). The punishment comes in the form of becoming persona non grata in investment circles, applied to both the company and the principals. So it's safe for a little company to use the same service as a big company - a kind of free-riding protection. The difficulty is that this opens a peephole for security services (and Z.ai expressly says it will comply with any such orders), and security services seem to be used for technological competition nowadays.
In fairness, it's not clear the TOS from other providers are any better, and other bigger providers might be more likely to have established cooperation with security services - if that's a concern.
- The finding is that older diesel engines and renewable fuels produce measurable adverse effects in microglial stem cells, but new diesel formulations in new engines do not. The implication is that policy-makers should accelerate the transition to newer diesel and abandon renewable diesel. Since Europe has been gung-ho for diesel for decades, this finding could have significant regulatory and market effects.
- Appears to be cheap and effective, though under suspicion.
But the personal and policy issues are about as daunting as the technology is promising.
Some of the terms, possibly similar to many such services:
- The use of Z.ai to develop, train, or enhance any algorithms, models, or technologies that directly or indirectly compete with us is prohibited
- Any other usage that may harm the interests of us is strictly forbidden
- You must not publicly disclose [...] defects through the internet or other channels.
- [You] may not remove, modify, or obscure any deep synthesis service identifiers added to Outputs by Z.ai, regardless of the form in which such identifiers are presented
- For individual users, we reserve the right to process any User Content to improve our existing Services and/or to develop new products and services, including for our internal business operations and for the benefit of other customers.
- You hereby explicitly authorize and consent to our: [...] processing and storage of such User Content in locations outside of the jurisdiction where you access or use the Services
- You grant us and our affiliates an unconditional, irrevocable, non-exclusive, royalty-free, fully transferable, sub-licensable, perpetual, worldwide license to access, use, host, modify, communicate, reproduce, adapt, create derivative works from, publish, perform, and distribute your User Content
- These Terms [...] shall be governed by the laws of Singapore

To state the obvious competition issues: If/since Anthropic, OpenAI, Google, X.AI, et al. are spending billions on data centers, research, and services, they'll need to make some revenue. Z.ai could dump services out of a strategic interest in destroying competition. This dumping is good for the consumer short-term, but if it destroys competition, bad in the long term. Still, customers need to compete with each other, and thus would be at a disadvantage if they don't take advantage of the dumping. Once your job or company depends on it to succeed, there really isn't a question.
- To emphasize the dynamics: (1) No person will migrate until most of their connections migrate, and their connections cannot migrate until everyone does. It's deadlock, for every thread you care about. (2) Automation in job applications and a declining job market have both made networking more essential, so there's no tolerance for lost connections; you'd have to solve those problems before everyone would switch. (3) Even if users don't like it and could surmount the coordination costs of switching, if companies continue to rely on it, switching would be a career-limiting move; and because companies cannot signal their recruitment strategies without triggering a stampede to game their systems, they tend to keep quiet, so no company would lead an exodus.
Still, no one (outside influencers) likes how work networking and recruitment happen today, so users might run both LinkedIn and a new system if it offered a more effective networking and recruitment mode (e.g., for some well-defined, high-value subset, like recent Stanford MBAs, YC alumni, FinTech, ...).
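The deadlock in (1) can be illustrated with a toy threshold model (all parameters hypothetical, not drawn from any real data): each user switches only once enough of their connections have switched, so a small seed group stalls while a large one cascades.

```python
import random

random.seed(0)
N = 1000            # users (hypothetical)
DEGREE = 20         # connections per user (hypothetical)
THRESHOLD = 0.5     # switch once half of your connections have switched

# Random network: each user's fixed set of connections
connections = [random.sample(range(N), DEGREE) for _ in range(N)]

def final_adoption(seed_fraction):
    """Seed some users on the new network, then let switching cascade."""
    migrated = set(random.sample(range(N), int(seed_fraction * N)))
    changed = True
    while changed:
        changed = False
        for u in range(N):
            if u not in migrated:
                done = sum(c in migrated for c in connections[u])
                if done / DEGREE >= THRESHOLD:
                    migrated.add(u)
                    changed = True
    return len(migrated) / N

print(final_adoption(0.05))  # small seed: the cascade stalls near 5%
print(final_adoption(0.60))  # large seed: nearly everyone tips over
```

The sharp jump between the two outcomes is the point: there is no gradual path from a niche alternative to a full migration, which is why the well-defined high-value subset (a seed group dense enough to tip itself) is the only plausible entry.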
- Any theory of how people behave works so long as (key) people follow it.
It's not really game theory but economics: the supply curve for well-contested markets, and transaction costs for everything else. Game theory addresses only the information aspects of transaction costs, and it translates mostly to settings of equal power and information (i.e., markets).
The more enduring theory is the roof; i.e., it mostly reduces to what team you're on: which mafia don, or cold-war side, or technology you're leveraging for advantage. In this context, signaling matters most: identifying where you stand. As an influencer, the signal is that you're the leading edge, so people should follow you. The betas vie to grow the alpha, and the alpha boosts or cuts betas to retain their role as decider. The roof creates the roles and empowers creatures, not vice-versa.
The character of the roof depends on resources available: what military, economic, spiritual or social threat is wielded (in the cold war, capitalism, religion or culture wars).
The roof itself - the political franchise of the protection racket - is the origin of "civilization". The few escapes from such oppression are legendary and worth emulating, but rare. Still, that's our responsibility: to temper or escape.
- old guy gave all his money and energy to start a school to keep civilization from going bonkers. i never knew him and he never knew me but we still are related.
- VSCode, IntelliJ, Eclipse...
- Kudos to Cloudflare for clarity and diligence.
When talking of their earlier Lua code:
> we have never before applied a killswitch to a rule with an action of “execute”.
I was surprised that a rules-based system was not tested completely, perhaps because the Lua code is legacy relative to the newer Rust implementation?
It tracks what I've seen elsewhere: quality engineering can't keep up with production engineering. It's just that I think of Cloudflare as an infrastructure place, where that shouldn't be true.
I had a manager who came from defense electronics in the 1980's. He said in that context, the quality engineering team was always in charge, and always more skilled. For him, software is backwards.
- > It might be the case that real revenue is worse than hypothetical revenue.
Because Altman is eyeing an IPO and controlling the valuation narrative.
It's a bit like keeping rents high and apartments empty to prop up average rents, while hiding the vacancy rate, to project a good multiple (and to avoid rent control from user-facing businesses).
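With toy numbers (entirely hypothetical), the analogy works like this: the headline rent is what the valuation multiple is built on, while actual revenue per unit is quietly lower.

```python
# Hypothetical toy numbers for the rent analogy
units = 100
asking_rent = 3000      # listed rent kept high
occupancy = 0.80        # 20% of units sit empty (the hidden part)

headline_rent = asking_rent                  # what the valuation multiple sees
revenue_per_unit = asking_rent * occupancy   # what is actually earned
total_monthly = units * revenue_per_unit     # 240,000 earned vs 300,000 implied

print(headline_rent)      # 3000
print(revenue_per_unit)   # 2400.0
```

As long as the vacancy (here, the gap between hypothetical and real revenue) stays out of the headline number, the multiple holds.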
They'll never earn or borrow enough for their current spend; it has to come from equity sales.
- > changing the habits of 800 million+ people who use ChatGPT every week, however, is a battle that can only be fought individual by individual
That's the basis for his conclusions about both OpenAI and Google, but is it true?
It's precisely because uptake has been so rapid that I believe it can change rapidly.
I also think worldwide consumers no longer view US tech as some savior of humanity that they need to join or be left behind. They're likely to jump to any viable local competitor.
Still the adtech/advertiser consumers who pay the bills are likely to stay even if users wander, so we're back to the battle of business models.
- Underlying this seems to be a hard engineering problem: how to run a SaaS within UI timeframes that can store or ferry enough context to tailor for individual users, with privacy.
While Eddy Cue seems to be Apple's SaaS man, I can't say I'm confident that separating AI development and implementation is a good idea, or that Apple's implementation will stay within UI timeframes, given their other availability issues.
Unstated really is how good local models will be as an alternative to SaaS. That's been the gambit, and perhaps the prospective hardware-engineering CEO signals some breakthrough in the pipeline.
- The title is misleading, and HN comments don't seem to relate to the article.
The misleading part: the actual finding is that organoid cells fire in patterns that are "like" the patterns in the brain's default mode network. That says nothing about whether there's any relationship between the phenomena of a few hundred organoid cells and the millions in the brain.
As a reminder, heart pacing cells are automatically firing long before anything like a heart actually forms. It's silly to call that a heartbeat because they're not actually driving anything like a heart.
So this is not evidence of "firmware" or "prewired" or "preconfigured" or any instructions whatsoever.
This is evidence that a bunch of neurons will fall into patterns when interacting with each other -- no surprise since they have dendrites and firing thresholds and axons connected via neural junctions.
The real claim is that organoids are a viable model since they exhibit emergent phenomena, but whether any experiments can lead to applicable science is an open question.
- "Bad" regulation just raises the question what would be better for all concerned. Sometimes that means reducing the weight and impact of a concern (redefining the problem), but more often it means a different approach or more information.
In this case, pumping possibly toxic material into the ground for the first time could be destructive and irreversible, in ways that are hard to test or understand in a field with few experts. The benefit is mainly a new financial quirk, meeting carbon accounting with uncertain gains for the environment. It's not hard to see why there's a delay, which would only be made worse by an oppositional company on a short financial leash pushing the burden back onto regulators.
The regulation that needs attention is not the unique weird case, but the slow expansion of under-represented, high-frequency or high-traffic burdens - exactly like cellular roaming charges or housing permits or cookies. It's all too easy to learn to live with small burdens.
- Do you have any evidence that the AI efforts are not being funded by the AI product, Kagi Assistant? I would expect the reverse: the high-margin AI products are likely cross-subsidizing the low-margin search products and their sliver of AI support.
Still, that's what it takes to reach N > friends+students.
It's beyond ironic that AI empowerment is leading actual creators to stop creating. Books don't make sense any more, and your pet open-source project will be delivered mainly via LLMs that conceal your authorship and voice and bastardize the code.
Ideas form through packaging insight for others. Where's the incentive otherwise?