- There have been many demonstrations that the F-150, Cybertruck, and other electric trucks have short ranges when loaded and even shorter ranges when towing (some people have reported under 40 miles on a full charge).
If you use your truck as a truck, that's simply not workable. And if you only use it as expensive transportation, you're probably still justifying the purchase by imagining the times you might use it as a truck, so you won't buy an electric truck either.
There’s not much of a market, so leaving makes sense.
- CAS latency (in cycles) doesn't matter so much as the total random-access latency in nanoseconds and the raw clock speed of the individual RAM cells. By that measure, if you are accessing the same cell repeatedly, RAM hasn't gotten meaningfully faster in years (since around DDR2, IIRC).
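A quick back-of-the-envelope conversion shows why (the module ratings below are typical retail parts, picked for illustration):

```typescript
// Convert a module's CAS latency (cycles) to wall-clock nanoseconds.
// DDR transfers twice per I/O clock, so the I/O clock in MHz is (MT/s) / 2.
function casLatencyNs(casCycles: number, dataRateMTs: number): number {
  const ioClockMHz = dataRateMTs / 2;
  return (casCycles / ioClockMHz) * 1000; // cycles / MHz = microseconds, scale to ns
}

console.log(casLatencyNs(5, 800));   // DDR2-800 CL5   -> 12.5 ns
console.log(casLatencyNs(16, 3200)); // DDR4-3200 CL16 -> 10 ns
console.log(casLatencyNs(40, 5600)); // DDR5-5600 CL40 -> ~14.3 ns
```

Despite a 7x jump in transfer rate across those generations, true access latency has hovered around 10-15 ns the whole time.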
- Only if you change TS to have actually sound types that enable good performance, instead of enabling you to craft extraordinarily convoluted types for stuff you should never have written in the first place.
Put another way, I'm fine with the TS syntax (and use TS because there aren't other choices), but the TS semantics aren't a good long-term solution.
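One classic illustration of the unsoundness (TS treats arrays as covariant by design, which is convenient but unsound):

```typescript
class Animal { name = "animal"; }
class Dog extends Animal { bark() { return "woof"; } }

const dogs: Dog[] = [new Dog()];
const animals: Animal[] = dogs;  // allowed: TS arrays are covariant
animals.push(new Animal());      // still type-checks...
dogs[1].bark();                  // ...and blows up at runtime: bark is not a function
```

A sound type system would reject the aliasing assignment (or force the alias to be read-only); TS accepts it on purpose, and holes like this are exactly why a compiler can't trust TS types for optimization.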
- This is an assertion with no supporting evidence, and there are many indicators that it's incorrect.
50 years ago was 1975. It wasn't the dark ages, and the worst cases had already been moved to asylums for at least 150 years before that.
Suicide in particular is hard to hide, and suicide rates are going up despite treatment. If mental illness rates are the same as 50 years ago and more people are getting effective treatment, we'd expect per capita rates to decrease.
Impoverished third-world countries where people have nothing but problems almost universally report higher happiness and have lower suicide rates.
Severe mental health issues don't just go away because you drink, and if alcohol could suppress the problems, we'd never have needed to develop treatments in the first place.
In terms of "self medicating" with drugs, we're hitting an all-time high (pun intended). Risky and self-destructive behavior is also way up, as evidenced by our overflowing prison systems.
Nothing indicates to me that mental health is improving, and everything seems to indicate it's getting worse despite all the attempted interventions.
- Nobody would consider Chrome or Firefox immature or lacking polish just because they have replaced entire compilers several times over the years. I don't have an exact count, but they probably do this every 3-5 years, which puts them way ahead of Racket.
I'd also note that Chez Scheme was a commercial implementation that Cisco bought and open-sourced; it wasn't something thrown together. Racket is now building on a complete R6RS Scheme implementation instead of rolling its own runtime in C. Coding against a stable Scheme API has to be easier and less buggy than what they had before (not to mention Chez being much faster at a lot of things).
- This theory is a science-free zone. It seems far more likely that the drug induced sudden, overwhelming suicidal thoughts than that someone said "I feel the best I've ever felt and life is looking up. I think I'll kill myself and make all the good feelings go away".
Furthermore, if the latter were true, it would be an indication that depression was a symptom rather than a cause and the psychiatrist misdiagnosed and improperly treated the patient.
- Prozac and other SSRIs are proven to cause MORE suicidal tendencies in children.
- > Note that the black box warning has nothing to do with long-term effects of the medication
What are the long-term effects of suicide?
A 7-year-old kid doesn't understand what suicide really means. Putting them on something that encourages a behavior that they don't understand and has completely catastrophic results isn't a risk I would take with my children.
- The solution for suicidal thoughts is a drug known to induce suicidal thoughts?
You said elsewhere that there were "no known long-term side effects". Aside from that not being universally true for any drug I've ever personally researched, no side effect is more long-term than suicide.
- The data is very clear that the rate of mental illness is increasing. Rates of severe mental illnesses like schizophrenia are also increasing.
NONE of the current theories being experimented with on patients have a concrete, proven scientific basis, and some, like the decades-long SSRI scam, have actively harmed patients and their families by creating physical dependence/addiction (eg, SSRI-induced suicides).
I trust science, but I don't trust scientists any more than I trust any other human with their money, career, and reputation on the line. I trust the FDA and pharmaceutical company ethics even less (eg, Bayer knowingly selling HIV-contaminated clotting factor to hemophiliacs, Purdue claiming OxyContin is non-addictive, or the revolving door that allows non-working SSRIs to be released and marketed as working despite all evidence to the contrary).
- If you don't have a serious model for what you are treating, then you are experimenting on your patients and hoping it works for unknown reasons. Not too different from folk remedies. Even worse, patients are essentially never informed that the doctor is throwing things at the wall hoping something sticks.
- > These companies target different workloads.
This hasn't been true for at least half a decade.
The latest generation of phone chips runs from 4.2GHz all the way up to 4.6GHz, with even a single core using 12-16 watts and multi-core loads hitting over 20W.
Those P-cores are designed for desktops and happen to work in phones; the smaller, energy-efficient M-cores and E-cores still handle most of the work in phones, even though they can't keep up with the P-cores.
ARM's Neoverse cores are mostly just their normal P-cores with more validation and certification. Nuvia (designers of Qualcomm's cores) was founded because the M-series designers wanted to make a server-specific chip and Apple wasn't interested. Apple themselves have made mind-blowingly huge chips for their Max/Ultra designs.
"x86 cores are worse because they are server-grade" just isn't a valid rebuttal. A phone is much more constrained than a watercooled server in a datacenter. ARM chips are faster and consume less power and use less die area.
> So at this point, I think it’s actually the ARM ecosystem that has the larger R&D budget.
Apple doesn't design ARM's chips, and we know ARM's peak revenue and R&D spending. ARM pumps out several times more cores per year, along with everything else you'd need to make a chip (and they've announced they're making their own server chips), and does it with an R&D budget that is a small fraction of what AMD spends to do the same thing.
What is AMD's excuse? Either everybody at AMD and Intel sucks, or all the extra work to make x86 fast (and validating all the weirdness around it) is a ball and chain slowing them down.
- Apple isn't dropping Rosetta 2. They say quite clearly that it's sticking around indefinitely for older applications and games.
It seems to me that Apple is simply going to require native ARM versions of new software if you want it to be signed and verified by them (which seems pretty reasonable after 5+ years).
- There seem to be very real differences between x86 and ARM not only in the designs they make easy, but also in the difficulty of making higher-performance designs.
It's telling that ARM, Apple, and Qualcomm have all shipped designs that are physically smaller, faster, and consume far less power than AMD's and Intel's. Even ARM's medium cores have had higher IPC than same-generation x86 big cores since at least the A78. SiFive's latest RISC-V cores are looking to match or exceed x86 IPC too. x86 is quickly becoming dead last, which shouldn't be possible if the ISA doesn't matter at all, given AMD's and Intel's budgets (AMD, for example, spends more on R&D than ARM's entire gross revenue).
ISA matters.
x86 is quite constrained by its decoders: Intel's 6- and 8-wide cores are massive and draw an unbelievable amount of power, while AMD chose a hyper-complex 2x4 decoder implementation with a bottleneck in serial throughput. Meanwhile, we see 6-wide and wider decoders shipping in ARM cores without any of those contortions.
32-bit ARM is a lot simpler than x86, yet ARM still claimed a massive 75% reduction in decoder size from going 64-bit-only in the A715, while increasing throughput. Things like the uop cache aren't free either: they take die area and power, and even worse, somebody has to spend a bunch of time designing and verifying these workarounds, which balloons costs and stretches time to market.
Another way the ISA matters is the memory model. ARM uses barriers/fences, which are added only where needed. x86 uses a much stronger memory model that implies ordering the developers and compiler often didn't actually need or want, and that costs performance. The workaround (I'm not sure if x86 implementations actually do this) is deep analysis of which implicit barriers can provably be ignored, plus speculation on the rest. Once again, though, wiring all these proofs into the CPU is complicated and error-prone, which slows development while bloating circuitry, using extra die area/power, and eating time/money that could be spent in more meaningful ways.
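A loose software-side sketch of the two philosophies, using JS's own Atomics (assuming Node or another environment with SharedArrayBuffer; the instruction mapping in the comments is the typical one, not a guarantee):

```typescript
// With SharedArrayBuffer + Atomics, ordering is requested explicitly,
// matching ARM's "barriers only where you ask" philosophy.
const flag = new Int32Array(new SharedArrayBuffer(4));

// Writer side: publish data, then set the flag with ordering guarantees.
// On ARM this typically compiles to a store plus an explicit barrier
// (or a release store); on x86 an ordinary mov already carries that
// ordering for every store, needed or not.
Atomics.store(flag, 0, 1);

// Reader side: an ordered load pairs with the ordered store above.
while (Atomics.load(flag, 0) === 0) { /* spin until published */ }
```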
While the theoretical performance mountain is the same, taking the stairs with ARM or RISC-V is going to be much easier/faster than trying to climb up the cliff faces.
- You are half correct about 2^53-1 being used (around 9 quadrillion). It is the largest integer a 64-bit float can represent safely, meaning every integer up to it is exact, and JS even exposes it as `Number.MAX_SAFE_INTEGER`.
That said, floats only get used in the fairly rare cases where your number exceeds around 1 billion.
JS engines use floats only when they cannot prove/speculate that a number fits in an i32. They only use 31 of the 32 bits for the number itself, with the last bit used for tagging. i32 takes fewer cycles to do calculations with (even with the need to handle the tag bit) compared to f64. You fit twice as many i32 values in a cache line (which helps prefetching), and i32 uses half the RAM (using half the cache increases the hit rate). Finally, it takes far more energy to load two numbers into the ALU/FPU than it does to perform the calculation, so cutting the size in half also reduces power consumption. The max allowable length of a JS array is also 2^32-1.
JS also has BigInt for arbitrary-precision integers, and that is probably what you should be using if you expect to go over the 2^31-1 limit, because hitting a number that big generally means you have something unbounded that might go over the 2^53-1 limit too.
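The limits are easy to poke at directly:

```typescript
const maxSafe = Number.MAX_SAFE_INTEGER;   // 2**53 - 1 === 9007199254740991
console.log(maxSafe + 1 === maxSafe + 2);  // true: exactness is gone past 2**53

// BigInt keeps exact arithmetic past that point:
console.log(2n ** 53n + 2n);               // 9007199254740994n, exact

// The i32 fast path wraps at 2**31, as the ToInt32 coercion from | 0 shows:
console.log((2 ** 31 - 1) | 0);            // 2147483647
console.log(2 ** 31 | 0);                  // -2147483648
```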
- When you outsource all the jobs that actually produce goods instead of services, you have nothing left to offer. At the same time, we're proving that short-term consumerism is completely destructive (designed-to-break products are just another variant of the broken window fallacy).
Either you lower the US standard of living to match the countries it currently outsources to, or you establish protectionist and isolationist policies with a focus on total efficiency rather than short-term gains.
- Does AI look like this from averaging, or from training on the reams of copyright-free books from a century ago? It seems more like the latter.
- We need an easy way to pay small amounts for a one-time service. A lot of websites offer content that you need only a couple of times in your life. It's worth paying for, but not worth the hassle of setting up a normal payment.
That leaves ads as the only form of revenue, and because ads don't care about the content, it creates a race to the bottom of generated slop.
- Should have paid the $9 instead of just skimming...
v8 had PTC (proper tail calls), but the team insisted tail calls MUST get a new explicit keyword. When that proposal was shot down, they threw a childish fit and ripped PTC out of their JIT.
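For reference, this is the kind of call PTC covers (a sketch; exact behavior depends on engine version and strict mode):

```typescript
// countdown is in tail position: nothing happens after the recursive call,
// so an engine with PTC can reuse the stack frame (constant stack space).
function countdown(n: number): number {
  return n === 0 ? 0 : countdown(n - 1);
}

countdown(1_000_000); // RangeError (stack overflow) in today's V8;
                      // JavaScriptCore (Safari) runs it fine in strict mode
```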