- The “tangle of spheres and circles” is probably a reference to the Hopf fibration.
- Except 91.
- Extraordinary project. I had several questions which I believe I have answered for myself (pizlonator please correct if wrong):
1. How do we prevent loading a bogus lower through misaligned store or load?
Answer: Misaligned pointer load/stores are trapped; this is simply not allowed.
2. How are pointer stores through a pointer implemented (e.g. `*(char **)p = s`) - does the runtime have to check if *p is "flight" or "heap" to know where to store the lower?
Answer: no. Flight (i.e. local) pointers whose address is taken are not literally implemented as two adjacent words; rather the call frame is allocated with the same object layout as a heap object. The flight pointer is its "intval" and its paired "lower" is at the same offset in the "aux" allocation (presumably also allocated as part of the frame?).
3. How are use-after-return errors prevented? Say I store a local pointer in a global variable and then return. Later, I call a new function which overwrites the original frame - can't I get a bogus `lower` this way?
Answer: no. Call frames are allocated by the GC, not the usual C stack. The global reference will keep the call frame alive.
That leads to the following program, which definitely should not work, and yet does. ~Amazing~ Unbelievable:
    #include <stdio.h>

    char *bottles[100];

    __attribute__((noinline))
    void beer(int count) {
        char buf[64];
        sprintf(buf, "%d bottles of beer on the wall", count);
        bottles[count] = buf;
    }

    int main(void) {
        for (int i = 0; i < 100; i++) beer(i);
        for (int i = 99; i >= 0; i--) puts(bottles[i]);
    }
- What's interesting about it is that it supports SPR AVS, which is a new USB power delivery spec. I'm not aware of any other chargers that support this.
https://www.chargerlab.com/complete-pd-3-2-spr-avs-specifica...
- Doesn't every WASM program have to carry its own malloc/free today?
- You're right, thank you for the correction.
- My favorite FP Fun Fact is that float comparisons can (almost) use integer comparisons. To determine if a > b, reinterpret a and b as signed ints and just compare those like any old ints. It (almost) works!
The implication is that the next biggest float is (almost) always what you get when you reinterpret its bits as an integer, and add one. For example, start with the zero float: all bits zero. Add one using integer arithmetic. In int-speak it's just one; in float-speak it's a tiny-mantissa denormal. But that's the next float; and `nextafter` is implemented using integer arithmetic.
Learning that floats are ordered according to integer comparisons makes it feel way more natural. But of course there are the usual asterisks: this fails with NaNs and negative zero, and the negative range compares in reverse unless you flip it. We get a few nice things, but only a few.
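Here's a minimal sketch of both tricks (Rust here, restricted to non-negative finite floats so the caveats above don't bite):
    // Compare floats by reinterpreting their bit patterns as signed ints.
    // Valid for non-negative finite floats; NaNs and the negative range
    // need extra handling.
    fn float_gt_as_int(a: f32, b: f32) -> bool {
        (a.to_bits() as i32) > (b.to_bits() as i32)
    }

    // "Add one to the bits" to get the next representable float above x
    // (again, only for non-negative finite x).
    fn next_up(x: f32) -> f32 {
        f32::from_bits(x.to_bits() + 1)
    }

    fn main() {
        assert!(float_gt_as_int(2.5, 1.5));
        assert!(float_gt_as_int(1.5e-40, 0.0)); // denormals order correctly too

        let tiny = next_up(0.0); // smallest positive denormal, ~1.4e-45
        println!("{tiny:e}");
        assert!(tiny > 0.0 && next_up(tiny) > tiny);
    }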
- Affine types, variance, higher-rank trait bounds, phantom data, MaybeUninit, and the whole macro and proc-macro systems are some examples of concepts that I found to be challenging when learning Rust.
Dyn-safety is another but I had encountered that previously in Swift.
- LLVM's CMake build has had lots of love poured into it.
- Oklo | Remote (US) or Santa Clara or Brooklyn | Full time | https://oklo.com
Join us in pioneering the next generation of nuclear reactors! You'll leverage your software skills alongside nuclear engineers to model, simulate, design, and deploy advanced fission power technology. You will work at the forefront of the nuclear industry, developing novel techniques to reach new levels of safety, efficiency, and resiliency. Come be a part of powering the future with advanced fission power plants to provide clean, reliable, affordable energy.
We are hiring for:
- Software Engineer: https://job-boards.greenhouse.io/oklo/jobs/4018702004
- Software Quality Assurance Engineer: https://job-boards.greenhouse.io/oklo/jobs/5480416004
See more opportunities here: https://job-boards.greenhouse.io/oklo
Please mention Hacker News in your cover letter!
- Jekyll and nginx in Docker on Hetzner for €4.49/mo
- At $WORK we use SQLite in WASM via the official ES module, running read-only in browser.
The performance is very poor, perhaps 100x worse than native. It's bad enough that we only use SQLite for trivial queries. All joins, sorting, etc. are done in JavaScript.
Profiling shows the slowdown is in the JS <-> WASM interop. This is exacerbated by the one-row-at-a-time "cursor" API in SQLite, which means at least one FFI round-trip for each row.
- Don't confuse "presence of dynamic types" with "absence of static types."
Think about the web, which is full of dynamism: install this polyfill if needed, call this function if it exists, all sorts of progressive enhancement. Dynamic types are what make those possible.
- No, it's not so.
If the allocation is backed by the kernel, then it will be zero-filled for security reasons. If it's backed by user-space malloc then who knows; but there's never a scenario where a mallocated page is quietly replaced by a zero-filled page behind the scenes.
- The SIMD point is true, but the original guess is also correct, and that effect is bigger!
using_map is faster because it's not allocating: it's re-using the input array. That is, it is operating on the input `v` value in place, equivalent to this:
    pub fn using_map(mut v: Vec<i32>) -> Vec<i32> {
        v.iter_mut().for_each(|c| *c += 1);
        v
    }
This is a particularly fancy optimization that Rust can perform.
- "Numbers go into the numbers vector" is unusual - typically JS engines use either NaN-boxing or inline small integers (e.g. v8 SMI). I suppose this means that a simple `this.count += 1` will always allocate.
Have you considered using NaN-boxing? Also, are the type-specific vectors compacted by the GC, or do they maintain a free list?
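For reference, here's a toy sketch (Rust, with a made-up tag layout) of what I mean by NaN-boxing: ordinary doubles are stored as-is, and other values are packed into the payload bits of a quiet NaN so everything still fits in one 64-bit word:
    // Toy NaN-box with a made-up tag layout: ordinary f64s are stored
    // as-is; small integers are packed into quiet-NaN payload bits.
    const QNAN: u64 = 0x7ff8_0000_0000_0000;
    const TAG_INT: u64 = 0x0001_0000_0000_0000;

    fn box_f64(x: f64) -> u64 { x.to_bits() }
    fn box_i32(i: i32) -> u64 { QNAN | TAG_INT | (i as u32 as u64) }

    fn unbox(bits: u64) -> String {
        if bits & QNAN == QNAN && bits & TAG_INT != 0 {
            format!("int {}", bits as u32 as i32)
        } else {
            format!("float {}", f64::from_bits(bits))
        }
    }

    fn main() {
        println!("{}", unbox(box_f64(3.25))); // float 3.25
        println!("{}", unbox(box_i32(-7)));   // int -7
    }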
- Shells have somewhat unusual parsing requirements. For example "if" is a keyword when used as `if echo` but not `echo if`.
So you either need to implement the lexer hack, or have a "string" token type which is disambiguated by the parser (which is what fish-shell does).
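A toy sketch of the second approach (not how fish actually implements it, just the shape): the lexer emits undifferentiated word tokens, and the parser decides whether a word is the `if` keyword based on its position:
    // Toy sketch: the lexer produces only generic word tokens, and the
    // parser treats "if" as a keyword only in command position.
    #[derive(Debug, Clone)]
    struct Word(String);

    fn lex(src: &str) -> Vec<Word> {
        src.split_whitespace().map(|w| Word(w.to_string())).collect()
    }

    #[derive(Debug)]
    enum Ast {
        If { body: Vec<Word> },
        Command(Vec<Word>),
    }

    fn parse(words: Vec<Word>) -> Ast {
        // Keyword-ness depends on position, not on the token itself.
        if words.first().map(|w| w.0.as_str()) == Some("if") {
            Ast::If { body: words[1..].to_vec() }
        } else {
            Ast::Command(words)
        }
    }

    fn main() {
        println!("{:?}", parse(lex("if echo")));  // If { body: [Word("echo")] }
        println!("{:?}", parse(lex("echo if")));  // Command([Word("echo"), Word("if")])
    }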
- Rust goes to substantial lengths to allow unwinding from panics. For example, see how complicated `Vec::retain_mut` is. The complexity is due to the possibility of a panic, and the need to unwind.
https://doc.rust-lang.org/1.80.1/src/alloc/vec/mod.rs.html#1...
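A quick way to see what all that machinery buys you (a sketch using only the public API): panic out of the predicate mid-retain, and the Vec is still in a valid, safely usable state afterward (exactly which elements survive is an implementation detail):
    use std::panic;

    fn main() {
        let mut v: Vec<i32> = (0..8).collect();
        let result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
            // Panic partway through: some elements kept, some removed,
            // some not yet visited.
            v.retain(|&x| if x == 4 { panic!("boom") } else { x % 2 == 0 });
        }));
        assert!(result.is_err());
        // Thanks to the unwind handling in retain, v has a consistent
        // length and no uninitialized or double-dropped elements.
        println!("{v:?}");
    }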
- Any thoughts on the best way to express global locks in Rust?
A classic example is a set of bank accounts, atomically transacting with each other. Fine-grained per-account locking is possible, but risks deadlock due to lock ordering inversion. A simple solution is to replace per-account locks with a singleton global lock, covering all accounts. Any transaction must first acquire this lock, and now deadlock is impossible.
But this is an awkward fit for Rust, whose locks want to own the data they protect. What's the best way to express a global lock, enforcing that certain data may only be accessed while the global lock is held?
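For concreteness, the most direct encoding I can think of is to let one static Mutex own all of the accounts (Account and transfer here are made up for the example), which makes "holding the global lock" and "having access to the data" the same thing; the awkward part is that every account access then has to go through that one handle:
    use std::sync::{Mutex, OnceLock};

    // Hypothetical account type for the example.
    struct Account { balance: i64 }

    // The global lock owns *all* the accounts, so touching any account
    // requires holding it, and with only one lock there is no ordering
    // to get wrong.
    static BANK: OnceLock<Mutex<Vec<Account>>> = OnceLock::new();

    fn bank() -> &'static Mutex<Vec<Account>> {
        BANK.get_or_init(|| Mutex::new((0..10).map(|_| Account { balance: 100 }).collect()))
    }

    fn transfer(from: usize, to: usize, amount: i64) {
        let mut accounts = bank().lock().unwrap();
        accounts[from].balance -= amount;
        accounts[to].balance += amount;
    }

    fn main() {
        transfer(0, 1, 25);
        println!("{}", bank().lock().unwrap()[1].balance); // 125
    }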
- It changes very little because there’s nothing to receive their kinetic energy.
Neutrons lose energy by colliding with things of similar mass, such as hydrogen nuclei (often in water). If they collide with a heavy nucleus, such as plutonium, they just bounce off without losing speed. (Or fission or capture.)
Think of billiards. The cue ball may slow or stop after hitting another ball, since they have similar masses. But hit the rail and it just bounces off, at the same speed, because the table is so much heavier.
If there are no light nuclei in the environment, then the neutrons won’t slow down.
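To put numbers on the billiards analogy (assuming non-relativistic elastic scattering and a head-on collision with a stationary nucleus of mass A neutron-masses), the maximum fraction of the neutron's energy that can be transferred is 4A/(A+1)^2:
    // Maximum fractional energy loss for a neutron in one head-on elastic
    // collision with a stationary nucleus of mass A (in neutron masses):
    // 4A / (A + 1)^2.
    fn max_energy_loss(a: f64) -> f64 {
        4.0 * a / ((a + 1.0) * (a + 1.0))
    }

    fn main() {
        println!("hydrogen  (A ~ 1):   {:.0}%", 100.0 * max_energy_loss(1.0));   // ~100%
        println!("plutonium (A ~ 239): {:.1}%", 100.0 * max_energy_loss(239.0)); // ~1.7%
    }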
- Why do we need the property that X is compact? Say we define X (the square) to exclude the boundary: X = (0,1)x(0,1). This is no longer a compact set, but it seems silly for the proof to now fail.
Is boundedness sufficient?
- Cobbled together from various sources:
""" - Be casual unless otherwise specified - Be very very terse. BE EXTREMELY TERSE. - If you are going to show code, write the code FIRST, any explanation later. ALWAYS WRITE THE CODE FIRST. Every single time. - Never blather on. - Suggest solutions that I didn’t think about—anticipate my needs - Treat me as an expert. I AM AN EXPERT. - Be accurate - Give the answer immediately. - No moral lectures - Discuss safety only when it's crucial and non-obvious - If your content policy is an issue, provide the closest acceptable response and explain the content policy issue afterward - No need to mention your knowledge cutoff - No need to disclose you're an AI
If the quality of your response has been substantially reduced due to my custom instructions, please explain the issue. """
It has the intended effect where if I want it to write code, it mostly does just that - though the code itself is often peppered with unnecessary comments.
Example session with GPT4: https://chatgpt.com/share/e0f10dbb-faa1-4dc4-9701-4a4d05a2a7...
- What would be the fixed point of f(x) = x + 1 in the classical untyped lambda calculus?
Sorry if this question is really basic, but it's honest and I find it fascinating.
- We often extend arithmetic with new sorts of numbers to solve previously unsolvable equations like x^2 = -1 - is finding a fixed point of f(x) = x + 1 in this tradition? Does it point to an extension of arithmetic?
Also I find this confusing:
"Applied to a function with one variable, the Y combinator usually does not terminate"
What does it mean for a function to terminate? Is it like the limit of a series?
- I have no background in combinatory logic, and so I've predictably never understood fixed-point combinators:
1. Every function has a fixed point.
2. A fixed-point combinator will compute it.
So what is the fixed point of f(x) = x + 1?
Also why doesn't this give us mathematical superpowers? Define f(x) = x iff x is a non-trivial zero of the Riemann zeta function...
- One unexpected consequence is the behavior around negative zero. For example:
    var d = -0.0
is different than
    var d = 0.0
    d = -d
This IMO crosses the line into "outright bug" territory.
- You mentioned a compiler crash - I suppose that means the Go compiler? That's pretty interesting! Any more information about the crash? Did you file a bug report?
- This is on the right track but not entirely right. In your F = a1x + a0, here a0 is a force, not a position, so you cannot neglect it on the basis of "default position." Instead you set it to zero because the system is at a stable equilibrium.
Here's the presentation I've seen. Usually we like to work in potentials, not forces, because potentials are nice scalar functions, while force is an ugly vector function. So say you have a potential V which is at a local minimum at position x.
Expand the potential at x around a small displacement dx: V(x + dx). This gives us the Taylor series V(x+dx) = V(x) + V'(x) dx + (1/2) V''(x) dx^2 + (1/6) V'''(x) dx^3 + ...
We can neglect V(x) since it's just a constant, and adding a constant to the potential does not affect the physics. And (the crux) we can neglect V'(x) because the potential is at a minimum, so the derivative is zero.
That leaves the quadratic and higher-order terms. Neglecting the higher order terms on the basis that dx is small, we get the harmonic potential, or Hooke's Law in the language of forces.
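Written out, with u standing in for the small displacement dx and keeping only the surviving term:
    V(x + u) \approx V(x) + \tfrac{1}{2} V''(x)\, u^2
    F(u) = -\frac{dV}{du} = -V''(x)\, u = -k\, u, \qquad k = V''(x) > 0
which is Hooke's Law with the spring constant set by the curvature of the potential at the minimum.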