- Just a note that you can use opencode with their API gateway (they call it "zen") to get access to all the most popular models using a single account, including Gemini. (Although this wouldn't have helped the author, since they wanted to try the Gemini CLI specifically.)
- Yes, I only use my own keys. It even lets you use your Claude Max subscription.
- I'm using opencode which I think is now very close to covering all the functionality of claude code. You can use GPT5 Codex with it along with most other models.
- Shameless plug for my super simple consistent-hashing implementation in clojure: https://github.com/ryuuseijin/consistent-hashing
- I'm using tsx for a project to achieve the same effect. As you said, it saves you from having to set up a build/transpilation step, which is very useful for development. Tsx has a --watch feature built in as well, which lets me run a server from the TypeScript source files and automatically restart on changes. Maybe with nodemon and this new Node improvement this can now be done without tsx.
To check types at runtime (if that can even be done in a useful way?) it would have to be built into V8, and I suppose that would be a whole rewrite.
- It's called opencode: https://opencode.ai/
- I've switched to opencode. I use it with Sonnet for targeted refactoring tasks and Gemini to do things that touch a lot of files, which otherwise can get expensive quickly.
- Shameless plug of my consistent hashing implementation in about 50 lines of clojure: https://github.com/ryuuseijin/consistent-hashing
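For anyone unfamiliar with the technique, here is a rough sketch of the core idea in Python (the repo itself is Clojure; all names here, like `ConsistentHashRing`, are hypothetical and not the repo's API):

```python
import bisect
import hashlib

def _position(key: str) -> int:
    # Stable 32-bit position on the ring. md5 keeps it deterministic
    # across processes, unlike the builtin hash().
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

class ConsistentHashRing:
    def __init__(self, nodes=(), replicas: int = 100):
        # Each physical node gets `replicas` virtual points on the ring,
        # which evens out the key distribution across nodes.
        self.replicas = replicas
        self._ring = []  # sorted list of (position, node)
        for node in nodes:
            self.add(node)

    def add(self, node: str) -> None:
        for i in range(self.replicas):
            bisect.insort(self._ring, (_position(f"{node}#{i}"), node))

    def remove(self, node: str) -> None:
        self._ring = [(p, n) for (p, n) in self._ring if n != node]

    def lookup(self, key: str) -> str:
        # Walk clockwise from the key's position to the first virtual node.
        if not self._ring:
            raise KeyError("empty ring")
        i = bisect.bisect(self._ring, (_position(key),)) % len(self._ring)
        return self._ring[i][1]
```

The point of the structure is that adding or removing a node only remaps the keys that node owned; keys owned by the surviving nodes keep their owners.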
- Search for A100 on this page: https://fly.io/docs/about/pricing/
- My heart stopped for a moment when reading the title. I'm glad they haven't decided to axe GPUs, because fly GPU machines are FANTASTIC!
Extremely fast to start on demand, reliable, and although a little pricey, not unreasonably so considering the alternatives.
And the DX is amazing! It's just like any other fly machine, no new set of commands to learn. Deploy, logs, metrics: everything just works out of the box.
Regarding the price: we've tried a well known cheaper alternative and every once in a while on restart inference performance was reduced by 90%. We never figured out why, but we never had any such problems on fly.
If we're using a cheaper "Marketplace" to run our AI workloads, I'm also not really clear on who has access to our customers' data. No such issues with fly GPUs.
All that to say, fly GPUs are a game changer for us. I could only wish for lower prices and more regions; otherwise the product is already perfect.
- Location: Sydney, Australia
Remote: no
Willing to relocate: yes (Japan)
Technologies: Clojure, TypeScript, JavaScript, AWS, Web
Résumé/CV: available on request
Github: https://github.com/ryuuseijin
Email: spiderbeetle@fastmail.com
----
My last position was at Atlassian, working on various backend systems; specifically, I developed (in Clojure) the OT-based synchronization engine behind Confluence's collaborative editing feature.
I'm looking to join a company in Japan and would need visa sponsorship.
- If you look further down in the thread you can see how someone asks whether he saw any actual photos and his reply (which is what I posted).
- Later, in response to a question whether he has actually seen any photos of other people he says:
"No, just their accounts are listed. I am not able to see any photos - not even my own."
- What's interesting is that even though the value of, for example, Ethereum is going down, the usage statistics seem to be much healthier, specifically transaction volume [1] and address growth [2].
[0] https://etherscan.io/charts
- Btw, Wave-style OT needs a central server, but there are other forms of OT that do not; they mention this in your link. If your OT system satisfies the TP2 property, you can do n-n synchronization and still have what you'd call an OT system.
- Thanks, I did address that when I said you can solve it if you know all actors in an n-n system, but I should have been clearer by pointing out the solution, which is (as you said) every known actor acknowledging it.
In an unbounded n-n system I still don't see a solution.
- Google Wave is an example of what works well with a central server: you have many documents that can be sharded across many instances, because each document only needs to be consistent with itself. Your throughput limit is therefore only per-document, which is more likely to be bounded by the number of participants that can practically interact in a single document.
There is still the issue of a node becoming hot because there is an unusually active document on that node, which would usually not happen in a completely decentralized model.
- What you describe looks a lot like a Google Wave like OT system. Wave-style OT is eventually consistent, like CRDTs, but you need a central server to give the event history a total order. This is necessary because Wave-style OT is a 1-1 model: clients are 1-1 connected with the server, but not with each other, which would be n-n, which is what CRDTs can do.
The total order of the central server can make the system simpler and more efficient, but by itself it doesn't solve the problem that Wave has, which is allowing a client to edit his text/message without being interrupted by network latency/interruptions -- imagine typing a letter and having to wait for the server to acknowledge that keypress with >100ms latencies. To solve this problem, you still need some form of xform/merge algorithm that OT and CRDT systems provide.
EDIT: I assumed you were not familiar with OT systems since you didn't mention it in your post, but now that I followed your link I can see that you are. In that light, it seems your comment is more a question about what the tradeoffs are between OT and CRDT systems rather than whether a central server can solve all problems without xform/merge logic.
One tradeoff that comes to mind when thinking about OT and CRDT systems is in the way operations track locations in the datastructure. In OT systems you have offsets (small), in CRDTs you have uuids (large) or dynamically growing identifiers (usually small but possibly large). This has implications for the byte-size of operations or the in-memory datastructure.
Another is that CRDTs have a pruning problem. It has been some time since I looked at CRDTs, but I remember that Wave-style OT didn't have the same problem due to the central server. The pruning problem can cause a CRDT to grow larger than it needs to by forcing it to keep more historic data around just in case it gets an old operation it hasn't seen yet. The central server solves this problem by guaranteeing that it will have sent you all old operations before sending you a newer one. If you know all actors in a n-n system you can also solve this issue, but in an unbounded n-n system I didn't see any way this issue can be solved when I was researching it.
EDIT2: Just want to add that there are lots of other problems that are more practical than theoretical. For example authorization, the authoritative copy of the data, a REST API, things like that, but that would depend more on the exact use case.
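To make the xform idea above concrete, here is a toy insert-only sketch in Python showing the convergence (TP1) property. This is purely illustrative, not Wave's actual algorithm: real OT also handles deletes, and ties are broken by something like a site id rather than a hard-coded flag.

```python
# An op is (position, char): insert `char` at `position`.

def apply_op(doc: str, op) -> str:
    pos, ch = op
    return doc[:pos] + ch + doc[pos:]

def xform(op, against, tie_left=False):
    """Shift `op` so it can be applied after `against` has already been applied.
    On equal positions, `tie_left` decides which op keeps its place; the two
    sides must agree on the priority (in practice, via site ids)."""
    pos, ch = op
    apos, _ = against
    if pos > apos or (pos == apos and not tie_left):
        pos += 1
    return (pos, ch)

# Two clients concurrently edit "abc": one inserts "X" at 1, the other "Y" at 2.
a, b = (1, "X"), (2, "Y")
doc = "abc"
# Each side applies its own op first, then the transformed remote op.
left  = apply_op(apply_op(doc, a), xform(b, a, tie_left=False))
right = apply_op(apply_op(doc, b), xform(a, b, tie_left=True))
assert left == right == "aXbYc"  # both replicas converge (TP1)
```

TP2 is the stronger requirement mentioned upthread: transforming against two concurrent ops must give the same result regardless of the order in which you transform. Insert-only character ops like these satisfy it; getting it right with deletes is where most OT algorithms historically went wrong, which is why a central server imposing a total order is so convenient.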
- I was in a similar situation, although I got more than 3 hours of sleep. I don't think 3 hours of sleep per day is sustainable; you have to find another solution. Any work you do on 3 hours of sleep is probably not great anyway.
I don't know your specific circumstances, but I have used low-dosage Armodafinil and Melatonin before with success to get by with little sleep (didn't keep track, but I think at least 6 hours per day). If you are in the US you will need a prescription for Armodafinil, but if you explain your situation to the doctor it is easy to get one. Armodafinil can be imported cheaply from India. Melatonin is over-the-counter.
Hang in there and good luck!
- Yes, it gives you short circuiting on nil values. The post mentions `some->` and laments its inability to handle varying function signatures.
You can combine `some->` with other threading macros to make it achieve the desired effect, and you can also achieve the desired effect with `if-let` as the post demonstrates, but I believe the b/cond approach to be more readable than both.
- There is a neat macro I've been using to solve problems like the nested if-lets in the post:
https://github.com/Engelberg/better-cond
(b/cond
  :let [x (foo)]
  (not x) (do (log "foo failed") false)
  :let [y (bar x)]
  (not y) (do (log "bar failed") false)
  :let [z (baz x y)]
  (not z) (do (log "baz failed") false)
  :else (do (qux x y z) (log "it worked") true))