- They must have been pretty damn confident of the results to give depressed children a placebo.
- What term would you suggest instead? Agent-assisted slop?
- You can see that it is[0].
- I suspect they are referencing the 950tok/s claim on Cognition's page.
- I like Lucene and have used it for many years, but sometimes a conceptually close match is what you want. Lucene and friends are fantastic at word matching, fuzzy searches, stem searches, phonetic searches, faceting and more, but have nothing for conceptually or semantically close searches (I understand that they recently added document vector searches). Also, vector searches almost always return something, which is not ideal in a lot of cases. I like Reciprocal Rank Fusion myself as it gives the best of both worlds. As a fun trick I use DuckDB to do RRF with 5 million+ documents and get low double-digit ms response times even under load; a rough sketch of the idea is below.
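The sketch (plain Python rather than my actual DuckDB query; the two result lists and the k = 60 constant are just assumptions for the example) shows what RRF boils down to: sum 1/(k + rank) for each document across the keyword and vector rankings, then sort.
```python
# Reciprocal Rank Fusion over two (or more) ranked lists of document ids.
# k dampens the influence of top ranks; 60 is the commonly used default.
def rrf(rankings, k=60):
    scores = {}
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists: keyword (Lucene-style) hits vs. vector hits.
keyword_hits = ["doc7", "doc2", "doc9"]
vector_hits = ["doc2", "doc5", "doc7"]
print(rrf([keyword_hits, vector_hits]))  # docs appearing in both lists rise to the top
```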
- Next up: A start-up that spins up AI instances to negotiate against AIs trying to negotiate hospital bills down!
- In either case, I am just happy that Knuth's idea of Literate programming is becoming more and more of a common reality after so many years.
- lol, yes. In my case the "govt. is the cause of all bad incentives and just turbo-fucks the customers/markets with laws and regulations" came before all the tools that one might use to make profit. Rent-seeking was also lionized (e.g. event-ticket "middle-men" who purchased all the tickets and raised the prices) as fundamentally necessary to the markets, along with strategies like when to use "dirty marketing" against opponents, and of course there were the required ethics courses to round out the education.
Interestingly, the claim about competing on price was that it would just inevitably lead to everyone lowering their price to zero marginal cost, so you should find other ways to differentiate yourself, or use IP to sue others to keep them from competing.
- >Business school will tell you constantly that best quality is where you want to compete in almost all cases.
Hmmmm, I don't remember it that way. I remember the constant take was to build a moat (typically based on intellectual property), then optimize net profit and/or network effects. Quality never really came up unless it was so bad as to cause lawsuits.
- Along with the other commenter, the reason the dictionary would start getting so big is that a word with a stem would have all of its variations become different tokens (cat, cats, sit, sitting, etc.). Also, any out-of-dictionary words or combo words, e.g. "cat bed", could not be addressed at all; a toy sketch is below.
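Here is a rough illustration (the vocabulary is made up for the example; real tokenizers use subword units precisely to avoid this blow-up):
```python
# Toy word-level vocabulary: every inflection needs its own entry,
# and anything unseen has no id at all.
vocab = {"cat": 0, "cats": 1, "sit": 2, "sitting": 3}

def word_tokenize(text):
    return [vocab.get(word, "<unk>") for word in text.split()]

print(word_tokenize("cats sitting"))  # [1, 3] -- a separate id for each inflection
print(word_tokenize("cat bed"))       # [0, '<unk>'] -- "bed" was never in the dictionary
```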
- As a whole, in the US, people don't even want people to have free healthcare. What do you think the chances are that they want people to have "free" money?
- There is no possible way that the board would let the 30 cents that was saved be freely "given" to the customer.
- >why you'd do either instead of just serial/bigserial, which has always been my goto. Did I miss something?
So the common response is sequential ID crawling by bad actors. UUIDs are generally unguessable, and you can throw them into slop DBs like Mongo or storage like S3 as primary identifiers without worrying about permissions or having a clever interested party pwn your whole database. A common case of security through obscurity; a rough illustration is below.
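Rough illustration of the difference (purely illustrative; random identifiers are not a substitute for real access control):
```python
import uuid

# A serial primary key is trivially enumerable: if /orders/41 exists,
# /orders/42 is a good next guess. A random UUIDv4 carries ~122 bits of
# randomness, so guessing a valid identifier is impractical.
serial_id = 41
next_guess = serial_id + 1

random_id = uuid.uuid4()
print(next_guess, random_id)
```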
- The blue-purple gradient alone is a dead giveaway[0].
[0] https://www.youtube.com/watch?v=AG_791Y-vs4 (The AI Purple Problem)
- I like it!
Quick question, is the Day Pong a Master of Karate and friendship For Everyone?
- >Yes, you could write the code yourself, but keep in mind that this activity is going away for most engineers (but not for all) in 1 - 2 years.
I'm not saying that it definitely isn't going to happen, but there is a loooong way to go for non-FAANG medium and small companies to let their livelihoods ride on AI completely.
>I think a better advice would be to learn reading/reviewing an inordinate amount of code, very fast. Also heavy focus on patterns, extremely detailed SDLC processes, TDD, DDD, debugging, QA, security reviews, etc...
If we get to a point in 1-2 years where AI is vibe-coding at a high mostly error-free level, what makes you think that it couldn't review code as well?
- There is a loophole where companies hire H1B-type employees in their offices in another country and then move them to the US offices.
- No one is going to give up token-based pricing. The main players can twiddle their models to make anything cost any number of tokens they choose.
- This particular one may not work on M chips, but the model itself does. I just tested a different sized version of the same model in LM Studio on a Macbook Pro, 64GB M2 Max with 12 cores, just to see.
Prompt: Create a solar system simulation in a single self-contained HTML file.
qwen3-next-80b, 4-bit (MLX format, 44.86 GB): 42.56 tok/sec, 2523 tokens, 12.79s to first token
- note: looked like ass, simulation broken, didn't work at all.
Then as a comparison for a model with a similar size, I tried GLM.
GLM-4-32B-0414-8bit (MLX format, 36.66 GB): 9.31 tok/sec, 2936 tokens, 4.77s to first token
- note: looked fantastic for a first try, everything worked as expected.
Not a fair comparison (4-bit vs. 8-bit), but it's some data. The tok/sec on a Mac is pretty good, depending on the models you use.
- >There's significantly less shoplifting now on average than there was in the '80s or '90s.
Possibly, but are you seriously comparing now to the height of the crack epidemic in the US?