- Can you provide a historical (25+ years) chart of reservoir levels in Cyprus or any EU country? Otherwise let me assume you just fell for a sensationalist article.
- Italy, Spain... e.g. https://embalses.net shows maximum historical levels after a sharp bump these past 2 years... pretty sure it's the same story in many EU countries. Droughts are there until they aren't. They're normal fluctuations if you check an actual chart going 50-100 years back.
Can you post a historical chart for Cyprus? Maybe it tells a different story.
- > It's a crisis measured in hard numbers: reservoir levels, rainfall data, aquifer depletion rates.
Of course I went and checked the actual numbers from official sources, and they tell a different story: reservoir levels near historical maximums. So much for building an article on "hard numbers" without pointing to sources.
- Quite good, though it would sound much better with SOTA voices.
- I've been using this one with Cursor the past few months...
- Gave it a go for several projects, but didn't like it... for big projects it gets messy, fast. It also feels like it has become the new Bootstrap.
I'm very happy with my current CSS-in-JS workflow: crafting good old CSS with LLM help. You just show the LLM a pic and ask for the components... boom, done (with proper naming, etc.).
- Hopefully this makes the Cursor team reconsider security (which doesn't seem very good, really).
I stopped using it for serious stuff after I noticed their LLM grabs your whole .env file and sends it to their servers... even after you add it to .cursorignore. Bizarre stuff.
Now imagine a bad actor exploiting this... recipe for disaster.
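For context, .cursorignore is supposed to take gitignore-style patterns, so keeping secrets out of context should be as simple as something like this (a minimal sketch; the exact patterns are up to you):

```
# .cursorignore — gitignore-style patterns Cursor should exclude from indexing/context
.env
.env.*
secrets/
*.pem
```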
- Yeah, it shouldn't be too difficult to build this with Python. I wonder why none of the popular routers like https://github.com/BerriAI/litellm have this feature.
> Problem is running so many LLMs in parallel means you need quite a bunch of resources.
Top-of-the-line MacBooks or Mac Minis should be able to run several 7B or even 13B models without major issues. Models are also getting smaller and better. That's why we're close =)
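As a rough sketch of what "several models in parallel" looks like, here's a fan-out against an OpenAI-compatible local server (Ollama exposes one, for example); the endpoint and model names are assumptions, not a specific setup:

```python
# Sketch: fan one prompt out to several small local models in parallel.
# Assumes an OpenAI-compatible server (e.g. Ollama) on localhost:11434;
# the model names are placeholders for whatever is pulled locally.
import concurrent.futures

import requests

ENDPOINT = "http://localhost:11434/v1/chat/completions"
MODELS = ["mistral:7b", "codellama:13b", "llama2:7b"]  # hypothetical

def ask(model: str, prompt: str) -> str:
    resp = requests.post(
        ENDPOINT,
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    prompt = "Summarise the tradeoffs of running LLMs locally."
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {pool.submit(ask, m, prompt): m for m in MODELS}
        for fut in concurrent.futures.as_completed(futures):
            print(f"{futures[fut]} -> {fut.result()[:100]}")
```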
- We're nearing a point where we'll just need a prompt router in front of several specialised models (code, chat, math, SQL, health, etc.)... and we'll have a local Mixture of Experts kind of thing.
Is any project working on something similar to this?
1. Send the request to a router running a generic model.
2. The prompt/question is deconstructed, classified, and proxied to expert(s) xyz.
3. Responses come back and are assembled by the generic model.
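A minimal sketch of those three steps, assuming an OpenAI-compatible local server and a naive keyword classifier standing in for the generic model's deconstruction step; every model name here is hypothetical:

```python
# Sketch of the router idea: classify the prompt, proxy it to an
# "expert", then let the generic model assemble the final answer.
# A naive keyword match stands in for steps 1-2; in a real version
# the generic model itself would do the deconstruction/classification.
import requests

ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed local server
GENERIC = "mistral:7b"  # hypothetical generic model
EXPERTS = {  # hypothetical expert models
    "code": "codellama:13b",
    "sql": "sqlcoder:7b",
    "math": "wizardmath:7b",
}

def complete(model: str, prompt: str) -> str:
    resp = requests.post(
        ENDPOINT,
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def classify(prompt: str) -> str:
    # Steps 1-2: deconstruct and classify the prompt (naive stand-in).
    lowered = prompt.lower()
    return next((topic for topic in EXPERTS if topic in lowered), "chat")

def route(prompt: str) -> str:
    expert = EXPERTS.get(classify(prompt), GENERIC)
    draft = complete(expert, prompt)  # step 2: proxy to the expert
    # Step 3: the generic model assembles/polishes the expert's draft.
    return complete(GENERIC, f"Polish this answer for the user:\n\n{draft}")

if __name__ == "__main__":
    print(route("Write a SQL query that counts users per country."))
```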