- That's neat. Not sure if I would deploy it, as it will be hard to explain/teach people how to use it (as I see in other comments already), but I do see the value in it.
It solves the "drag and drop beyond what fits on the screen" problem much better than regular drag and drop with its awkward auto-scroll-when-nearing-the-edge behavior.
I would say that if you need to reorder many items it gets a bit disorienting, since the whole list moves while it's anchored to the item you are moving. Maybe there is a way to combine it with drag and drop, where this kicks in once you go beyond the bounds of the visible area.
I also don't think this can work well with multiple axes/drop zones.
- I forgot the most important reason! Data for ads.
Delivering ads is based on data about you, so that you get the most effective ads. Your browsing data is really valuable in that sense.
If you read about the privacy controls in Chrome, you get a pretty good idea of what they collect:
> Your topics of interest are noted by Chrome and are based on your recent browsing history. Sites can also store info with Chrome about your interests. As you keep browsing, Chrome may be asked to share stored info about ad topics or site-suggested ads to help give you a more personalized ad experience. To measure the performance of ads, limited types of data can be shared among sites and apps.
https://support.google.com/chrome/answer/13355898
It's all about ads.
- Imo it's pretty transparent: it's indeed all about ads. 70-80% of Google's revenue comes from ads, and that says it all.
First of all, because of search: if you type something into your browser's search bar and that takes you to Google, you see all those ads and Google makes a lot of money.
Second, because the browser is the entry point to the web. If you browse the web, the chance that you come across Google AdSense ads is very high; in other words, if you browse the web, Google makes money.
Browsers can control what you see: they can have ad blockers, they can replace ads (like the shady business Brave tried at some point), but they can also change the extension API so ad blockers are less effective (see Manifest V3).
Conclusion: controlling how people browse the internet is highly valuable as a direct money maker (search ads), but also to make sure nobody but you can mess with 70-80% of your revenue. That alone is worth every dollar they spend on it.
Microsoft has Bing (but Edge is also based on Chromium, so less investment). Apple needs a browser for its devices, and gets $20B from Google to make it the default search engine (again, if Google can serve more ads, it makes more money). I don't know if Safari is well funded; it lags behind a bit currently.
Edit: Apple also has its own motivations, btw. They have been lagging on implementing a lot of the features in Safari on iOS that would make web apps capable of replacing native apps, and the App Store is something Apple makes tons of money on... If you allow other browsers, you don't control that, so you need your own.
The second reason Google might want to fund Mozilla (and Safari to some extent) is to keep regulators happy. Being able to say "no no, it's not a monopoly, see!" is quite useful.
Idk if there is more data, but imo all you have to do is look at the financials, and it's pretty obvious that it's all about serving ads, billions of dollars in ads, directly or indirectly.
- Advertising, search partnerships, and premium subscriptions, afaik.
These things are publicly available:
- Opera https://investor.opera.com/news-releases/news-release-detail...
- Mozilla (largely funded by google) https://en.wikipedia.org/wiki/Mozilla_Corporation
- Brave https://brave.com/blog/100m-mau/ (ads, search api, premium subscription, and their crypto thing)
- Browser company: not sure, I think they have a subscription, but I assume they still mostly run on VC money.
Chrome, Safari, and Edge are funded by their parent companies. I believe Google also pays Apple $20B to be the default search engine in Safari and on iOS.
So you could make the argument that Google pays for browsers: a lot of browsers run on Chromium, owned and funded by Google (although technically open source), except for Apple's and Mozilla's, who get search money from Google instead.
- This seems really nice. I wasn't aware of Hack Club, but that looks like a wonderful construction and organization.
In a world of VC-backed open source projects with big profit motivations, it's refreshing to see things like this. Definitely going to give Ghostty another try!
- Wait, so there is one example, which shows the R and Python equivalents are pretty much the same..
I was all hyped up, ready to see the amazing examples and arguments that would convince me to pick up R, and it gave me absolutely nothing (except quotes and brackets..).
Disappointing.
- You might be able to port it fairly easily, depending on the browser extension api's you are using.
The WebExtensions API is converging and a lot of it is already somewhat standardized https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...
There are just some different fields in the manifest, plus some specifics that work completely differently or are not available (for example favicons).
I have tried Chrome -> Firefox before and it was surprisingly easy. Safari is more difficult in my experience; it's missing entire APIs, like the bookmarks one.
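One practical difference when porting is the namespace: Firefox exposes a promise-based `browser` object, while Chrome exposes `chrome`. A minimal sketch of a helper (the function name is made up) that picks whichever one exists:

```javascript
// Hypothetical helper: return whichever WebExtensions namespace is
// available on the given global object. Firefox defines `browser`
// (promise-based); Chrome defines `chrome` (promise-friendly in MV3).
function pickExtensionApi(globalObj) {
  return globalObj.browser ?? globalObj.chrome ?? null;
}

// In an extension you'd call it with the real global:
// const api = pickExtensionApi(globalThis);
// api.tabs.query({ active: true }) then works in both browsers.
```

Libraries like webextension-polyfill do a more thorough version of this, including wrapping Chrome's callback-style APIs in promises.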
- I think the glaring issue underlying this is that the big companies are not investing enough in the tools they rely on.
I agree with some of the arguments that patching up vulnerabilities is important, but it's crazy to put that expectation on unpaid volunteers when you flood them with CVEs, some of them completely irrelevant.
Also, the solution is fairly simple: either you submit a PR instead of an issue, or you send a generous donation along with the issue to reward and motivate the people who do the work.
The revenue these companies generate using these packages would easily offset the cost of funding the projects, so I really think it's fair to expect them to contribute either work or funds.
- There is a premium plan for the AI features, so that's the strategy, which does make some sense; I bet a lot of people will want to have those features.
- It's a smart approach imo. They had to introduce a subscription somehow to support the AI features they need to compete (usage-cost wise alone, you can't do that on a one-time license fee).
But since they promised not to go subscription when they were acquired by Canva, making the app free with AI as the subscription is a clever way to avoid breaking their promise while still introducing a subscription model.
I think their bet is enough people will want the AI, which I think is correct.
As a long time Affinity user, first reaction was: "see, there is the subscription", but on second thought, fair enough, well played. I'll probably get the AI subscription as well.
I do wonder if over time more features will go into that premium plan, but we'll see.
Edit: It seems like some of the AI stuff runs on device; they are not very clear about what does or doesn't. That changes my opinion a bit, as then it's just a straight-up freemium subscription model.
- It looks very similar to what they already had. If you had all three, they were already integrated; you could just switch between the different editing modes.
- But then you don't need uv. The pain point uv solves is projects: different projects with different dependencies (or even the same ones at different versions), and multiple people, teams, and environments trying to run the same code.
That gets problematic if environments go out of sync, or you need different versions of python or dependencies.
So you are right, you probably won't benefit a lot if you just have one big environment and that works for you, but once you pull things into a project, uv is the best tool out there atm.
You could also just create a starter project that has all the things you want, and pull it out later on; that would amount to the same thing.
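As a rough sketch of what "a project" means here: uv keeps the declared dependencies in a pyproject.toml (edited via `uv add`/`uv remove`) and pins exact resolved versions in a uv.lock file, so everyone running `uv sync` gets the same environment. The names below are illustrative:

```toml
# pyproject.toml (managed by uv; project name and deps are examples)
[project]
name = "example-project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "requests>=2.32",
]
```

The matching uv.lock records the exact version of every transitive dependency, which is what keeps multiple machines and teammates in sync.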
- Seeing this kinda stuff makes me want to keep my physical license and ID. No need for digital ones, I'm good with the cards.
- Besides the obvious issues at hand, it's kinda ironic that they publish this on GitHub; EU tech independence is going great.
- Haha, that’s a good point. I guess it's another sign that they really have no clue what they are doing.
- Maybe we should scan their communications for corruption and undue influence. I'm sure it's all above board, so it should be fine if we get an independent group to review them, right? (Just following their reasoning...)
- One interesting line in the proposal:
> Detection will not apply to accounts used by the State for national security purposes, maintaining law and order or military purposes;
If it's all very safe and accurate, why is this exception necessary? Doesn't it say either that it's not secure, or that there is a likelihood of false positives that will be reviewed?
If they had it all figured out, this exception would not be necessary. The reality is that it isn't secure, as they are creating backdoors in the encryption, and they will flag many communications incorrectly. That means a lot of legal private communications will leak and/or be reviewed by the EU, which has absolutely no business looking into them.
It's absurd that they keep trying this ridiculous plan over and over again.
I also wonder about the business implications. I don't think we can pass compliance if we communicate over channels that are not encrypted. We might not be able to do business internationally anymore as our communications will be scanned and reviewed by the EU.
- "The science": I don't agree with this part, and I think it's quite dangerous to rope that in.
Science is not one way of thinking; it's a methodology, it's seeking truth. There might be bad actors and idiots, and there is likely lots wrong, but the beautiful thing about science is that facts matter. If someone publishes bullshit, you can repeat the study and prove them wrong.
That science is (wrongfully) taken as justification for stupid things is not on "the science" as a whole.
If anything makes me hopeful, it is science and the remarkable developments happening.
- I'm not assuming anything, I work in software development. In this industry we spend ungodly amounts of time and resources to attempt to keep data safe, and create systems like the ones proposed to flag and handle malicious activity of many kinds. I think I know quite well how hard it is, and how easy it is to get it wrong, with potentially very real consequences.
The only things being handwaved away are the collateral damage, the side effects, the very real risks, and the concerns about the effectiveness of the proposed solutions.
- This! Very well explained.
- I lost count of how many times the "let's get rid of encryption" plan has been tried and failed. It's truly ridiculous how these people don't understand anything about encryption and somehow still think this is a good idea.
How is it possible that after years of discussing plans like this, they still managed not to listen to anyone who knows anything about encryption and online safety?
Makes me really worried about the future. There is a lot going on in the world, and somehow they feel the need to focus on making our communications unsafe and basically getting rid of online privacy.
The goal they are trying to achieve is good, but the execution is just stupid and will make everyone, including and maybe especially the people they want to protect, less safe online.
The age verification thing is another example. All it does is send a lot of sensitive traffic over cheap or free VPNs (which might be controlled by foreign states). Great job, great win for safety!
- I think there are two cases:
1. Self hosting
2. Running locally on device
I have tried both, and find myself not using either.
For both, the quality is lower than the top-performing models in my experience. Part of it is the models; part might be the application layer (ChatGPT/Claude). It would still work for a lot of use cases, but it certainly limits the possibilities.
The other issue is speed. You can run a lot of things even on fairly basic hardware, but the token speed is not great. Obviously you can get better hardware to mitigate that, but then the cost goes up significantly.
For self hosting, you need a certain amount of throughput to make it worth having GPUs running. If you have spiky usage, you are either paying a bunch for idle GPUs or you have horrible cold start times.
Privacy wise: the business/enterprise terms of service of all the big model providers give enough privacy guarantees for all or at least most use cases. You can also get your own OpenAI infra on Azure, for example; I assume with enough scale you can get even more customized contracts and data controls.
Conclusion: quality, speed, price, and the fact that you can use the hosted versions even in privacy-sensitive settings.
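The throughput point can be made concrete with a toy back-of-envelope calculation. All prices and speeds below are made-up example figures, not real quotes:

```javascript
// Toy comparison: always-on rented GPU vs per-token hosted API.
// Every number here is a hypothetical example, not a real price.
const GPU_HOURLY_USD = 2.5;         // assumed GPU rental, $/hour
const TOKENS_PER_SEC = 50;          // assumed sustained generation speed
const API_USD_PER_1K_TOKENS = 0.01; // assumed hosted API output price

// What one fully-utilized GPU-hour of tokens would cost via the API.
function apiCostForFullHour() {
  const tokensPerHour = TOKENS_PER_SEC * 3600; // 180,000 tokens
  return (tokensPerHour / 1000) * API_USD_PER_1K_TOKENS;
}

// Utilization the GPU needs to match the API on cost.
// A value above 1 means the GPU never breaks even at these numbers.
function breakevenUtilization() {
  return GPU_HOURLY_USD / apiCostForFullHour();
}
```

With these example numbers the GPU costs more than the API even at 100% utilization, which is exactly why spiky usage hurts: every idle minute makes the gap worse.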
- Any decent-sized project will encounter breaking changes in dependencies.
The big frontend frameworks have great backward compatibility and usually provide codemods that automatically update your project.
If you install UI components and other libraries that might get abandoned or have breaking changes in major version updates, you might have to put in more effort, but that's no different in Go or Python.
- This is the answer. I thought this was fairly common knowledge; height is animated quite often (think dropdowns), no need for the overcomplication.
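For reference, a common version of the dropdown trick (class names here are made up) transitions max-height, since `height: auto` itself can't be transitioned:

```css
/* Collapsible panel: animate max-height instead of height,
   because `height: auto` is not an animatable value. */
.dropdown-panel {
  max-height: 0;
  overflow: hidden;
  transition: max-height 0.3s ease;
}
.dropdown-panel.open {
  max-height: 500px; /* must exceed the tallest expected content */
}
```

The max-height value just needs to comfortably exceed the real content height; setting it far too high distorts the animation timing.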
- Whisper large-v3 from OpenAI, but we host it ourselves on Modal.com. It's easy, fast, has no rate limits, and is cheap as well.
If you want to run it locally, I'd still go with Whisper; then I'd look at something like whisper.cpp https://github.com/ggml-org/whisper.cpp. It runs quite well.
- Love this, definitely rooting for this to get big!
I think the goal is great. My dream language is something "in between Go and Rust", Go but with more expressive types, Rust-light, something along those lines. This seems like it is hitting that sweet spot.
Imo Go gets a lot right when it comes to productivity, but the type system always annoys me a bit. I understand the choice for simplicity, but my preference is different.
Rust is quite enjoyable, especially when it comes to the type system. But, kinda the opposite of Go, it's a lot, maybe too much for me, and I frequently don't know what I'm doing haha. I also don't really need Rust-level performance; most things I do will run totally fine with GC.
So Go with some extra types, same easy concurrency, compilation and portability sounds like a winner to me.
- I kinda agree, but the article does not do a great job of defending the position. Who cares about docs?
Vibe coding, or just letting AI take the wheel, will work in some situations. It allows non-coders to do things they couldn't do before, and that's great. Just like spreadsheets, no-code tools, and integration tools like Zapier, this will fill a bunch of gaps and push up the threshold where you need to get software devs involved.
But as with all these solutions, there is a threshold where the complexity, error margin, and scale go beyond workable, and then you need to unfuck that situation and enforce correctness. And I think this will result in plenty of "oops, my data is gone" type problems.
If you know upfront that your project will get complex and/or needs to scale you might be best off skipping the vibe coding and just getting it right, but for prototypes, small internal tools, process "glue", why not.
It's not a replacement for software engineering as a whole (yet); it's just another tool in the toolbox, and imo that's great. Can I use it? No... I have tried, and it just doesn't work at all for bigger, more complex projects.
- Isn't it about release-notes.js? There are quite a few files in there that are obfuscated. So far I haven't found anything super bad; it looks like a sanity.io client of some sort, but there could be stuff hidden in there, as it seems like quite a big JS bundle.
- Then do it... you are free to experiment. Especially if it's just you on the team, you might be totally fine.
I will just say that any project that lasts that long will require maintenance work now and then. The issues you describe seem quite minor and are probably relatively easy to patch up. React provides codemods that do a lot of the heavy lifting, and your external widgets have updates or alternatives.
I would also add that you don’t have to use dependencies with react. It’s always good to be mindful of dependencies and limit them.
It’s the same thing, you choose dependencies to save you time, or you do it yourself.
My main point is that with your own framework you will also run into limitations, causing big rewrites taking days/weeks/months, and/or lots of upfront cost..
I think the solution is for one of the big companies with lots of money to acquire Tailwind. Specifically Vercel: they use it, their v0 thing uses Tailwind all over, they have bought a bunch of open source companies in the past, and they should have deep enough pockets. Last year they acquired Tremor, a UI library that uses Tailwind!
Makes perfect sense, let's get it done.