- BloondAndDoom: This is wild, never even heard of it, but it kind of tracks
- I still don’t know why all these dark patterns are simply not illegal. What happened to consumer rights? It’s such a widespread practice that I think we will look back at this at some point and say things akin to “how did we let people smoke on planes?” One of those things that’s utterly ridiculous in hindsight.
- Is this designed to appeal to people who don’t know HTML/CSS? I was watching the point-and-prompt video, and the majority of those changes would be quicker with simple HTML or a WYSIWYG editor (like Dreamweaver 20 years ago, if that’s still a thing).
I get how building components with AI would accelerate dev, but what’s the point of changing a button’s color to red via an LLM?
- I don’t know what’s wrong with these comments. This is the kind of smart design we want to see, and everyone is nitpicking.
Can we just have better things, or are we going to reject everything that’s not perfect and, by doing so, concede the whole point and just give up?
Well done OP for the right approach and your business. This has always been my approach to data security (when possible): when you don’t have the data, you don’t have to worry about its security.
Best of luck, ignore the naysayers.
- Yes it reads that way, and I guess that also means all previous purchases will be behind DRM.
1. Sell digital things that cost the same as physical copies
2. Make it so that customers don't even own them
3. Profit (no question marks in between)
What a mess. I've mostly stopped using Kindle/ebooks, but I still have Audible, which seems to suffer from the same problem.
- My problem here is this: products are designed with a vision. If you are designing with 2-3 visions it won’t be that good, and if you design with one vision (AI), then the non-AI version of the product will be an afterthought. This tells me the non-AI version of it will suffer (IMHO).
- I’m using my TV with all the stuff disabled (the ones it’s possible to disable), but even then I realize I don’t trust them and I don’t trust their choices, because they get to say sorry and are not held responsible.
I want a smart TV because I want to use my streaming services, but that’s it. I also want high-quality panels. Maybe the solution is high-quality TVs where you just stick in a custom HDMI device (similar to an Amazon Fire Stick) and use it as the OS. Not sure if there are good open-source options, since Apple seems to be another company that keeps showing you ads even if you pay a shitload of money for their hardware and software. Jobs must be turning in his grave.
- That’s better since it’s an honest offering, and I think they’d still lose money. It’s not like there aren’t enough streaming services to compete with.
- The irony of Amazon’s “empty chair in meetings” myth. When you don’t even listen to actual people advocating for customers, why would you care about an imaginary person?
- It’s not about predicting; it’s more about controlling and shutting down whoever doesn’t agree with you. We already know the majority of governments spy on and sabotage activists. Now imagine you can query for “extreme environmentalists who live in X” and whatever further filtering is needed.
- This, and even worse, I was thinking about my ChatGPT account. I was doing some research on a topic the government would deem “dangerous”, and I’m also an immigrant.
ChatGPT can be one of the best profiling tools, and imagine combining it with Google etc., which we know has been done before, so why not again, or maybe it’s already happening.
We are not that far away from border police reviewing my government-issued “suspiciousness level” and just auto-rejecting me.
I think this is the beginning of the end of privacy from the government. There will be a new tech movement (among hackers) focusing on E2E, local AI and all forms of disconnected private computing, but the general population is absolutely doomed.
- I mean, we already stopped caring about the dumb stock photos at the beginning of every blog post, so we already don't care about shit that's meaningless, yet it's still happening because there is an audience for it.
Art can be about many things; we have a lot of tech-oriented art (think about the demoscene). No one gives a shit about art that evokes nothing for them, therefore if AI evokes nothing, who cares; and if it does, is it suddenly bad because it's AI? How?
Actually, I think AI will force a good number of mediums to their logical conclusion: if what you do is mediocre and not original, and AI can do the same or better, then that's on you. Once you pass that threshold, that's when the world cherishes you as a recognized artist. Again, you can be an artist even if 99.9% of the world thinks what you produced is absolute garbage; that doesn't change what you do and what it means to you. Again, nothing to do with AI.
- I honestly cannot agree more with this, while still standing behind what I said on the parent comment.
As someone who's been in tech for more than 25 years, I started to hate tech because of all the things you've said. I loved what tech meant, and I hate what it became (to the point that I got out of the industry).
But the majority of these problems disappear if we talk about offline models, open models. Some of that has already happened, and we know more of it will; it's just a matter of time. In that world, how can any of us say "I don't want a good amount of the knowledge in the whole fucking world on my computer, without even having internet, paying someone, or seeing ads"?
I respect it if your stand is like a vegetarian saying "I'm ethically against eating animals"; I have no argument against that, it's not my ethical line but I respect it. However, beyond that point, what's the legitimate argument? Shall we make humanity worse by just rejecting this paradigm-shifting, world-changing thing? Do we think about the people who are going to be able to read any content in the world in their language, even if their language is a very obscure one that no one cares about or auto-translates? I mean, what AI means for humanity is huge.
What tech companies and governments do with AI is horrific and scary. However, governments will do it nonetheless, and tech companies will be supported by those powers nonetheless. Therefore AI is not the enemy; let's aim our criticism and actions at the real enemies.
- It's not only corps though; AI at this point includes many open models, and we'll have more of them as we go if needed. Just like how the original hacker culture was born and we got the open source movements, AI will follow.
When LLMs first took off, people were talking about how governments would control them, but anyone who knows the history of personal computing and hacker culture knew that's not the way things go in this world.
Do I enjoy corpos making money off of everyone's work, including obvious things like literally pirating books to train their models (Meta)? Absolutely not. However, you are blaming the wrong thing here: it's not the technology's fault, it's how governments are always corrupted and side with money instead of their people. We should be lashing out at them, not at each other, not at the people who use AI, and certainly not at the people who innovate and build it.
- I hear you; that's not a problem of AI but a problem of copyright and other stuff. I suppose they'd be enraged if an artist replicated their art too closely as well, rightly or wrongly. Isn't it flattery that your art is literally copied millions of times? I guess not when it doesn't pay you, which is a separate issue from AI in my opinion. Theoretically we could have models trained only on public domain work, which would have addressed that concern.
Just like you cannot put piracy back in the bag when it comes to movies and TV shows, you cannot put AI back into the bag it came from. Bottom line: this is happening (more like has happened), so now let's think about what that means and find a way forward.
A prime example is voice acting. I hear why voice actors are mad if someone can steal their voice. But why not work on a legal framework to sell your voice for royalties or whatever? I mean, if we can get that lovely voice of yours without you spending your weeks on it, and you're still compensated fairly, I don't see how this is a problem. And I know this is already happening, as it should.
- No doubt, but if your Starcraft experience against AI was "somehow" exactly the same as against human players, gave you the same joy, and you couldn't even say whether it was AI or other players, does that matter? I get this is kind of a Truman Show-ish scenario, but does it really matter? If the end results are the same, does it still matter? If it does, why? I get the emotional aspect of it, but in practice you wouldn't even know. Now, is AI at that point for any of these? Possibly not. We can tell it's AI right now in many interactions and art forms, because it's hollow and just "perfectly mediocre".
It's kind of like the sci-fi cliche: can you have feelings for an AI robot? If you can, what does that mean?
- I understand artists etc. talking about AI in a negative sense, because they don’t really get it completely, or it’s just against their self-interest, which means they subconsciously find bad arguments to support their own interest.
However, tech people who think AI is bad, or not inevitable, are really hard to understand. It’s almost like Bill Gates saying “we are not interested in the internet”. This is pretty much like being against the internet, industrialization, the printing press or mobile phones. The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me. I can only say that being against this is either self-interest or not being able to grasp it.
So if I produce something (art, a product, a game, a book), and if it’s good, and if it’s useful to you, fun to you, beautiful to you, and you cannot really determine whether it’s AI, does it matter? Like how does it matter? Is it because they “stole” all the art in the world? But somehow when a person is “influenced” by people, ideas and art in a less efficient way, we almost applaud it, because what else, reinvent the wheel again forever?
- Can someone help me out how to get started in this kind of coding setup?
I haven't written production code for the last 8 years, but I have about 17 years of prior development experience (ranging from C++ and full stack to .NET, PHP and a bunch of other stuff).
I've used AI at a personal level and know the basics. I used Claude/GitHub to help me fix and write some pieces of code in languages I wasn't familiar with. But it seems like people are talking about and deploying large real-world projects in short-"er" amounts of time. An old colleague of mine whom I trust mentioned his startup is developing code 3x faster than we used to develop software.
Is there a resource that explains the current best practices (presumably it's all new)? Where do I even start?
- There isn't much to understand, because I don't think he understands it or even thought about it for more than 5 seconds. If he did think about it for more than 5 seconds and still typed this tweet, that's worse.
- This is impressive in the worst way! There is a stark difference between the original YC management and the current one. I just cannot believe someone who has such a limited understanding of the industry & tech can be the CEO of YC. Following a leader like Paul Graham is practically impossible (and the rest of the founding team was surely very talented), but this is just horrible. Not sure if it matters whether he's good or bad in the short term, but after 5 years of this kind of management they'll absolutely lose their advantage.
It's ironic how YC became the Google/Microsoft of its industry.