lumenwrites
Joined 671 karma
Startup Founder, Full-Stack Web Developer, Writer.

My best projects (and a webdev portfolio): https://lumenwrites.dev


  1. Yaay, one step closer to the torment nexus.
  2. For a person so eager to psychoanalyze others, the author sure seems oblivious to his own biases.
  3. I have learned about YCombinator, hacker news, Paul Graham, and startups in general through one of his essays. I was first blown away by the brilliance and clarity of his writing, and only then did I learn that he's a prominent tech figure.

    So many years later, I still haven't read a better writer (except maybe Scott Alexander). So, at least from my perspective, if anyone has the authority to write about good writing, it's this guy.

  4. You gotta think about it in terms of cost vs. benefit. How much damage will a malicious AI do, vs. how much value will you get out of a non-neutered model?
  5. Is that really a contradiction? We all have our ideals, and we all fail to live up to them sometimes, because life can be brutal.
  6. Oh, sorry to hear that you have to deal with that!

    The way I get a sense of the progress is by using AI for what it's currently good at, using my human brain for the parts it's currently bad at, and comparing the result to doing the same work without AI's help.

    I feel like AI is pretty close to automating 60-80% of the work I would've had to do manually two years ago (as a full-stack web developer).

    It doesn't mean that the remaining 20-40% will be automated very quickly; I'm just saying that I don't see the progress getting any slower.

  7. I'm pretty sure you're wrong about at least 2 of those:

    For 3D models, check out blender-mcp:

    https://old.reddit.com/r/singularity/comments/1joaowb/claude...

    https://old.reddit.com/r/aiwars/comments/1jbsn86/claude_crea...

    Also this:

    https://old.reddit.com/r/StableDiffusion/comments/1hejglg/tr...

    For teaching, I'm using it to learn about tech I'm unfamiliar with every day, it's one of the things it's the most amazing at.

    For the things where the tolerance for mistakes is extremely low and human oversight is extremely important, you might be right. It won't have to be perfect (just better than the average human) for that to happen, but I'm not sure if it will.

  8. I'm pretty good at what I do, at least according to myself and the people I work with, and I'm comparing its capabilities (the latest version of Claude used as an agent inside Cursor) to myself. It can't fully do things on its own and makes mistakes, but it can do a lot.

    But suppose you're right, it's 60% as good as "stackoverflow copy-pasting programmers". Isn't that a pretty insanely impressive milestone to just dismiss?

    And why would it just get to this point, and then stop? We can all see AIs continuously beating the benchmarks, and the progress feels very fast from my experience as a user.

    I'd need to hear a pretty compelling argument to believe that it'll suddenly stop, something more compelling than "well, it's not very good yet, therefore it won't be any better", or "Sam Altman is lying to us because incentives".

    Sure, it can slow down somewhat because of the exponentially increasing compute costs, but that's assuming no more algorithmic progress, no more compute progress, and no more increases in the capital that flows into this field (I find that hard to believe).

  9. Why would it get 60-80% as good as human programmers (which is what the current state of things feels like to me, as a programmer, using these tools for hours every day), but stop there?
  10. I think it's a gradient. When I think about a "nightmare to live in", I think of the Soviet Union or North Korea. Those are the places that went all-in on redistribution.

    Most western countries mostly respect individual freedom and property, with taxes being a somewhat limited and controlled exception. I see that as a necessary evil - something we can't fully avoid (at least, I can't figure out how we would), but should try to minimize, to avoid sliding down the spectrum towards more and more evil versions of it.

    I think most western countries are nice to live in because they do a comparatively good job of respecting people's freedom, property, and the right to keep the stuff they earn.

    Advocating for more redistribution is taking steps away from that, in a direction people don't realize they don't want to go.

  11. Nothing about the current system (capitalism) prevents people from sharing freely, that's just charity. I think it's wonderful and admirable when people do that, and I fully support that, as long as it's voluntary.

    I'd be happy to live in a version of society where there's enough abundance and good will that people just give to charity, and that is enough to support everyone, and nobody is being forced to do anything they don't want.

    I only dislike it when people advocate for involuntary redistribution of wealth, because it has a lot of negative side effects people aren't thinking through. Also, because I think that it's evil and results in the sort of society and culture where it would be a nightmare to live in.

  12. I don't think the world where a mob of people can gang up on a person and take their stuff is as idyllic as you think it is. If the person who has figured out how to earn a lot of food doesn't get to "hoard" it, it'll just get hoarded by a person with the biggest stick.

    What's worse (for the society), is that in this world nobody has an incentive to create wealth, because they know it'll just be taken away. When rich people aren't in power, people with political capital and big guns are. I don't think that's better.

    If AGI takes over, that changes things, somewhat. If it creates unlimited abundance, then it shouldn't matter who has the most (if everyone has plenty). Yes, it would create power disparity, but the thing is, there'll always be SOMEBODY at the top of the social hierarchy, with most of the money and power - in the AGI scenario, that is someone who is in charge of AGI's actions.

    Either it's AGI itself (in which case all bets are off, since it's an alien god we cannot control), or the people who have developed AGI, or the politicians who have nationalized it.

    Personally, I'm uncomfortable with anyone having that much power, but if I had to pick the lesser evil - I'd prefer it to be a CEO of an AI company (who, at least, had the competence and skill to create it), instead of the AGI itself (who has no reason to care about us unless we solve alignment), or one of the political world leaders (all of whom seem actively insane and/or evil).

  13. So, a cutting-edge AI model turned out to be much cheaper and easier to produce than we thought. Weird reason to call something a "fad". Here's hoping nobody invents a way to produce much cheaper and faster cars, or this whole Car Fad will be over too.
  14. I keep reading people on the internet mistrusting him with a lot of confidence, but I haven't heard of any tangible evidence that he's lying about anything.

    Can you name a couple of examples of the things he said that we know are lies? Or is it all just people making uninformed assumptions or being snarky?

  15. "Intelligence" is a poorly defined term prone to arguments about semantics and goalpost shifting.

    I think it's more productive to think about AI in terms of "effectiveness" or "capability". If you ask it, "what is the capital of France?", and it replies "Paris" - it doesn't matter whether it is intelligent or not, it is effective/capable at identifying the capital of France.

    Same goes for producing an image, writing SQL code that works, automating some % of intellectual labor, giving medical advice, solving an equation, piloting a drone, building and managing a profitable company. It is capable of various things to various degrees. If these capabilities are enough to make money, create risks, change the world in some significant way - that is the part that matters.

    Whether we call it "intelligence" or "probabilistically generating syllables" is not important.

  16. Don't mind the weird negativity you're getting from some of the comments, this project is awesome and very inspiring! It's amazing to see someone so creative and enthusiastic about what they do. The idea is great, and the execution is excellent as well. The UI is unique and charming, while being easy to use.

    People complain about audio being slow to listen to - I don't know, people do listen to hours of podcasts. People do spend hours on TikTok. With enough users and a voting system, the best content should rise to the top. With the playlist functionality, you'd queue the posts you want to hear and listen to them passively, while cleaning the room or driving to work.

    Recording little songs or super short flashfiction stories... With the right creators to make quality content, I totally see how this could turn into something awesome.

    One bit of feedback - why require usernames to end in a number? I want to use the username I'm using everywhere else.

    Also, uploading an audio file didn't work for me.

  17. Same point made in a more entertaining and less pretentious way:

    https://motherfuckingwebsite.com/

    The idea of the minimal web is appealing from a certain angle, but people add all the non-minimal stuff because it works. If you want to have readers, it's silly to avoid making basic optimizations. If you don't care about having readers - why not just keep a journal?

  18. An extremely interesting and in-depth post about the subject:

    https://www.astralcodexten.com/p/give-up-seventy-percent-of-...

  19. This comment doesn't deserve the downvotes it's getting; the author is right, and I'm having the same experience.

    LLM outputs aren't always perfect, but that doesn't stop them from being extremely helpful and massively increasing my productivity.

    They help me to get things done with the tech I'm familiar with much faster, get things done with tech I'm unfamiliar with that I wouldn't be able to do before, and they are extremely helpful for learning as well.

    Also, I've noticed that using them has made me much more curious. I'm asking so many new questions now; I'd had no idea how many things I was casually curious about, but not curious enough to google.

  20. I just mean that when you click the button to regenerate a response (or edit your own message), ChatGPT shows arrow buttons that let you go back to the previous version. That works for all messages, so you can go back up a few messages and try a different branch of the conversation without losing what you had before.
  21. Good history search (including non "main" conversation branches) and convenient conversation management (bookmarking, folders, maybe something smarter) would be great.

    Also, maybe some convenient way to create message templates? I don't know how I'd implement this, I just know that I often write one long prompt that I reuse multiple times, with multiple minor tweaks/edits, and it'd be amazing to have a convenient tool to manage that.

    Also, good mobile/tablet support, convenient to use and without bugs (as I happen to spend most of my time writing prompts on my ipad, but that's just me).

    If you already have a demo - please share a link, I'd be happy to beta test it and maybe become one of the early customers.

