
rwl4
655 karma

  1. These kinds of products are drop-dead gorgeous to me. Any time I see a device with an Amiga 500 form factor or something similar, I feel a compulsive urge to click buy. But after many, many such purchases, I've learned my lesson.

    I buy it, I play with it a little bit, but the reality is that my phone, iPad, or laptop can do every single thing better.

    Maybe not with the same swagger. But ultimately, as I get older, I realize I'm trying to produce with the least friction possible, and these devices usually have highly constrained touch interfaces, shrunken keyboards, or both.

    I've always said that if somebody created a new HP 200LX device with the same chiclet keyboard, I'd buy it in an instant. But now I realize that my "ideal" device just reaches back to my contextual memory of the state-of-the-art devices of the time. A time when we couldn't type on a 6" screen or use a detachable keyboard. So a chiclet keyboard you could thumb-type on at 40 wpm was a revelation. But we have come a long way.

    In the end, alas, these devices really are just a novelty, at least for me.

  2. This was a great read! I'm not a paid subscriber, so I'll post my thoughts here.

    One angle that I think might be missing is that when only men worked outside the home, women were stuck at home all day with housework and childcare, which I would guess was quite isolating. So these gatherings were probably a lifeline.

    When women entered the workforce, they gained the same quasi-social environment men had enjoyed all along. Work friendships might not be as deep as neighborhood ones, but they're "good enough" to take the edge off loneliness. Not only that, but now both partners would come home fatigued from a full day of work, so neither would have a strong drive to set up these gatherings. Before, you had one exhausted partner who could be coaxed into socializing by a partner who genuinely needed it. Now you have mutual exhaustion. Even worse, planning a party starts to feel like another work project rather than something restorative.

    There's a multi-generational aspect to this too. The kids of those couples learned the lesson that home is for family and screens, not for social gatherings. Then computers and smartphones arrived and provided social interaction that required minimal energy: no cleaning the house, no planning food, no getting dressed. Perfect for an already exhausted population that had been socially declining for years.

  3. Somewhat related: anyone recognize the keyboard in the header image? It looks extremely similar to the HP 95LX series, but it isn't a model I recognize.
  4. Just in case anybody is interested in a bit more of a casual format, I had NotebookLM create a podcast from the paper.

    https://notebooklm.google.com/notebook/0bef03c4-3ed5-4b13-90...

  5. Not sure about the quality of the model's output. But I really appreciate this little mini-paper they produced. It gives a nice concise description of their goals, benchmarks, dataset preparation, model sizes, challenges and conclusion. And the whole thing is about a 5-10 minute read.
  6. Nice looking app!

    Very similar to DevUtils (https://devutils.com), but with a much more friendly pricing scheme which is great.

    I'm curious though, why link to the GitHub site if it's not an open source app? You even do GitHub releases of what I guess are just the readme files and screenshots?

  7. Interesting idea!

    You can somewhat recreate the essence of this using a system prompt with any sufficiently sized model. Here's the prompt I tried, for anybody who's interested (there's a minimal usage sketch after the prompt):

      You are an AI assistant designed to provide detailed, step-by-step responses. Your outputs should follow this structure:
    
      1. Begin with a <thinking> section. Everything in this section is invisible to the user.
      2. Inside the thinking section:
         a. Briefly analyze the question and outline your approach.
         b. Present a clear plan of steps to solve the problem.
         c. Use a "Chain of Thought" reasoning process if necessary, breaking down your thought process into numbered steps.
      3. Include a <reflection> section for each idea where you:
         a. Review your reasoning.
         b. Check for potential errors or oversights.
         c. Confirm or adjust your conclusion if necessary.
      4. Be sure to close all reflection sections.
      5. Close the thinking section with </thinking>.
      6. Provide your final answer in an <output> section.
      
      Always use these tags in your responses. Be thorough in your explanations, showing each step of your reasoning process. Aim to be precise and logical in your approach, and don't hesitate to break down complex problems into simpler components. Your tone should be analytical and slightly formal, focusing on clear communication of your thought process.
      
      Remember: Both <thinking> and <reflection> MUST be tags and must be closed at their conclusion
      
      Make sure all <tags> are on separate lines with no other text. Do not include other text on a line containing a tag.
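
    In case anybody wants to wire this up quickly, here's a minimal sketch of how you could use a prompt like this programmatically and show the user only the <output> section (the <thinking> part is meant to stay hidden). It assumes the OpenAI Python SDK, and the model name is just a placeholder; any sufficiently sized chat model behind a similar API should work the same way.

      # Minimal sketch: send the prompt above as the system message, then
      # surface only the <output> section to the user. Assumes the OpenAI
      # Python SDK; the model name below is just a placeholder.
      import re
      from openai import OpenAI

      SYSTEM_PROMPT = "..."  # paste the full prompt from above here

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def ask(question: str) -> str:
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  {"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": question},
              ],
          )
          text = response.choices[0].message.content
          # Prefer the <output> section; fall back to the raw text if the
          # model didn't follow the tag structure.
          match = re.search(r"<output>(.*?)</output>", text, re.DOTALL)
          return match.group(1).strip() if match else text

      print(ask("A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?"))

    You could of course keep the raw response around too if you want to inspect the model's <thinking> and <reflection> sections.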
  8. Look, I get it. Startups are hard. Desperation can lead to poor choices. But spamming isn't the answer and will burn bridges faster than you can build them.

    If you are genuinely sorry and want to (possibly) turn this around, here are some ideas:

    1. Consolidate your work into one solid, well-documented GitHub repo, including a small toy model that people can actually use and play with.

    2. Write a blog post or tutorial about your tech. That'll show your expertise without making people feel like they are being duped.

    3. Engage in relevant communities genuinely. Build relationships, not just a user base.

    4. Use proper channels for promotion... Show HN exists for a reason.

    Remember, reputation is currency in this industry. It's hard to earn and easy to lose. Take the long view, build something good, be transparent, and let organic interest do its thing...

    If your product is actually solid, there are ways to get it out there. It might be slower, but it's at least sustainable. Good luck.

  9. So I'm trying to understand. Is this spam for the Lamucal service? I saw this same code posted on Reddit the other day under a different name. Here are a few repos with the exact same code under different names:

    - https://github.com/DoMusic/Hybrid-Net

    - https://github.com/TuneMusic/NiceMusic

    - https://github.com/JoinMusic/fish

    - https://github.com/Famuse/CombineNet

    - https://github.com/AIAudioLab/AITabs

    - https://github.com/AIMusicLab/MicroMuisc

    I'm pretty sure there are more, but I'll stop there. Especially suspicious considering all the usernames.

    Here's a post from yesterday on Reddit:

    - https://www.reddit.com/r/coolaitools/comments/1ervthn/found_...

    I'm guessing the general process here is:

    - Push novelty (but unusable by most people) code to a new GitHub repo

    - Submit that code to Reddit/Hacker News

    - People see it and are impressed by the novelty of the code, despite not being able to run it (the models themselves are missing, etc.). They upvote and subscribe ($$$) to actually try it.

    - Repeat

    I understand the desire to promote one's new service, and the product seems like it could be interesting, but this is not the way to get the word out. Reputation matters.

    Edit:

    Check out user deeplover's post/comment history: one submission with the MicroMusic repo (see above), and one comment (see below).

    Also, the post by user liwei0517 is almost exactly like BigOrange688's on Reddit. See: https://www.reddit.com/r/MachineLearning/comments/1es0deh/co...

  10. Here's a direct link to him being dragged off the stage:

    https://x.com/dmitrygr/status/1822124650547257637

    It's definitely somewhat aggressive. Way to burn bridges.

  11. Well, thanks for putting this up! It's really a treasure for those of us who used it as our daily driver so many years ago.
  12. Weird that for a couple of minutes, these paths existed:

    * https://github.com/microsoft/MS-DOS/tree/main/v4.0/bin

    * https://github.com/microsoft/MS-DOS/tree/main/v4.0/bin/DISK1

    * https://github.com/microsoft/MS-DOS/tree/main/v4.0/pdf

    But they disappeared as I browsed the repo. I guess they didn't want that part public?

    Edit: I knew I wasn't seeing things! Somebody forked it along with those files: https://github.com/OwnedByWuigi/DOS/tree/main/v4.0

  13. That screen protector in the headline picture...
  14. I've been an extremely happy user of Rocket Money for a couple years now.
  15. This reminds me of megaCar from the same era:

    https://web.archive.org/web/20001019032036/http://www.megaca...

    Browsing that site again now in 2023 is hilarious! Make sure to turn your audio on.

  16. The author of the article appears to have misunderstood one important detail about Code Llama.

    They state:

    > The Code Llama models were trained on 500B tokens, whereas Llama 2 models were trained on 2T tokens. Since the Code Llama model was trained on 4x fewer tokens, maybe a CodeLlama 70B version did not perform well enough due to LLM scaling laws—there was not enough training data.

    But if you read the paper, on page 1, it says:

    > Our approach is based on gradually specializing and increasing the capabilities of Llama 2 models by applying a cascade of training and fine-tuning steps [...]

    In fact, they show a diagram at the top of page 3 that details the process, starting with Llama 2 foundation models.

    Llama 2 foundation models (7B, 13B, 34B) -> code training (500B tokens) -> Python / long-context fine-tuning.

    See the paper here: https://arxiv.org/abs/2308.12950

  17. Whoa! I remember experiencing this and just assuming it must be my imagination! I also remember that same laptop getting those pits. Nowhere near as dramatic as the pictures on the linked page, but still pronounced. Crazy.

    It's never happened to my more recent MBPs.

  18. Somewhat related story time! In the early 2000s, my main business was web hosting. It paid the bills but never made me enough to really invest in it. So it kept running, sitting in a colocation space. In 2005, having the mail server hosted on my web server was becoming a problem, so I decided to put it on a new server.

    I chose a 1.42 GHz PowerPC Mac mini. I installed Linux and was very happy with how well it worked, how tiny it was, and how it took a fraction of the rack space that my web server and other servers took. I thought I might even just use those in the future.

    Fast forward a couple of years, and the load started increasing. I had used XFS for the mail partition; it ran Qmail and used Maildirs, which tended to accumulate thousands of files per mail directory, and the server was starting to choke. I had also avoided rebooting it for years. If I remember correctly, by the end the server had a six-year uptime, because I was so scared that rebooting it might brick it. But I had a major problem: this Qmail+Vpopmail+SpamAssassin+[dozens of custom tweaks] install had accumulated so many custom hacks, tweaks, and patches that I never had confidence I could do a real downtime-free cutover to a new system without a barrage of complaints.

    So I put it off. And I put it off. Fast forward to about 2013, and I decided enough was enough, so instead of doing a fraught cutover, I just ended email service. Problem solved. Best choice I ever made.

    Needless to say, I avoid overly complex, patched configs now.

  19. Hmm. My Mac shows these:

      [...]
      15 sententiousness
      15 sinuatodentated
      15 soundheadedness
      15 tendentiousness
      15 uninitiatedness
      16 antisensuousness
      16 ostentatiousness
      17 dissentaneousness
      17 instantaneousness
      18 unostentatiousness
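
    Those look like "length word" pairs. A rough sketch of how to produce a listing in that shape (assuming the macOS word list at /usr/share/dict/words; the specific filter isn't shown above, so the predicate is just a placeholder):

      # Rough sketch: print "length word" pairs from the macOS word list,
      # sorted by length. matches() is a hypothetical placeholder for
      # whatever criterion the listing above was actually filtered on.
      def matches(word: str) -> bool:
          return len(word) >= 15  # placeholder criterion

      with open("/usr/share/dict/words") as f:
          words = [line.strip() for line in f]

      for word in sorted((w for w in words if matches(w)), key=len):
          print(len(word), word)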
