
devinprater
Karma: 1,165
I'm a blind person interested in accessibility and audio-first design. Emacs with Emacspeak was my first love, but I can't always be in Emacs. I'm interested in seeing Android accessibility continue to improve.

Email: r.d.t.prater@gmail.com


  1. What are you trying to sell me again? :)
  2. AI features, you say? They could start by taking the alt-text generation model and making it browser-wide for blind people who want it, not just for PDFs.
  3. Wow, I'd never heard of that before today. The Sega Genesis was my first console. I still remember the six-button controller. It worked well for Mortal Kombat 3.
  4. Can the tables have column headers, so my screen reader can read the model name as I go across the benchmarks? And the images should have alt-text.
  5. Emacs and Emacspeak make me feel something. A lot of something. This kind of "playground" feeling where I can dive into a manual that's just sitting right there. The entire Emacs is a manual. C-h m and boom, all the keyboard commands for that mode are right, freaking, there. No hidden bullcrap, no patchwork HTML tables to drudge through, nothing. And if something doesn't work with Emacspeak, I can Codex it into working. Maybe. Enough to get what I want done, done.
  6. Wow, just 32B? This could almost run on a good device with 64 GB RAM. Once it gets to Ollama I'll have to see just what I can get out of this.
  7. Holy crap it even felt like the HN front page with my screen reader. I thought I'd clicked the wrong link until I read the LLAMA 12 and such.
  8. I, myself, as a singular blind person, would absolutely love this. But we ain't there yet. On-device AI isn't fine-tuned for this, and neither Apple nor Google has shown signs of working on it in release software, so I'm sure we're a good 3 years away from the first version of this.
  9. Thank goodness for emulation. With OCR, and now AI screenshot descriptions, I can know what menu I'm in, what menu option is selected, what dialog is on the screen, stuff like that. Case in point: Dragon Ball Z Budokai Tenkaichi 3 for the PlayStation 2. On original hardware, I had no idea what I was getting when I'd finish a battle in story mode. Now, with NetherSX2 on my phone, after a battle I can have TalkBack describe the screen, listen to the description of what I won, press B to exit the description, press A to advance the game screen, read the next thing I won, and so on. Of course, the app has to have an accessibility element that TalkBack can grab onto to describe, so ironically RetroArch doesn't work for this, and neither does Lemuroid, but it's a start, and hopefully one day TalkBack will be able to grab the entire screen for a screenshot without needing an element onscreen to latch onto.
  10. Yes, they can. NVDA has a Speech Viewer. VoiceOver (Mac) has the caption panel.

    NVDA Speech Viewer: https://download.nvaccess.org/documentation/userGuide.html#S... Caption Panel: https://support.apple.com/guide/voiceover/use-the-caption-pa...

  11. There are thousands of blind people on the net. Can't you hire one of them to test for you? Please?
  12. Lol, the Copilot app isn't even that useful on iOS for a blind person. On Android, you type something in, hit send, and the app pipes the raw output of the AI, Markdown formatting and citation markup included, to the screen reader. That's at least something. I mean, it's crumbs, yes, but we blind people are very, very used to crumbs.

    On iOS, you type a message and send, and... nothing.

  13. Audio-described YouTube, please? That'd be so amazing! Even if I couldn't play Zelda yet, I could listen to a playthrough with Gemini describing it.
  14. Yep, they're nice. There are even online versions.
  15. I suppose I should write about them. A good few will be about issues with the mobile apps and websites for AI, like Claude not even letting me know a response is available to read, let alone sending it to the screen reader to be read. It's a mess, but if we blind people want it, we have to push through inaccessibility to get it.
  16. Image descriptions. TalkBack on Android has it built in and uses Gemini. VoiceOver still uses some older, less accurate, and far less descriptive ML model, but we can share images to Seeing AI or Be My Eyes and such and get a description.

    Video descriptions, through PiccyBot, have made watching more visual videos, or videos where things happen that don't make sense without visuals, much easier. Of course, it'd be much better if YouTube incorporated AI audio description the same way it does captions, but that may happen in a good 2 years or so. I'm not holding my breath; it's hard to get more than the bare minimum of accessibility out of Google as a whole.

    Looking up information like restaurant menus. Yes it can make things up, but worst-case, the waiter says they don't have that.

  17. Nah, best pun ever!
  18. Mainly realtime processing. I play video games, and I'd love to play something like Legend of Zelda and just have the AI going, then ask it to "read the menu options as I move between them," and have it speak each menu option as the cursor moves to it. Or, when navigating a 3D environment, ask it to describe the surroundings, then ask it how to get to a place or object and have it guide me there. That could be useful in real-world scenarios too.

This user hasn’t submitted anything.