- Issue 1: Establishing lots of reasons why people should encrypt
Issue 2: Making it easy to encrypt
Issue 3: Popularizing encryption or getting more people to do it
- They made some open source software for me ;^)
- A bit tangential but I was going to make a post of my own on naming with the problem of "filenames"
So many times I download something and the filename has nothing to do with the file, or it's too much of an abbreviation. Then when I go to look for the file it's hard to find, and if I come across one of these files later I have no idea what it actually contains.
- I do this with plain text but hear me out:
multiple files
multiple directories (folders)
(scripts)
- No:
I actually find it kind of surprising that this post and the top comments saying "yes" even exist, because I think the answer should be so firmly "no". But I'll explain what I like to post elsewhere using AI (edit: and some reasons why I think LLM output is useful):
1. A unique, human-made prompt
2. AI output, clearly labeled as "AI says:". This saves you the tokens and time of copying the prompt over to get the output yourself, and it gives you more information to argue for or against in the conversation (it adds "value" to consider).
3. Usually I do some manual skimming and trimming of the AI output to make sure it's saying something I'd like to share, just like I don't purely "vibe code" but skim the output to make sure it's not doing something extremely bad. The "AI says:" disclaimer makes clear that I may have missed something, but usually there's useful information in the output that is better, or less time-consuming to get, than lots of manual research. It's like citing Wikipedia or a web search and encouraging you to cross-check the info if it sounds questionable, but the info is good enough most of the time that it seems valuable to share.
Other points:
A. The AI-generated answers are just so good... Refusing them feels akin to people here not using AI to program (while I see a lot of posts saying the opposite, that they have had a lot of positive experiences using AI to program). It's really the same idea. I think the key is in "unique prompts"; that's the human element in the discussion. Essentially I am sharing "tweets" (microblogs) and then AI-generated essays about the topic (so maybe I have a different perspective on why this is totally acceptable, since you can always just scroll past AI output if it's labeled as such). Maybe it makes more sense in context to me? Even for this post, you could have asked an AI "what are the pros and cons of allowing people to use LLM output to make comments" (a unique human prompt that adds to the conversation) and then pasted the AI output for people to consider, and I'd anticipate that would generate a pretty good essay to read.
B. This is kind of like in schools: AI is probably going to force them to adapt somehow, because you could just add to a prompt "respond in such a way as to be less detectable to a human" or something like that. At some point it's impossible to tell whether someone is "cheating" in school or posting LLM output in the comments here. But you don't need to despair, because what's ultimately important in forum comments is that the information is useful, and if LLM output is useful it will be upvoted. (In other concerning news, I'm pretty sure people are working on generating forum posts and comments without humans being involved at all!)
So I guess for me the conversation is more about how to handle LLM output, and maybe about people learning how to comment or post with AI assistance (much like people are learning to code with AI assistance), rather than totally banning it (which to me seems very counter-productive).
edit: (100% human post btw!)
- This reminds me of something slightly different but of "not reinventing the wheel"
It feels like there should be emerging, widely accepted "optimized solutions" to certain problems, but instead it seems like people just keep re-doing things that I thought we would have already "solved" and moved past
For example, if you simply want to consume the cheapest caffeine source, I thought someone figured out it was powdered caffeine... versus paying maybe 100x more for a coffee from a chain store. Now, granted, the experience, and maybe the antioxidants or chemical makeup, may not be the same in caffeine powder versus coffee, but the point is I feel like a lot of problems aren't "solved for optimization", which would free us up to make progress on some other unoptimized problem in society
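To make the cost gap concrete, here's a back-of-the-envelope comparison in Python. All the prices and caffeine contents below are illustrative assumptions, not researched figures:

```python
# Rough cost-per-milligram-of-caffeine comparison.
# Every price and caffeine content here is an assumption for illustration.

def cost_per_mg(price_usd, caffeine_mg):
    """Dollars paid per milligram of caffeine."""
    return price_usd / caffeine_mg

# Assumption: 100 g of caffeine powder (100,000 mg) for about $20.
powder = cost_per_mg(20.0, 100_000)

# Assumption: a ~$5 chain-store coffee with ~150 mg of caffeine.
coffee = cost_per_mg(5.0, 150)

print(f"powder: ${powder:.5f}/mg")
print(f"coffee: ${coffee:.5f}/mg")
print(f"coffee costs ~{coffee / powder:.0f}x more per mg of caffeine")
```

Under these assumed numbers coffee comes out to roughly 167x the cost per milligram, so the "100x" figure above is at least in the right ballpark.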
I guess this "reinvention of the wheel" feels like a "vanity activity" to me?
- I think the top comment is getting at some real issues, that this Substack post has a neoliberal (?) focus on material prosperity, but I'd try to frame the discussion in a way that makes the issue more obvious and then ask some questions.
Assume you are the richest person in the world.
What if you had to live in solitary confinement? (So, your wealth doesn't give you good relationships)
What if you were chronically sick? (Your wealth does not give you health)
What if you were not able to spend your money freely due to living under a dictator? (Your wealth does not give you freedom)
You could probably continue this thought experiment and maybe zero in on some specific problems.
What if you could be the wealthiest person but you literally had to work every waking hour? So, having wealth (in this thought experiment) does not buy you free time.
What if you could buy some of the best goods, but they cost more than they did for past generations, forcing you to work more for "better" but costlier items? So having more money yourself doesn't say anything about how the market is developing around you.
Naturally, a counter-argument to some of the above is that money may allow you to buy things to solve some of these problems, but it doesn't always work out that way.
(I liked the article mostly in that it expressed an obvious idea, that America has more "success" and thus "should" be happier, while the author acknowledges there is some legitimate unhappiness that exists; then it was kind of a brainteaser to think about whether people are rationally or irrationally unhappy in the USA)
- I've seen a lot of these posts, and my comment a few times has been that coding was difficult before, so when a challenge met your skill, it put you in the psychological state of "flow". When there is too much challenge and not enough skill, that creates stress. When there is too much skill and not enough challenge (which AI is now creating, by increasing your "skill"), you get boredom.
So you're "bored" now, and you need to increase the challenge to match the new "skill" AI has given you. So if before maybe you worked on a singular app that took a long time, now you might work on more apps (plural) that you complete quicker with AI.
Maybe an analogy exists with walking versus bicycle riding, although it's not perfect. You walked a mile somewhere and back and that felt like a good walk, but now with a bicycle 2 miles doesn't feel like much. You need to bike the equivalent of that walk, which might be like 5 miles each way, to feel like you got a real leisurely exercise in. Riding a bike is also a different skill than walking, so you need to learn how to ride the bicycle.
It's totally valid to feel unhappy about the change, but I think if you find the right challenges you may go back to feeling the joy you had before.
- Harvard: "Exercise is an all-natural treatment to fight depression"
https://www.health.harvard.edu/mind-and-mood/exercise-is-an-...
"Seasonal Affective Disorder, or SAD ... has been linked to vitamin D, otherwise known as the sunshine vitamin, because the skin absorbs it through exposure to sunlight."
https://www.va.gov/washington-dc-health-care/stories/combati...
"Consider adding some of these steps into your daily routine to improve your mood:"
"Spend time outside to get ample vitamin D ... Eat foods rich in vitamin D (salmon, eggs, tuna, etc.) Take vitamin D supplements"
- as others said, interesting idea, buggy and not totally functional
- have thought about trying this but was wondering how hard it is to unbrick your phone if you screw this up
- I'm curious to what extent things in this article have been fixed:
"Think twice before abandoning X11. Wayland breaks everything!"
https://gist.github.com/probonopd/9feb7c20257af5dd915e3a9f2d...
In my experience Wayland has always had problems, so depending on how XWayland works, I'd probably have to drop GNOME if there's no functional X11 support, and I imagine a lot of others would need to do so too (until X11 support is reinstated)
What are some better GNOME alternatives that support X11?
- top comment seems to be on point, it's time for more of a focus on linux mobile (or mobile linux)... this has been known to be needed for years and some progress has been made on it and more can be made with more people getting involved (postmarketos, mobian, ubuntu touch, etc.)
- There are a lot of things to be said on these topics. It probably is worth trying to keep Android "open" here, but there are also a lot of alternative routes to consider, and in the long run I think maybe Android is a lost cause (?) to be abandoned
The big alternative is mobile Linux (or Linux mobile), which is akin to desktop Linux in the 2000s in that it lags behind the competing operating systems. An influx of interest in these operating systems and related hardware might make this discussion moot (software like postmarketOS, Mobian, Ubuntu Touch, and so on; hardware like the PinePhone, a Raspberry Pi used as a phone (?), Librem phones, and so on).
Some progress has been made getting Android phones to run Linux with projects like postmarketOS and Mobian. Again, more people focusing on building these projects, especially with the help of LLMs, might make this discussion less necessary.
F-Droid could also pivot a bit to promoting more linux mobile initiatives.
Apple should be called out as much as Google here for already being closed off.
Both platforms (iOS and Android) could probably be appealed to through the incentive that "developer openness is good for business"; it probably helps both companies make more money by making "sideloading" easy. If they both essentially become closed, that opens up a giant incentive for Linux mobile to take over. (Maybe that is something we should root for?)
On the hardware side, we need some ios/android alternative phones. I've seen some people post that you can attach cell dongles to raspberry pis and use those as phones (?). Maybe more diy cell phone projects would be nice to see.
I guess the FSF is trying to create a Librephone; initiatives like this are overdue: https://liliputing.com/free-software-foundation-announces-a-...
Not sure what else to add; the writing has been on the wall that Google and Apple are trying to become closed systems, so generally Linux mobile (and/or *BSD mobile, if that's to be a thing in the future) needs more attention.
This is probably a good moment to consider the alternatives and the seemingly predictable trajectory of where things are going.
- I think some of the detractors or skeptics of AI developments like this are highly underestimating the value of innovations like Sora, maybe because they are focusing on less productive uses of video generation (so they call a lot of it "slop")
I would think instead about educational or instructional videos: now instead of hunting for them online or producing them yourself somewhat "manually", you could just use AI to create them. That seems massively beneficial and useful.
Dismissing these kinds of developments sounds akin to dismissing the development of writing because "people will only use writing to make erotic fiction or jokes" (!). No one would say that. Or dismissing cameras because people will "only take indecent or goofy pictures with them" (!).
Maybe people are not making as much "useful" educational or instructional material, that's fair to say.
- I've seen multiple posts like this
I still stand by my previous response, which is about flow:
flow happens when your skills meet a sufficient challenge
AI has disrupted this by basically increasing your skills
when you have too many skills but not enough of a challenge, you feel boredom
if you have too much of a challenge and not enough skills, you feel anxiety
so you'd need a bigger challenge to feel like you're in flow once AI has increased your skill
- thanks, that's helpful to hear
I know this is a new "space" so I've just been going off what I can find on here and other places and...
it all seems a little confusing to me besides what I otherwise tried to describe (and which apparently resonates with you, which is good to see)
- > The main takeaway, again, is to keep things simple.
if true, this seems like a bloated approach, but tbh I wouldn't claim to totally know how to use Claude like the author here...
I find you can get a lot of mileage out of "regular" prompts, I'd call them?
Just asking for what you need one prompt at a time?
I still can't visualize how any of the complexity discussed in the article, layered on top of carefully crafted prompts one at a time, adds anything.
I also still can't really visualize how Claude works compared to simple prompts one at a time.
Like, wouldn't it be more efficient to generate a prompt and then check it by looping through the appendix sections ("Main Claude Code System Prompt" and "All Claude Code Tools")? Or is that basically what the LLM does somewhat mysteriously (it just works)? So "give me the while loop equivalent in [new language I'm learning]" is the entirety of the prompt... and then, if you need to, you loop through the appendix sections. Otherwise isn't that a massive overuse of tokens, and mightn't the requests even be ignored because they're too complex?
The control flow eludes me a bit here. I get the impression that the LLM does not use the appendix sections correctly when they're added to prompts (like, couldn't it just ignore them at times?). It would seem like you'd get more accurate responses by separating that from whatever you're prompting and then checking the prompt by looping over the appendix sections.
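To show what I mean, here's a rough Python sketch of that "generate, then check against the appendix sections" control flow. Every function name here is hypothetical; nothing below is a real Claude Code or LLM API, it just illustrates the loop I'm imagining:

```python
# Hypothetical sketch: send a small prompt first, then loop over
# reference sections to check the answer, instead of packing every
# reference into one giant prompt. All functions are placeholders.

APPENDIX_SECTIONS = [
    "Main Claude Code System Prompt",  # section names from the article
    "All Claude Code Tools",
]

def ask_llm(prompt):
    # Placeholder: in reality this would call some LLM API.
    return f"<answer to: {prompt}>"

def conflicts_with(section, answer):
    # Placeholder check: does the answer conflict with this section?
    # Here we assume it never does.
    return False

def answer_with_checks(prompt):
    # Step 1: keep the initial prompt small.
    answer = ask_llm(prompt)
    # Step 2: verify against each reference section separately.
    for section in APPENDIX_SECTIONS:
        if conflicts_with(section, answer):
            # Re-ask with only the one relevant section as context,
            # so each individual prompt stays small.
            answer = ask_llm(f"{prompt}\n\nConsider: {section}")
    return answer

print(answer_with_checks("give me the while loop equivalent in [new language I'm learning]"))
```

The point of the sketch is that the reference material is consulted in a loop after generation, rather than inflating every prompt with it up front.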
Does that make any sense?
I'm visualizing coding an entire program as prompting discrete pieces of it. I have not needed elaborate .md files to do that; you just ask "how do I do a while loop equivalent in [new language I'm learning]", for example. It's possible my prompts are much simpler for my uses, but I still haven't seen any write-ups on how people are constructing elaborate programs some other way.
Like how are people stringing prompts together to create whole programs? (I guess is one question I have that comes to mind)
I guess maybe I need to find a prompt-by-prompt breakdown of some people building things to get a clearer picture of how LLMs are being used
- is there a guide for corporate cybersecurity?
I can see faulting them for these lapses in security, but on the other hand I don't have a guide in mind to point them to that they should have used instead (obviously whatever guide they had was insufficient)
https://wiki.postmarketos.org/wiki/Category:Kobo