- Curious: Can you expand a little bit on your usage? $700/month equates to 350,000 minutes. Are you just running a truck-load of different Actions, or are the Actions themselves long-lived (waiting on something to complete)?
- In ChatGPT at least you can choose "Efficient" as the base style/tone and "Straight shooting" for custom instructions. And this seems to eliminate a lot of the fluff. I no longer get those cloyingly sweet outputs that play to my ego in cringey vernacular. Although it still won't go as far as criticizing my thoughts or ideas unless I explicitly ask it to (humans will happily do this without prompting. lol)
- This seems like it could be the source: https://github.com/GPUOpen-LibrariesAndSDKs/Cauldron/blob/ma...
If true, then this usage could violate its MIT License: "The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software."
The file seems to have been copied more or less verbatim, but without the copyright notice.
- The author makes it seem like we have two choices:
1) enable memory, and use ChatGPT like a confessional booth. Flood it with all of your deepest, darkest humiliations going all the way back to childhood ...
2) disable memory
Perhaps my age is showing. But memory or no memory, I would never tell ChatGPT anything compromising about myself. Nor would I tweet such things, write them in an email, or put them into a Slack message. This is just basic digital hygiene.
I've noticed a lot of people treat ChatGPT like a close confidant, which I find pretty interesting, particularly the younger folks. I understand the allure – LLMs are the "friend" that never gets bored of listening, never judges you, and always says the right thing. Because of this, people end up sharing even MORE than they would with their closest human friends.
- > Framing it in gigawatts is very interesting given the controversy
Exactly. When I saw the headline I assumed it would contain some sort of ambitious green energy build-out, or at least a commitment to acquire X% of the energy from renewable sources. That's the only reason I can think of to brag about energy consumption.
- Whoa, that's fascinating. So their botnet runs in multiple regions and will auto-switch if one has problems. Makes sense. Seems a bit strange to use China as the primary, though. Unless of course the attacker is based in China? Of the countries you mentioned, Lithuania seems a much better choice: they have excellent pipes to the EU and North America, and there's no firewall to deal with.
- I worked at Apple and heard a lot of Steve stories. He really did personally approve everything. He would be sitting in a room, and team leads would all line up to give their quick 2-minute updates. So it's the MacBook Air guy's turn. He comes in and places his prototype down in front of Steve. Steve opens the lid. Two seconds later he picks up the laptop and heaves it so hard it skips across the table like a stone on water: "I said fxxking INSTANT ON!!" The poor guy collected his prototype and exited the room. Later the MacBook Air launched... and it fxxking turned on the moment you opened the lid.
- It's important to understand that the firework situation today is VERY different from 5 years ago.
Before the pandemic, big fireworks were only sold to professionals, and they were set off at a pre-determined time and place. If you like big fireworks: no problem, you can simply attend one of these shows. If you don't like fireworks: no problem, just be somewhere else on that particular evening.
Nowadays anyone can buy big stuff. And they are setting them off constantly. I live in an urban area and big BOOMs are going off all the time in my neighborhood, especially at night. I'm not sure how anyone can argue that this is OK or a matter of preference. It's a major disruptor to quality of life, and I wish all fireworks enthusiasts would get to experience this
If you ask me, I would prefer a return to the pre-2020 situation. Who knows. If kids weren't allowed to buy fireworks, maybe the Palisades would still be here and my parents and all their friends would still have a house.
- When you are feeling this way it's good to take stock of your 3 fundamentals... Food, Sleep, Exercise. If any are suffering, then it's almost guaranteed to be the source of your problem. It sounds elementary but I have to remind myself of this constantly. Particularly the sleep part
- HAHAHA. Remember when Sam was absolutely frothing at the mouth to "regulate AI" two years ago?
> https://www.nytimes.com/2023/05/16/technology/openai-altman-...
> https://edition.cnn.com/2023/06/09/tech/korea-altman-chatgpt...
- I don't understand. The prompt is given as 4 lines:
x
= 1
x
--> 0
which does not execute. And if I run them as two lines:
x = 1
x --> 0
then the final output is `true`. `1` is only output during the initial assignment, which is hardly surprising.
If I understand correctly, the reason I am getting `true` is that the second line gets parsed as two operations:
(x--) > 0
first a post-decrement (`--`), then a greater-than comparison. Since x starts at 1, the comparison `1 > 0` is true; the decrement then brings x to 0, so you'll get false if you run the same line again.
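For what it's worth, here's a minimal sketch of that parsing, assuming a JavaScript/TypeScript-style REPL (which would also explain why the assignment echoes `1` and the second line echoes `true`); the same tokenization applies in C, C++, and Java:

```typescript
// Minimal sketch: `x --> 0` tokenizes as `(x--) > 0`, i.e. post-decrement then greater-than.
let x = 1;

console.log(x-- > 0); // true: x-- yields the old value (1), 1 > 0 is true, then x becomes 0
console.log(x-- > 0); // false: x-- now yields 0, 0 > 0 is false, then x becomes -1
console.log(x);       // -1
```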
- from their docs: https://docs.k3s.io/
> We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10-letter word stylized as K8s. So something half as big as Kubernetes would be a 5-letter word stylized as K3s. There is no long form of K3s and no official pronunciation.
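In other words, it's the same numeronym pattern as K8s and i18n (first letter, count of the letters in between, last letter). A toy sketch of that pattern, just to make the naming logic concrete:

```typescript
// Toy sketch of the numeronym pattern: first letter + number of letters in between + last letter.
function numeronym(word: string): string {
  return word[0] + String(word.length - 2) + word[word.length - 1];
}

console.log(numeronym("Kubernetes")); // "K8s" (a 10-letter word)
// A 5-letter name in the same style would come out as "K3s", hence the explanation in the docs.
```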
- > "I think this is a pretty stark wake-up call. The system is portrayed as being credible and informative and authoritative, and it's obviously not."
Except it was never claimed to be any of those things. In fact, they have a bunch of disclaimers telling users the exact opposite.
Still, it will be interesting to see the outcome of this case. In today's age of misinformation, such disclaimers may not be sufficient. And AIs are developing the ability to be "confidently wrong" in ways that will become increasingly convincing and subtle over time.
- > "Leaving aside [all of AI's potential benefits] it is clear that large-language A.I. engines are creating real harms to all of humanity right now [...] While a human being is responsible for five tons of CO2 per year, training a large neural LM [language model] costs 284 tons."
Presuming this figure is in the right ballpark – 284 tons is actually quite a lot.
I did some back-of-the-napkin math (with the help of GPT, of course); a rough sanity check is sketched after the list below. 284 tons is roughly equivalent to...
- a person taking 120 round-trip flights from Los Angeles to London
- 2 or 3 NBA teams traveling to all their away games over the course of a season
- driving 1 million miles in a car
- 42 years of energy usage by a typical U.S. household
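The sanity check below uses assumed, approximate emission factors that are not from the article or from GPT: roughly 2.4 t CO2 for an economy round trip between LAX and London, roughly 0.3 kg CO2 per mile for a fairly efficient car, and roughly 6.8 t CO2 per year for a typical U.S. household's energy use.

```typescript
// Rough sanity check of the equivalences above, using assumed emission factors
// (approximate values, not taken from the article).
const trainingTons = 284;              // tons of CO2 quoted for training one large LM

const roundTripLaxLon = 2.4;           // tons CO2 per economy round trip, LAX <-> London (assumed)
const carPerMile = 0.3 / 1000;         // tons CO2 per mile for a fairly efficient car (assumed)
const householdPerYear = 6.8;          // tons CO2 per year of typical U.S. household energy use (assumed)

console.log(Math.round(trainingTons / roundTripLaxLon));   // ~118 round-trip flights
console.log(Math.round(trainingTons / carPerMile));        // ~947,000 miles driven
console.log(Math.round(trainingTons / householdPerYear));  // ~42 household-years
```

With those assumed factors the numbers land in the same ballpark as the list above, give or take the usual uncertainty in per-flight and per-mile estimates.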
- Related – this video does a nice job of articulating that argument: https://www.youtube.com/watch?v=oHlpmxLTxpw