- Both of you are the exceptions that prove the rule?
- Everything is lined up for sub-optimal play.
For a start, the setting is an emotive one. It's not just a numeric game with arbitrary tokens, it's about "the perfect romantic partner." It would take an unusually self-isolating human to not identify who they feel their perfect match should be and bias towards that, subconsciously or consciously. We (nearly) all seek connection.
Then, it's reality TV. Contestants will be chosen for emotional volatility, and relentlessly manipulated to generate drama. No-one is going to watch a full season of a handful of math nerds taking a few minutes each week to progress a worksheet towards a solution, plus whatever they do to pass the time otherwise.
- The moral outrage crowd in the US have no power. The people who can and will act against you will only use morality as an excuse, not a cause. Being some nobody, the government has no interest in you anyway. You can watch porn, they can know it, and nothing changes, because you're still a nobody.
(If you watch porn online, you can be pretty sure they already "know" it, because you're not doing it in the privacy of your own home, you're doing it on a public network with next to no secrecy about who you are or what you're doing).
- The difference is legislation, in both cases. Permissible data exchange between government services is legislatively encoded. Permissible sentences are legislatively encoded.
Since we don't see a whole lot of moderately healthy democracies arbitrarily jailing people for life, one might reasonably assume these sorts of controls work.
- Absolutely this. We have limits in place for usage of a bunch of this sort of stuff, from not at all to up to an hour, and we'd be constantly tested and pushed on these limits. Constantly. "But my friends are..." is the usual start to it.
Government says you can't chat with just anyone in Roblox, and suddenly it's accepted that this is just what it is. Not only that, but limits and rules on how much and when you can watch YouTube and the like are also suddenly more acceptable.
So far what my kids are saying is that this is broadly true across their peer groups. The exceptions are just that, exceptions. The peer pressure to be in on it all is lessened. And in turn, that means less push-back on boundaries set by us, because it's less of a big deal.
(And I face less of a dilemma of how much to allow to balance out the harm of not being part of the zeitgeist vs. the harm of short form, mega-corporation curated content).
- https://github.com/jacobly0/llvm-project
... but now three years out of date, because it's hard to maintain :-)
- The CE-dev community's LLVM back-end for the (e)Z80 'panned out' in that it produced pretty decent Z80 assembly code, but like most hobby-level back-ends the effort to keep up with LLVM's changes overwhelmed the few active contributors, and it's now three years since the last release. It still works, so long as you're OK using the older LLVM (and clang).
This is why these back-ends aren't accepted by the LLVM project: without a significant commitment to supporting them, they're a liability for LLVM.
- I'm sorry, but I'm not sure if you're implying you dislike Apple's approach to what the user is allowed to do, or suggesting we should only talk about general purpose computing devices. If it's the latter, sure, the iPhone's not an innovation in that space, discard it from my list of examples. If it's the former, I'll give you that too, but it was still the first of its kind, by a large margin.
(I remember the huge window in which phone companies desperately put out feature phones with sub-par touch screens, completely missing the value to consumers. The iPod Touch should've been warning enough... and should've been (one of) my signal(s) to buy Apple stock, I guess :-)
- I broadly agree, in the sense of providing the vision, direction, and design choices for the LLM to do a lot of the grunt work of implementation.
The comprehension problem isn't really so much about software, per se, though it can apply there too. LLMs do not think; they compute statistically likely tokens from their training corpus and context window. So if I can't understand the thing any more and I'm just asking the LLM to figure it out, produce a solution, and tell me I did a good job sitting there doomscrolling while it worked, I'm adding zero value to the situation and may as well not even be there.
If I lose the ability to comprehend a project, I lose the ability to contribute to it.
Is it harmful to me if I ask an LLM to explain a function whose workings are a bit opaque to me? Maybe not. It doesn't really feel harmful. But that's the parallel to the ChatGPT social thing: it doesn't really feel harmful in each small step, it's only harmful when you look back and realise you lost something important.
I think comprehension might just be that something important I don't want to lose.
I don't think, by the way, that LLM-style prose is the new Esperanto. Having one AI write some slop that another AI reads and coarsely translates back into something closer to the original prompt like some kind of telephone game feels like a step backwards in collaboration to me.
- The iPhone wasn't designed or marketed to large corporations. 3dfx didn't invent the Voodoo for B2B sales. IBM didn't branch out from international business machines to the personal computer for business sales. The compact disc wasn't invented for corporate storage.
Computing didn't take off until it shrank from the giant, unreliable beasts of machines owned by a small number of big corporations to the home computers of the 70s.
There's a lot more of us than them.
There's a gold rush market for GPUs and DRAM. It won't last forever, but while it does high volume sales at high margins will dominate supply. GPUs are still inflated from the crypto rush, too.
- Gen Alpha is people born roughly 2010-2020, younger than gen Z, raised on social media and smartphones. Gen Beta is proposed for people being born now.
Radicalising forums are already filled with bots, but there's no need to finely tune them to each participant because group behaviours are already well understood and easily manipulated.
- Using ChatGPT to numb social isolation is akin to using alcohol to numb anxiety.
ChatGPT isn't a social connection: LLMs don't connect with you. There is no relationship growth, just an echo chamber with one occupant.
Maybe it's a little healthier for society overall if people become withdrawn to the point of suicide by spiralling deeper into loneliness with an AI chat instead of being radicalised to mass murder by forum bots and propagandists, but those are not the only two options out there.
Join a club. It doesn't really matter what it's for, so long as you like the general gist of it (and, you know, it's not "plot terrorism"). Sit in the corner and do the club thing, and social connections will form whether you want them to or not. Be a choir nerd, be a bonsai nut, do macrame, do crossfit, find a niche thing you like that you can do in a group setting, and loneliness will fade.
Numbing it will just make it hurt worse when the feeling returns, and it'll seem like the only answer is more numbing.
- Not necessarily: transactional, impersonal directions to a machine to complete a task don't automatically imply, in my mind, the sorts of feedback loops necessary to induce AI psychosis.
All CASE tools, however, displace human skills, and all unused skills atrophy. I struggle to read code without syntax highlighting after decades of using it to replace my own ability to parse syntactic elements.
Perhaps the slow-shift risk here is one of poor comprehension. Using LLMs for language comprehension tasks - summarising, producing boilerplate (text or code), and the like - shifts one's mindset, I think, towards avoiding such tasks, eventually eroding the skills needed to do them. It's not something one would notice per interaction, but it might add up to a major change in behaviour.
- Relocation information, primarily.
ELF supports loading a shared library to some arbitrary memory address and fixing up references to symbols in that library accordingly, including dynamically after load time with dlopen(3).
a.out did not support this: the format has no relocation entries, so every address in the binary is fixed at link time. Shared libraries were supported by maintaining a table of statically assigned, non-overlapping address ranges, and resolving external references to those fixed addresses at link time.
Loading is faster and simpler when all you do is copy sections into memory then jump to the start address.
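For a sense of what the relocation machinery buys you, here's a minimal sketch of dlopen(3)-style loading via the `unix` package's System.Posix.DynamicLinker bindings (the libm.so.6 path is a Linux assumption):

    {-# LANGUAGE ForeignFunctionInterface #-}

    import System.Posix.DynamicLinker (dlopen, dlsym, dlclose, RTLDFlags (RTLD_NOW))
    import Foreign.Ptr (FunPtr)

    -- Turn a dynamically resolved symbol into a callable Haskell function.
    foreign import ccall "dynamic"
      mkUnary :: FunPtr (Double -> Double) -> (Double -> Double)

    main :: IO ()
    main = do
      -- The loader maps the library at whatever base address is free and
      -- applies its relocation entries; nothing here was fixed at link
      -- time, which is exactly what a.out couldn't express.
      dl <- dlopen "libm.so.6" [RTLD_NOW]
      c  <- dlsym dl "cos"
      print (mkUnary c 0.0)   -- 1.0
      dlclose dl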
- I think you're still describing a countably infinite set: there's a bijection between the natural numbers and the set of houses.
One way to think about it is that, even though you're defining an index that permits infinite amounts of subdivision, from any given house there's always a "next house up" in the vector: you can move up one space.
In a real-indexed vector, that notion doesn't apply. It's "infinity plus one" all the way down: whatever real value x you pick to start with, there's no delta d small enough that nothing lies between x and x + d, because x + d/2 is always strictly between them.
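Spelled out (a standard density argument; the notation is mine):

    \forall n \in \mathbb{N} :\ \nexists\, m \in \mathbb{N} \text{ with } n < m < n + 1
    \forall x \in \mathbb{R},\ \forall d > 0 :\ x < x + d/2 < x + d

The first line is why the house vector stays countable; the second is why a real-indexed one has no "next house up".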
- > The predicate gets tested every time we do type checking? It is part of the type identity.
When does type checking happen?
I think it happens at compile time, which means the predicate is not used for anything at all at run time.
> edit: I think I am starting to understand. In the implementations that are currently existing, Singleton types may be abstracted. My point is exactly to unabstract them so that the value is part of their identity.
I am not sure what you mean by "to unabstract them so that the value is part of their identity", sorry. Could you please explain it to me?
> And only then we can deal with only types i.e. everything from the same Universe.
If you mean avoiding the hierarchy of universes that many dependently typed languages have, the reason they have those is that treating all types as being in the same universe leads to a paradox ("the type of all types" contains itself and that gets weird - not impossible, just weird).
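You can watch the hierarchy at work in Lean (the outputs are what Lean 4 prints):

    #check Type      -- Type : Type 1
    #check Type 1    -- Type 1 : Type 2
    #check Type 2    -- Type 2 : Type 3
    -- `Type : Type` itself is rejected: "the type of all types contains
    -- itself" is Girard's paradox, and accepting it costs logical
    -- consistency (workable for a programming language, fatal for a
    -- proof assistant).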
- The dependent type question is, I think, subtly different: "a function whose return type is either a number or a string depending on the boolean argument."
The union type (or the more or less equivalent Either ADT) doesn't give you that type checking, because you can take either (sic) branch no matter what your boolean value was. You're free to write a function that takes false and returns an Int, and the type checker will accept that function as correct.
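A minimal GHC Haskell sketch of exactly that (the names are mine):

    -- Claimed behaviour: an Int for True, a String for False.
    f :: Bool -> Either Int String
    f False = Left 42        -- the "wrong" branch for False...
    f True  = Right "oops"   -- ...and for True, yet both type check fine.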
No matter how you twist and turn in GHC Haskell or TypeScript, you can't prevent that just by the type signature alone, so you have to "trust me bro" that the code does what the comment says.
For this example that's trivial and laughable, but being able to trust that code does what the compiler-enforced types say and nothing more is perhaps a step towards addressing supply chain attacks.
- There's only one possible value of type Nine; there's only one possible value of type Unit. They're isomorphic: there's a pair of functions to convert from Nine to Unit and from Unit to Nine whose compositions are identities. Both functions are just constants that discard their argument un-inspected. "nineToUnit _ = unit" and "unitToNine _ = {9}".
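Concretely, in Haskell, modelling Nine as an opaque one-value type (since the original syntax is invented):

    data Nine = MkNine   -- one constructor, no fields: exactly one value

    nineToUnit :: Nine -> ()
    nineToUnit _ = ()

    unitToNine :: () -> Nine
    unitToNine _ = MkNine

    -- Both compositions are identities: each result type has only one
    -- value the functions could possibly return.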
You've made up your language and syntax for "type Nine int = {9}" so the rules of how it works are up to you. You're sort of talking about it like it's a refinement type, which is a type plus a predicate over values of the type. Refinement types aren't quite dependent types: they're sort of like a dependent pair where the second term is of kind Proposition, not Type; your type in Liquid Haskell would look something like 'type Nine = Int<\n -> n == 9>'.
Your type Nine carries no information, so the most reasonable runtime representation is no representation at all. Any use of the Nine boils down to pattern matching on it, and a pattern match on a Nine only has one possible branch, so you can ignore the Nine term altogether.
- You could probably say that. AFAIK there isn't a single valid definition of a dependently typed language any more than there is a single valid definition of a functional language.
I usually go with "you can make a dependent pair", which is two terms where the type of the second depends on the value of the first; I think you could do that in Zig with a bit of handwaving around whether the comptime expression really is the first value or not.
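Here's that dependent-pair shape sketched in GHC Haskell, where the value the type depends on has to be mirrored by a singleton (a sketch of the encoding, nothing Zig-specific):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- Singleton naturals: the runtime value mirrors the type-level index.
    data SNat (n :: Nat) where
      SZ :: SNat 'Z
      SS :: SNat n -> SNat ('S n)

    -- Length-indexed vectors.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- The dependent pair: some length n together with a Vec of exactly
    -- that length; the type of the second field depends on the first.
    data SomeVec a where
      MkSomeVec :: SNat n -> Vec n a -> SomeVec a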
What Zig's doing is also, as best I understand it all, the only real path to anything like dependent types in imperative languages: make a separate sub-language for constant expressions that has its own evaluation rules that run at compile time. See also C++ constexpr, or the constants permitted in a Rust static array size parameter (practically a dependent type itself!)
A functional language's basic building block is an expression and terms are built out of composed expressions; normalisation means reducing a term as far as possible, and equality is (handwaving!) when two terms reduce to exactly the same thing. So you can do your dependent terms in the usual source language with the usual evaluation rules, and those terms are freely usable at runtime or compile time, it's all the same code.
An imperative language's basic building block is a memory peek or poke, with programs composed of sequences of peeks and pokes scheduled by control flow operators. It's much less clear what "normalisation" looks like here, so imperative languages need that separate constant-value embedded language.
- I had a collector's edition 3-man transport ship, but IIRC the novelty of standing on the ship while in transit wore off before the beta ended. Cool, but too shallow on its own.
I can't figure out if the open world game was fun enough just on its own that an open space game would've been chef's kiss, or if it did need some kind of story telling too. It was too long ago for me to remember well enough.
- The skill tree system was so nice compared to the rigid class systems of other MMORPGs, too.
The fact that player towns just emerged was really cool.
It was such a shame the space expansion was so ... flat. Neither space nor ground had a storyline to follow, but unlike the ground game, space wasn't an open world and had no real element of choice in skill paths.