waffletower
799 karma
- Regardless of correctness, as a DSP dork I really identified with the question: "What kind of a monster would make a non-power of two ring anyway?" I remember thinking similarly when requesting a power-of-two buffer from a 3rd-party audio hardware device and having it corrected to a nearby non-power-of-two size. Latency-adding ring buffer to the rescue.
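The power-of-two preference behind that question usually comes down to index wrapping: with a power-of-two capacity, wrapping a read/write index is a single bitwise AND rather than a modulo or a branch in the audio callback. Here is a minimal sketch of that trick, assuming a single-producer/single-consumer buffer; the names (ringbuf, rb_push, rb_pop) and the fixed capacity are my own illustration, not anything from the comment or the linked article:

```c
#include <stdint.h>

#define RB_CAP  1024u              /* capacity: must be a power of two */
#define RB_MASK (RB_CAP - 1u)

typedef struct {
    float    data[RB_CAP];
    uint32_t head;                 /* total samples written */
    uint32_t tail;                 /* total samples read    */
} ringbuf;

/* Write one sample; returns 0 if the buffer is full. */
static int rb_push(ringbuf *rb, float x) {
    if (rb->head - rb->tail == RB_CAP) return 0;
    rb->data[rb->head++ & RB_MASK] = x;   /* wrap with a mask, no modulo */
    return 1;
}

/* Read one sample; returns 0 if the buffer is empty. */
static int rb_pop(ringbuf *rb, float *x) {
    if (rb->head == rb->tail) return 0;
    *x = rb->data[rb->tail++ & RB_MASK];
    return 1;
}
```

With an arbitrary, non-power-of-two capacity the same wrap would need a `% size` or a compare-and-reset branch, which is one reason a device that rounds your requested buffer size often ends up fronted by exactly this kind of extra, latency-adding ring buffer.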
- Humans can fail at some of these qualifications, often without guile:
  - being consistent and knowing their limitations
  - people do not universally demonstrate effective understanding and mental modeling.
I don't believe the "consciousness" qualification is at all appropriate, as I would argue that it is a projection of the human machine's experience onto an entirely different machine with a substantially different existential topology -- a different relationship to time and sensorium. I don't think artificial general intelligence is a binary label that applies only when a machine rigidly simulates human agency, memory, and sensing.
- If this quantification of lag is anywhere near accurate (it may be larger and/or more complex to describe), open source models will soon be "simply good enough". Perhaps companies like Apple could be second-round AI growth companies -- marketing optimized private AI devices via already capable MacBooks or rumored appliances. While not obviating cloud AI, they could cheaply provide capable models without a subscription while driving their revenue through increased device sales. If the cost of cloud AI increases to cover its expense, this use case will act as a check on subscription prices.
- I think housing might have some potential with federal subsidies, particularly in the container-scale prefabricated home market.
- If I were a hedge fund shorting AI, I would nod and promote the message of this article.
- Hmmm, maybe use a different OS? I would never dream of using Windows to get any type of work done myself, and there are many others like me. There certainly are choices. If you prefer to stay, MCP services can be configured to use local models, and people are doing so on Windows as well (and definitely on macOS and Linux). From an OS instrumentation perspective, I think macOS is probably the most mature -- Apple has acknowledged MCP and intends a hybrid approach defaulting to its own in-house, on-device models, but by embracing MCP it appears to be allowing local model access.
- Exactly. I was paying for Gemini Pro and moved to a Claude subscription. I am going to switch back to Gemini for the next few months. The cloud centralization, in its current product stage, allows you to be a model butterfly. And these affordable and capable frontier-model subscriptions help me train and modify my local open-weight models.
- I think it is incredibly healthy to be critical, and perhaps even a tinge cynical, about the intentions of companies developing and productizing large language models (AI). However, the argument here completely ignores the evolving ecosystem of open-weight models. Yes, the prominent companies developing frontier models are attempting to build markets and moats where possible, and the cloud capital investments are incredibly centralized. But even in 2025 the choice is there, with your own capital investment (RTX, MacBook, etc.), for completely private and decentralized AI. You can also choose your own cloud -- Cloudflare just acquired Replicate. If enough people continue to participate in the open-weight ecosystem, this centralization need not be totalitarian.
- Taken together, as Andrew Tsang (too) beautifully depicts, the United States healthcare system is arguably the largest bureaucracy on planet Earth. It is larger in employees and collective spending than any effective bureaucracy in India or China.
- If a student is given a task that a machine can do, and there is some intrinsic value in the student performing this task manually and hermetically, this value ought to be explained to the student, and they can decide for themselves how to confront the challenge. I think LLMs pose an excellent challenge to educators -- if they are lazily asking for regurgitation from students, they are likely to receive machine-aided regurgitation in 2025.
- It seems that 10% of college students in the U.S. are younger than 18, or do not have adult status. The other 90% are adults who are trusted with voting and armed-services participation and enjoy most other rights that adults have (with several obvious and notable exceptions -- car rental, legal controlled-substance purchase, etc.). Are you saying that these adults shouldn't be trusted to use AI? In the United States, and much of the world, we have drawn the line at 18. Are you advocating that AI use shouldn't be allowed until a later cutoff in adulthood? It is not at all definitively established what these "essentially hidden" negative side effects you allude to actually are, or whether they exist at all.
- You said I didn't read the article. That is your weak and petty straw man. Very clearly.
- That's an utterly hilarious straw man, a spin worthy of politics, and one that someone else would label a tautological "cheat". Students "cheated" hundreds of years ago. Students "cheated" 25 years ago. They "cheat" now. You can make an argument that AI mechanizes "cheating" to such an extent that the impact is now catastrophic. I argue that the concern for "cheating", regardless of its scale, is far overblown and a fallacy to begin with. Graduation, or measurement of student ability, is a game, a simulation that does not test or foster cognitive development implicitly. Should universities become hermetic fortresses to buttress against these untold losses posed by AI? I think this is a deeply misguided approach. While I was a professor myself for 8 years, and do somewhat value the ideal of The Liberal Arts Education, I think students are ultimately responsible for their own cognitive development. University students are primarily adults, not children and not prisoners. Credential provision, and graduation (in the literal sense) of student populations, are institutional practices to discard and evolve away from.
- You can straw man all you like; I haven't used an LLM in a few days -- definitely not to summarize this article -- and what you claim is the central idea is directly related to my claim. It's very easy to combine them directly: students' intellectual development is going to be impaired by AI because they can't be trusted to use it critically. I disagree.
- The core premise is decidedly naive and simplistic -- AI is used to cheat and students can't be trusted with it. This thesis is carried through the entirety of the article.
- This is such a naive, simplistic, distrusting and ultimately monastic perspective. An assumption here is that university students are uncritical and incapable of learning while utilizing AI as an instrument of mind. I think a much more prescient assessment would be that the presence of AI demands a transformation and evolution of university curricula and assessment -- the author details early attempts at this, but declares them failures and uncritical acquiescence. AI is literally built from staggeringly large subsets of human knowledge -- university cultures that refuse to critically participate and evolve with this development, and react by attempting to deny student access, do not deserve the title "university" -- perhaps "college", or the more fitting "monastery", would suffice. The obsession with "cheating", the fallacy that every individual needs to be assessed hermetically, has denied the reality (for centuries) that we are a collective and, now more than ever, embody a rich mass mind. Successful students will grow and flourish with these developments, and institutions of higher learning ought to as well.
- Seriously, I think it is a petty mistake to characterize Ruby as unserious. I am not drawn to the language myself, and my previous interest in it waned after debugging dependency rot in a cloud-deployed Rails app more than 10 years ago. However, to label it as unserious would be nearly as unserious as labeling Python unserious.
- While I found the summary of computational consciousness useful, the author infected their prose with dreadfully pompous judgements. The final straw was the author's declaration of boredom. Such obnoxious writing is unworthy of, and distracts from, the subject matter. How did such wasteful and intolerant writing get upvoted? The original article surely has much more value than this painful summary.
- I really enjoy the doublespeak of "reality has a liberal bias". I can't think of a more telling and compelling example of the distortion caused by the binary lens of American politics.
- I liked Betamax better, sorry. The tapes were more compact and used less storage space. Can't argue with that. I also liked that you could use Betamax with a Sony PCM-F1 processor to record digital audio before the advent of the DAT format (digital audio tape). Can't argue with that. But when was the last time I even thought about Betamax? Much more front of mind are the vagaries of Blu-ray formats; and I rarely think about them either.