I don’t think it’s a reasonable expectation for a language model to evaluate arbitrary code.

It's definitely not a reasonable expectation. That doesn't stop the breathless hype from implying it is reasonable.
Where is this breathless hype everyone speaks of? All I see is an endless stream of populist hot takes on the exact opposite!
There was a post on Reddit about how all software was going to be replaced by LLMs; there's a podcast about how ChatGPT will become the universal interface, reconfiguring itself as needed depending on the API it's using and what the user needs; there are people claiming programmers will be replaced by prompt engineers. The hype has been everywhere these last few months.
"populist hot takes" haha, I'm sorry, but seeing this kind of language in a "gpt can/can't code" debate on a "hacker" forum is just funny to me
It's not breathless hype so much as surprise at the fact that it actually does work, way better than it should.
It's weird that reasoning through complex mathematical proofs is easier (for both me and GPT) than mentally evaluating a short snippet of code. I've done a BSCS, I've done those tests that ask you what the output of a poorly written and tricky loop is, that stuff hurts my brain. In comparison, a mathematical proof, once you're ready, feels good.
"Mentally evaluating a short snippet of code" requires remembering the state of that code, then repeatedly overwriting your memory as you step through it. GPT-4 isn't able to do that; it is in a sense purely functional.

I've had success getting it to evaluate FizzBuzz, but to do so I told it to write out the state of the program one timestep at a time.

https://chat.openai.com/share/c109e946-fb6d-494e-8182-fc93d2...

...this is actually 3.5. 4 wouldn't need as much explanation.
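
For anyone curious what "one timestep at a time" means in practice, here's a rough sketch of the idea (not the exact program or prompt from the linked chat): you ask the model to emit the loop state at every iteration rather than just the final output.

    using System;

    class FizzBuzz {
        static void Main() {
            for (int i = 1; i <= 15; i++) {
                string output =
                    (i % 15 == 0) ? "FizzBuzz" :
                    (i % 3 == 0)  ? "Fizz" :
                    (i % 5 == 0)  ? "Buzz" : i.ToString();
                // The kind of per-step state line you'd ask the model to produce:
                Console.WriteLine($"step {i}: i%3={i % 3}, i%5={i % 5} -> {output}");
            }
        }
    }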

This feels like an interesting insight into not just why LLMs might have trouble understanding some code, but also why people might have trouble understanding some code.

I wonder how GPT handles code where multiple objects of the same class call the same methods, each individual object calls those methods from different threads, and method-local variables, class member variables, and static member variables are all in play. [Being fair to our LLM brethren: us meat sacks are sure to hate that hypothetical code as well.]

If you give me an example, I'll try it. From that description I'm not sure of much more than I hate it. :)
Off the top of my head, something like this comes to mind:

    using System;
    using System.Threading;

    class X {
        // Shared across every instance of X.
        private static int _staticCount = 0;
        // Separate copy per instance.
        private int _memberCount = 0;

        public uint Stuff(int input) {
            if (input <= 0) {
                return (uint)(_staticCount + 1);
            }

            // Recurse on a separate thread, mutating both the static
            // and the instance counter with interlocked adds.
            var y = new Thread(() => {
                Interlocked.Add(ref X._staticCount, (int)Stuff(input - 1));
                Interlocked.Add(ref _memberCount, (int)Stuff(input - 2));
            });
            y.Start();
            y.Join();

            // Then recurse once more on the calling thread.
            return (uint)(Stuff(input - 1) + _memberCount);
        }
    }

    class Program {
        public static void Main() {
            var x = new X();
            var y = new X();
            var z = new X();

            uint resultX = x.Stuff(4);
            uint resultY = y.Stuff(3);
            uint resultZ = z.Stuff(2);

            Console.WriteLine(resultX + resultY + resultZ);
        }
    }
EDIT: Probably should have set up another way to wait for the threads to end, so that all three X instances could be running at the same time. But perhaps this is a good enough starting point.
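
Something like this would do it (a sketch only, reusing the X class above, with Tasks standing in for hand-rolled thread bookkeeping):

    using System;
    using System.Threading.Tasks;

    class Program {
        public static void Main() {
            var x = new X();
            var y = new X();
            var z = new X();

            // Start all three calls at once so the races on the shared
            // static counter actually overlap, then wait for all of them.
            var tx = Task.Run(() => x.Stuff(4));
            var ty = Task.Run(() => y.Stuff(3));
            var tz = Task.Run(() => z.Stuff(2));
            Task.WaitAll(tx, ty, tz);

            Console.WriteLine(tx.Result + ty.Result + tz.Result);
        }
    }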

I get 3274511360 when I run it, although I had to upgrade everything to uint because I was getting some overflow ... so there might be some of that in the output.
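
For context, C# integer arithmetic is unchecked by default, so instead of throwing, values silently wrap around; a minimal sketch of the kind of wraparound that could be hiding in that number:

    using System;

    class OverflowDemo {
        static void Main() {
            int big = int.MaxValue;           // 2147483647
            int sum = unchecked(big + big);   // silently wraps to -2, no exception
            Console.WriteLine((uint)sum);     // reinterpreted as 4294967294
        }
    }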

It isn't, yet it's used that way.
Everyone is worried about doomsday scenarios like runaway AGI and Skynet; meanwhile, the real danger of AI turns out to be people naively assuming it works like the computer in Star Trek, that because it can hold a conversation, it has some form of intelligence and awareness.
Please keep saying this over and over. Even people who get that it doesn't work that way don't seem to grasp the danger of others relying on these tools in inappropriate ways.
The problem is LLMs are kind of an outside context problem for our culture. We're still arguing over the correct language to even describe what it does ("hallucinate"? "confabulate"? "lie?") because every word we have presupposes the existence of a stateful mind with intent. It's a philosophical zombie, which is so counterintuitive most people can't even accept it as a sensible concept.

Between the hype and decades of indoctrination through science fiction about how AI is "supposed" to work, I guess it isn't surprising that this is how things are shaking out. People will learn, the way they learned with the internet, but it will take a few years I think.

The computer in Star Trek only seems to work like a sentient intelligence when you prompt the Holodeck a certain way, like asking it to make a worthy opponent for Data.
