Ok, I hear this all the time. Are pointers really that hard for so many people to understand? I'm not trying to brag, but it took me maybe 15 minutes to grok them the first time I learned about them. I'm sure it took me longer to become proficient, but I don't get the legendary difficulty aura that seems to surround their existence.
Also yes nice project.
Job app complete, project archived and abandoned in 3... 2... 1... :). I hope not.
Then when you start using pointers, it makes sense. If a variable is a pointer, that means it's a memory location. *variable is the way to get at that data. Arrays are then just a wrapper around pointer arithmetic.
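A minimal sketch of that mental model (the variable names are just for illustration):

#include <stdio.h>

int main(void) {
    int x = 42;             // x lives somewhere in memory
    int *p = &x;            // p holds that memory location (an address)
    int y = *p;             // *p fetches the data stored at that address, so y == 42

    int a[3] = {1, 2, 3};
    int second = *(a + 1);  // a[1] is just *(a + 1): array indexing is pointer arithmetic
    printf("%d %d\n", y, second);
    return 0;
}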
Whereas with CS, you learn about variables first, which are an abstraction on top of memory, so pointers don't make sense in that framing.
This is why EE/ECE grads are much better developers than CS grads: once you understand the fundamentals, everything built on top of them makes sense.
This is largely not the case in my experience. They probably understand the lower level details of manipulating memory better, but there's a lot more to developing software than understanding that memory is a specific place.
Yep, and all of that derives from how memory is organized. Classes are just fancier structs. Object creation is memory initialization. Control flow and code reuse come down to recognizing memory access patterns. And so on.
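A rough sketch of that idea in plain C (the names are invented for illustration): the "class" is a struct layout, "object creation" is initializing a chunk of memory, and a "method" is just a function that takes the object's address.

#include <stdlib.h>

struct counter {                         // the "class" is just a struct layout
    int value;
};

struct counter *counter_new(void) {      // "object creation" = allocate + initialize memory
    struct counter *c = malloc(sizeof *c);
    if (c) c->value = 0;
    return c;
}

void counter_increment(struct counter *c) {  // a "method" is a function
    c->value++;                              // operating on the object's address
}

// usage: struct counter *c = counter_new(); counter_increment(c); free(c);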
Can't agree with this enough. The moment I finally understood what pointers are was when I landed an embedded job and, during a debugging session, looked at a memory browser that showed my variable at an exact address in memory. After that, everything about pointer arithmetic and even function pointers became clear as day. Something at least 3 teachers weren't able to explain clearly enough.
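For anyone who hasn't had that debugger moment yet, a function pointer really is just another variable holding an address; a tiny illustrative sketch:

#include <stdio.h>

int add(int a, int b) { return a + b; }

int main(void) {
    int (*op)(int, int) = add;   // op holds the address of add's code
    printf("%d\n", op(3, 4));    // calling through the pointer prints 7
    return 0;
}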
Hah, like fuck they are. The worst code I regularly have to review is written by EE grads. They have less of an understanding of pointers than the people in the office with a CS background.
I still feel like this argument could be transferred to nearly any concept in CS, though. Abstract enough anywhere and you will always start exceeding the brain's working memory.
We are just pretending, there is nothing to understand?
They aren't even numbers. They're voltage-high and voltage-low signals.
Numbers don't even exist! You'll never find a 2 in nature. You'll find two things, but you'll never find the 2 itself.
And all 2s are the same 2. But every voltage signal representing a 2 is a completely different voltage signal. Sometimes they aren't even voltage signals! Sometimes they're magnetic flux signals! Sometimes they're electrical field signals! Sometimes they're photons modulated down a wire made of glass!
But the 2 they represent? Not even that is 2. It's 10!
Like we pretend it is high for convenience while we really mean higher. For all practical purposes our imaginary world works! hurray!
Apparently they are; I believe it's the indirection that gets people.
Most learners aren't really taught the basics properly - they learn that a variable "contains" a value, when instead they should learn that all values have a type, and some variables hold values.
> I'm not trying to brag it took me I think like 15 minutes to grok them from learning about them the first time.
I can't remember not knowing pointers, so I can't really tell you how long it took for it to click, but I do know that I had done a non-trivial amount of assembly before I used C, so maybe that helped.
It seems a lot of people assume that pointers don't actually consume any memory and then get confused trying to understand it that way.
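They do take up space like any other variable. A quick way to see it (the exact sizes depend on the platform; 8-byte pointers are typical on 64-bit targets):

#include <stdio.h>

int main(void) {
    int x = 5;
    int *p = &x;
    // p itself occupies memory, separate from the int it points at
    printf("sizeof x = %zu, sizeof p = %zu\n", sizeof x, sizeof p);
    return 0;
}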
I came at C after doing 6502 and 8086 assembler. Pointers just made sense because working with indirect addressing and understanding how things were stored in memory already made sense.
Now dependency injection, that's some magical bullshit right there.
That's all there is to it.
You can do DI in your own startup code and have some logic in there that substitutes mocks when running under test. Or you could change the logging when debug is enabled. Hardly rocket science. If you can write code, you can write the startup code.
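A rough sketch of what that can look like in C (types and names invented for illustration): the "injection" is just your startup code passing in a struct of function pointers, and the test startup passes a mock instead.

#include <stdio.h>

struct logger {                         // the "interface"
    void (*log)(const char *msg);
};

static void real_log(const char *msg) { fprintf(stderr, "%s\n", msg); }
static void mock_log(const char *msg) { (void)msg; /* could record calls for asserts */ }

void run_app(struct logger *log) {      // the code under test only sees the interface
    log->log("starting up");
}

int main(void) {
    struct logger prod = { real_log };  // production startup wires in the real logger
    struct logger test = { mock_log };  // test startup substitutes the mock
    run_app(&prod);
    run_app(&test);
    return 0;
}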
If your team likes patterns, don't mention dependency injection unless you're confident it won't get replaced with the framework of the day.
See https://www.jamesshore.com/v2/blog/2023/the-problem-with-dep...
Frameworks turn your DI code into highly complicated configuration. The end result is a giant lump of code whose only achievement is hiding the new operator, and possibly also preserving someone's job security.
> Now dependency injection, that's some magical bullshit right there.
I see you there! Joking aside, for me, I also struggled a lot with DI when I first saw it in Java. The ridiculous frameworks that hid all of the details drove me crazy. Even Google Guice was supposed to be clearer, but it was never as clear as... Eventually, I settled on hand-writing the wiring in one giant 1,000+ line function that builds the entire object graph on start-up. Then I could really understand DI because you could actually read (and debug) the code doing the "wiring" (building the object graph).

I still don't understand this decision. I think it should've been like int^ p = &i; ... or ... int i = *p;
Everything clicked, ironically, when I went even deeper and studied assembly language. Then following pointers to data vs. just reading pointers became very clear and explicit.
Variable declaration `T v;` means "declare `v` such that the expression `v` has type `T`". Variable declaration `T *p;` means "declare `p` such that the expression `*p` has type `T`". Etc.
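A few examples of reading declarations that way:

int *p;         // the expression *p has type int, so p is "pointer to int"
int a[10];      // a[i] has type int, so a is "array of int"
int (*f)(int);  // (*f)(42) has type int, so f is "pointer to function returning int"
int *g(int);    // *g(42) has type int, so g is "function returning pointer to int"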
This is the most confusing aspect of pointers. I feel it could have been easily avoided with a different character, like ~ or ^ or something else.
float * (*foo(int *)) (int);
foo is something that can be called with an 'int *', which results in a pointer to something that can be called with an 'int', which results in something that can be dereferenced, which is a float.

The problem arises when you start to mix memory management with more complex structures.
It’s extremely easy to make mistakes, and you must be very careful about ownership and clean up. Those things are not strictly related to pointers, but in C, it’s inevitable to use pointers to handle them. That's why people say pointers are hard.
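A minimal sketch of the kind of create/destroy pairing this is getting at (names invented for illustration): the pointer mechanics are the easy part; agreeing on who owns the memory and who frees it is where it gets hard.

#include <stdlib.h>

struct buffer {
    char *data;    // owned by the struct: whoever destroys the buffer frees it
    size_t len;
};

struct buffer *buffer_create(size_t len) {
    struct buffer *b = malloc(sizeof *b);
    if (!b) return NULL;
    b->data = calloc(len, 1);
    if (!b->data) { free(b); return NULL; }   // clean up partial initialization
    b->len = len;
    return b;
}

void buffer_destroy(struct buffer *b) {       // cleanup mirrors creation
    if (!b) return;
    free(b->data);
    free(b);
}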
When I first started learning to program, it was in C, with one of those "Sam's Learn Yerself a C++ in Eleventy-Three Days" books. I was, like, 15 or something. This was long enough ago and my family was just barely poor enough that we didn't even have the Internet yet.
The book explained memory. That was not hard to understand. But we had been using stack-allocated variables for several chapters by that point. I didn't get why you would ever want anything as a pointer. If I wanted to write a program to add 3 and 5, why wouldn't I just say "int x = 3;"? Why bother with this stupid dereferencing syntax? Given that the author chose to explain pointers by first explaining the address-of operator on stack-allocated variables, it felt particularly perverse. The regular ol' variables were right there! Why not just use them?
I didn't have a concept yet of what one could even do with programming. Hell, just a few years prior to that point, I was pretty sure all of the other programs on my computer were written by massive teams of wizards manually typing out 1s and 0s.
I still managed to bungle on and write code. But my programs in C never really worked well. Though, they still worked better than my classmates' versions! Then, in my 2nd year of college, I transferred universities and the new place taught in Java.
Java was disgusting. It was so verbose. Why did we need all these weird, multi-syllabic, period-infested function calls to do anything? Why was it so slow? Why couldn't I write ASCII-art graphics with it? Why couldn't I run my programs on my friend's computer?
It wasn't until I had taken computer architecture that I gained a much better understanding of what any of all these computer things were meant to do. I ended up implementing a basic scripting language in Java. And then, suddenly, I understood pointers.
Yes, anyone who has taken an algorithms and data structures class in C knows that some people just don't get it.
Also, the way people teach it tends to be bad; before teaching pointers you need to teach the stack and the heap at a conceptual level.
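A quick illustration of that stack/heap distinction (the printed addresses will of course differ per run):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_stack = 1;                        // lives in this function's stack frame
    int *on_heap = malloc(sizeof *on_heap);  // lives on the heap until free()
    *on_heap = 2;
    printf("stack: %p  heap: %p\n", (void *)&on_stack, (void *)on_heap);
    free(on_heap);
    return 0;
}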
The * vs. & always gets me, not to mention when I have to deal with pointer math.
int * p;
int *p;
int* p;
Now remember that the type is a memory address. I'm sure that's semantically wrong for some reason somebody will explain, but it helps to think about it that way. So you can do:

int my_number = 6;
int* p = &my_number;
Both sides of the "=" are the same type (int* is an address, and &my_number is also an address, namely my_number's).

Now p is a pointer (or an int*, or an address), and *p is... an int! So this is totally valid:
printf("%d\n", *p);
and for anything you allocate on the heap you need malloc, so you will see a lot of:

my_struct* s = malloc(sizeof(my_struct));
which makes sense because malloc returns an address (the beginning address of the content of s; yet again, somebody will tell me I'm wrong to call it "the content of s", so sorry for that).

my_struct*  // is the type of s, it is an address
my_struct   // is the type of *s (some custom type of size sizeof(my_struct))

I don't like that syntax, because it confuses people. It might be sensible to think of the type as (int *), but C just doesn't work this way. You might never declare more than a single variable in a statement, but it still gives people the wrong intuition.
I very much prefer that syntax, because the '*' is part of the type, not part of the variable name.
> You might never declare more than a single variable in a statement
int a, *b, c, *d;
Yes, you can do that, and in fact if you want to declare multiple pointers on the same line, you are required to put a '*' in front of every pointer variable name.

Personally, I've always considered this to be a defect of the language. Would it really be so awful to have to write instead:
// Both are of type 'int'. Pretty straightforward.
int a, c;
// In my fantasy world, both of these would be of type 'pointer to int',
// but IRL b is a pointer and d is an int. fun!
int* b, d;
But of course that's not the language we have.

I'd be very curious to know the motivation for requiring a '*' in front of each pointer variable name, as opposed to once with the type. So far the only ones I've thought of (not mutually exclusive) are:
a) we REALLY love terseness, and
b) ooh look at this clever hack I came up with for variable declaration syntax.
I think a lot of noobs learning C struggle with pointers especially because there are no good error messages besides "segmentation fault" :D
Yes. Especially pointer to pointer to ...
The big problem is that arrays are conflated with pointers because C doesn't do slices. If C had slices, people would naturally avoid pointers except in the cases where they were genuinely necessary. That would make teaching pointers vastly easier.
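A rough sketch of what a slice-like type could look like if you roll it yourself in C (illustrative only; this is not a standard facility):

#include <stddef.h>

struct int_slice {       // pointer + length travel together
    int *ptr;
    size_t len;
};

int sum(struct int_slice s) {
    int total = 0;
    for (size_t i = 0; i < s.len; i++)
        total += s.ptr[i];
    return total;
}

// usage:
// int data[4] = {1, 2, 3, 4};
// struct int_slice s = { data, 4 };
// int t = sum(s);   // no bare decayed pointer, the length comes along for the ride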
Appreciate you saying that!
I'm busy writing some of the most optimized-but-still-portable code I've ever written, and it is very interesting to see how even a slight difference in how you express something can cause a massive difference in execution speed (especially, obviously, in inner loops). Your code is clearly written from within your comfort zone with C, and I'm really impressed by the restraint on display. At the same time, some of the code feels a bit repetitive and would benefit from more general mechanisms. But that would require more effort, and I'm not even sure it would be productive. One place I see this is the argument-parsing code, as well as the way you handle strings: it is all coded very explicitly, which substantially increases the chance of making a mistake.
Another limitation is that using AI to help you write the code means you don't actually understand what it does. This in turn may expose you to side effects you are not able to eliminate, because you did not consider them while writing. It is as if someone else gave you that code and asked you to trust that they did not make any mistakes.
I've also never seen tests written this way in C. Great work.
C was the first programming language I learned when I was still in middle/high school, raising the family PC out of the grave by installing free software - which I learned was mostly built in C. I never had many options for coursework in compsci until I was in college, where we did data structures and algorithms in C++, so I had a leg up as I'd already understood pointers. :-)
Happy to see C appreciated for what it is, a very clean and nice/simple language if you stay away from some of the nuts and bolts. Of course, the accessibility of the underlying nuts and bolts is one of the reasons for using C, so there's a balance.