And I don't think they're idiots; I just think they're wrong. Not many people have read Dijkstra and (?), have deep faith in mathematical beauty, or have even thought about this much, and that doesn't make them idiots.
That's the second time you've ridiculously misrepresented my statements: "anyone disagreeing with you is an idiot" and "deep faith will lead you to the conclusion" (rather than deep faith plus thinking about the actual problem, several convincing arguments, and some ability).
Fortran chose 1-based indexing for a very obvious reason: it was the most direct translation of the mathematics literature it was trying to implement, because matrix notation uses 1-based indexing. MATLAB, a language designed specifically as a high-level language for matrix mathematics, chose it for the same reason. R, a language for statistics, chose 1-based indexing because counting is one of the most fundamental operations in statistics, and counting starts at one.
Mathematicians obviously have no problem switching back and forth between 0-based and 1-based indexing for different domains, so it boggles my mind that computer scientists have turned it into such a huge holy war, and it's even more mind-boggling that 0-based zealots claim to have mathematics on their side.
[1]: https://www.hackerneue.com/item?id=15473169
[2]: https://www.hackerneue.com/item?id=15472933
--
The idea is that the interface to the data structure should ideally closely match the semantic meaning of the data (timestamps, frequencies, etc.) rather than memory addresses/pointers. That lets you program at a higher level of abstraction. More generally, both zero-based and one-based indexing have their natural uses, depending on what you're referring to. Eg: consider floors in a building. If you wish to index the horizontal surfaces separating the spaces, it's more natural to start with zero for the "ground level". If you wish to number the spaces between the surfaces, then it's more natural to start with one for the "first floor". Which one is more convenient depends on the semantics of the problem domain and what ideas you wish to communicate. Being overly attached to one perspective is unhelpful.
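To make that fencepost relationship concrete, here's a minimal sketch in Python (toy names, nothing domain-specific): numbering the surfaces naturally starts at zero, numbering the spaces between them naturally starts at one, and space k always sits between surfaces k - 1 and k.

```python
# A toy building: which numbering is natural depends on what you point at.
n_spaces = 4                        # four habitable levels

# Indexing the horizontal surfaces (slabs): the ground level is surface 0.
surfaces = range(0, n_spaces + 1)   # 0, 1, 2, 3, 4  (n_spaces + 1 surfaces)

# Indexing the spaces between surfaces: the lowest habitable level is space 1.
spaces = range(1, n_spaces + 1)     # 1, 2, 3, 4     (n_spaces spaces)

for k in spaces:
    # Space k sits between surface k - 1 below and surface k above.
    print(f"space {k}: bounded by surfaces {k - 1} and {k}")
```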
To quote myself from those links:
> The way I think of it is that different indexing schemes suit different problems. I want to think carefully about the problem domain and use the most convenient convention. For example, when my array stores a time series, I would like the index to correspond to timestamps (and still be performant, so long as my timestamps can be efficiently mapped to memory locations, which is true for affine transformations, for example). When another array stores the Fourier transform of that time series, I would like to access elements by the frequency they correspond to. That stops me from making annoying indexing errors (eg: off-by-1), because the data structure's interface maps nicely to the problem domain. I find that much easier than the cognitive cost that comes with trying to shoehorn a single convention onto every situation. But it's difficult to appreciate that when thinking of language constructs divorced from specific problem domains, as one typically does when studying data structures and/or algorithms.
> [Regarding offset array indexing...] Think of this feature as blurring the distinction between data (accessing an array) and computation (calling a function). The fact is that arrays as they are used (collections at contiguous memory locations) often carry more information than a dumb list of arbitrary data, and it's very convenient to expose that in their interface.
Now, for example, an array can more closely resemble a cached function computation, because the interfaces to both carry the same semantic meaning.
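As a rough illustration of that point (a hypothetical wrapper sketched in Python, not any particular library's API), here's an array of samples addressed by timestamp rather than by raw position, using the affine mapping mentioned in the quote:

```python
class TimeIndexedSeries:
    """Wrap a flat list of samples so callers address it by timestamp.

    The timestamp -> position mapping is affine (t = t0 + step * position),
    so lookups stay O(1), just like ordinary array indexing.
    """

    def __init__(self, samples, t0, step):
        self._samples = samples
        self._t0 = t0
        self._step = step

    def __getitem__(self, timestamp):
        position, remainder = divmod(timestamp - self._t0, self._step)
        if remainder != 0 or not 0 <= position < len(self._samples):
            raise KeyError(f"no sample at t={timestamp}")
        return self._samples[position]


# One sample every 10 time units, starting at t = 1000.
series = TimeIndexedSeries([0.5, 0.7, 0.2, 0.9], t0=1000, step=10)
print(series[1020])  # 0.2 -- the caller thinks in timestamps, not positions
```

The same shape works for a Fourier transform addressed by frequency: the lookup key carries the domain meaning, and the storage stays a plain contiguous array underneath.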
I hadn't seen it before. If I understand it right, the idea is to treat all vectors as Lisp s-exps, which are themselves 0-indexed (the 0th element is the symbol 'list'), so the data proper starts at 1; that makes sense when you're doing a lot of symbolic manipulation. It's the best argument I've seen for 1-indexing. I'd say two criticisms - 1: I don't know how homoiconic Julia is; if it's not very, then the actual use of this seems very low relative to slicing. 2: It seems to be mixing up indexing a vector itself and indexing the expression that the vector is (like macro-quoted vs unquoted). I can see how it would smooth things in some way, but it feels like a fudge that might cause more pain somehow later.
It depends. In the USA, 1-indexed floor numbers are the norm (the ground level is the first floor). Elsewhere (eg, Australia), 0-indexed floor numbers are the norm (ground floor, then the first floor above it).
I thought about my post after I read it and came to this: given Dijkstra's argument (his is the only convention that can describe both an empty interval and an interval containing the first element without resorting to "unnaturals") and (?)'s argument that it creates a nice homomorphism between real-number intervals and integer slices, it's inarguable to me that 0-based is more mathematically elegant.
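A quick sketch of that in Python (whose ranges and slices are already half-open): with the `a <= i < b` convention, the empty interval, the one-element interval, interval lengths, and splitting an interval all work out without correction terms or stepping below the starting point.

```python
a, b, c = 3, 5, 9

# The empty interval [a, a) needs no number below a.
assert list(range(a, a)) == []

# The interval containing just the first element is [a, a + 1).
assert list(range(a, a + 1)) == [a]

# Lengths are plain differences, with no +1/-1 corrections.
assert len(range(a, b)) == b - a

# Splitting [a, c) at b gives [a, b) and [b, c): no gap, no overlap,
# mirroring how the half-open real intervals [a, b) and [b, c) partition [a, c).
assert list(range(a, b)) + list(range(b, c)) == list(range(a, c))
```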
Where I guess there's space to disagree is this: I believe (almost to the point of faith, I guess) that mathematical/logical elegance is important, and that moving away from it is normally a mistake which leads to more pain in the long run.