The 1980s were not a particularly enlightened time for programming language design; and Dijkstra's opinions seem to carry extra weight mainly because his name has a certain shock and awe factor.
It isn't usual for me to agree with mathematical conventions for notation, but denoting the 1st element of a sequence with a "1" just seems obviously superior. I'm sure there is a culture that counts their first finger as 0, and I expect they're mocked mercilessly for it by all their neighbours. I've been programming for too long to appreciate it myself, but I always assumed it traces back to memory offsets in an array rather than to any principled stance, because counting sequences from 0 would otherwise be a crazy choice.
I've heard the statement "Let's just see if starting with 0 or 1 makes the equations and explanations prettier" quite a few times. For example, a sequence <x, f(x), f(f(x)), ...> is easier to look at if a_0 has f applied 0 times, a_1 has f applied 1 time, and so on.
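For instance, a throwaway sketch in Python (the doubling function here is just a stand-in for any f):

    def f(x):                         # any function would do; doubling is a stand-in
        return 2 * x

    # Build <x, f(x), f(f(x)), ...> for x = 3:
    seq = [3]
    for _ in range(4):
        seq.append(f(seq[-1]))        # seq == [3, 6, 12, 24, 48]

    # With 0-based indexing, seq[n] is f applied n times, with no off-by-one:
    assert seq[0] == 3                # f applied 0 times
    assert seq[2] == f(f(3))          # f applied 2 times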
0-based indexing aligns better with how memory actually works, and is therefore more performant, all things being equal.
Assuming `a` is the address of the beginning of the array, the 0-based index expression on the left is equivalent to the memory access on the right (I'm using C syntax here): `a[i]` is the same as `*(a + i)`.
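A rough Python analogue of that address arithmetic, purely illustrative (the base address and element size are made up):

    base = 0x1000                     # hypothetical start address of the array
    size = 4                          # hypothetical element size in bytes

    # Element i lives at base + i * size; with 0-based indexing the formula
    # needs no correction term, and element 0 sits exactly at the start.
    addresses = [base + i * size for i in range(5)]
    assert addresses[0] == base
    assert addresses[3] == base + 12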
The comment you are replying to essentially said exactly that:
> but I always assumed it traces back to memory offsets in an array rather than to any principled stance, because counting sequences from 0 would otherwise be a crazy choice.
> The 1980s were not a particularly enlightened time for programming language design; and Dijkstra's opinions seem to carry extra weight mainly because his name has a certain shock and awe factor.
Zero-based indexing had nothing to do with Dijkstra's opinion; it came from the practical realities of hardware, memory addressing, and assembly programming.
> I'm sure there is a culture that counts their first finger as 0
Not a one, because zero as a concept was discovered many millennia after humans began counting.
For math too, 0-based indexing is superior. When taking sub-matrices (blocks), 1-based indexing forces you to deal with +1 and -1 terms in the element indices. E.g. the third size-4 block of a 16x16 matrix begins at (3-1)*4+1 = 9 in 1-based indexing, but at 2*4 = 8 in 0-based indexing (where the 2 is naturally the 0-indexed block index).
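A small sketch of that block arithmetic with NumPy (NumPy is 0-based; the names here are just illustrative):

    import numpy as np

    M = np.arange(16 * 16).reshape(16, 16)    # any 16x16 matrix
    k = 4                                      # block size
    b = 2                                      # 0-based index of the "third" block

    # 0-based: the b-th size-k block of rows is simply rows b*k .. (b+1)*k - 1.
    block = M[b * k : (b + 1) * k, :]          # rows 8..11

    # The 1-based bookkeeping for the same rows is (3-1)*4+1 .. 3*4, i.e. 9..12.
    assert block.shape == (4, 16)
    assert block[0, 0] == M[8, 0]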
Also, the origin is at 0, not at 1. If you begin at 1, you've already moved some distance away from the origin at the start.
Just speaking anecdotally, I had the impression that math people prefer 1-based indexing. I've heard that MATLAB is 1-based because it was written by math majors rather than CS majors.
Indeed. I was going to point out that mathematicians choose the index based on whatever is convenient for their problem. It could begin at -3, 2, or whatever. I've never heard a mathematician complain that another mathematician is using the "wrong" index. That's something only programmers seem to do.
That's arguably one of the only downsides of zero-based indexing, and it can be handled easily with negative indexing. Basically all indexing arithmetic is easier with zero-based.
`l[:n]` gives you the first `n` elements of the list `l`. Ideally `l[-n:]` would give you the last `n` elements - but that doesn't work when `n` is zero.
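A quick demonstration of that edge case (the list contents are arbitrary):

    l = [10, 20, 30, 40, 50]

    assert l[:3] == [10, 20, 30]      # first 3 elements: fine
    assert l[:0] == []                # first 0 elements: also fine

    assert l[-3:] == [30, 40, 50]     # last 3 elements: fine
    assert l[-0:] == l                # last 0 elements: -0 is just 0, whole list

    # One workaround is to index from the length instead of using a negative:
    n = 0
    assert l[len(l) - n:] == []       # behaves correctly even when n == 0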
I believe this is why C# introduced a special "index from end" operator, `^`, so you can refer to the end of the array as `^0`.
> Yes, negative indexing as in e.g. Python (so basically "from the end") can be incredibly convenient and works seamlessly when indexes are 0-based.
I'd claim 0-based indexing actually throws an annoying wrench in that. Consider for instance:
    for n in [3, 2, 1, 0]:
        start_window = arr[n : n + 5]
        end_window = arr[-n - 5 : -n]
The start_window slice works fine, but end_window fails when n=0, because -0 is just 0, so the slice stops at the start of the array instead of at the end. We're effectively missing one "fence-post". It would work perfectly fine with MATLAB-style (1-based, inclusive-range) indexing.
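For what it's worth, one way to patch that up while staying 0-based (a sketch; `arr` is just some example sequence) is to map the troublesome -0 to `None`, or to index from `len(arr)`:

    arr = list(range(10))                              # example data

    for n in [3, 2, 1, 0]:
        start_window = arr[n : n + 5]
        # `-n or None` turns the problematic -0 into "slice to the end":
        end_window = arr[-n - 5 : -n or None]
        # Equivalently, avoid negatives by indexing from the length:
        end_window_alt = arr[len(arr) - n - 5 : len(arr) - n]
        assert end_window == end_window_alt
        assert len(start_window) == len(end_window) == 5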