Because programming languages are typically designed for the tiny group of existing programmers rather than the much larger group of future programmers. It's the same reason unnecessary tokens exist, and why 'drop' means delete in databases. It's entirely cultural.
Technically, DROP is used because DELETE is already taken by something that deletes rows, and using DELETE for both tables and rows is a bit scary, so you get DROP. I suspect quite a lot of naming happens that way.
I'd have thought, not being a programmer, that DROP was used because whilst the access paths and relations involving the dropped table are updated, the table isn't necessarily deleted.
DROP updates the logical structure without necessarily acting on the physical storage in a way commensurate with "deletion". You can thus drop a million-tuple table in a millisecond (less, I expect), whilst deleting its rows would take far, far longer (on the order of a million times longer).
DROP <object-type> <object-name> is sort of like a macro for, loosely
DELETE FROM <catalog-relvar-for-object-type> WHERE name = <object-name>
Except that real RDBMSs don't usually have DDL that is really equivalent to DML against system tables. In particular (especially historically, but still in many DBs today), DDL has a different relationship to transaction processing than DML does. So it's a very good thing for clarity and developer intuition not to overload DML keywords for DDL operations, despite the loose similarity between CREATE/DROP in DDL and INSERT/DELETE in DML.
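The catalog analogy can be seen in a minimal sketch using Python's built-in `sqlite3` module (the in-memory database, table, and column names here are made up for illustration): DELETE empties the table but leaves it in the catalog, while DROP removes the catalog entry itself.

```python
# Sketch: contrast DML DELETE (rows) with DDL DROP (catalog entry) in SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(5)])

# DML: DELETE removes rows, but the table survives in the catalog.
conn.execute("DELETE FROM t")
rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
in_catalog = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE name = 't'").fetchone()[0]

# DDL: DROP removes the table itself -- loosely, a DELETE against the catalog.
conn.execute("DROP TABLE t")
gone = conn.execute(
    "SELECT COUNT(*) FROM sqlite_master WHERE name = 't'").fetchone()[0]
```

In SQLite the catalog really is queryable as a table (`sqlite_master`), which makes the "DROP is a macro for DELETE against the catalog" framing unusually literal, even though most RDBMSs hide this.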
>Because programming languages are typically designed for the tiny group of existing programmers...
So this is a dilemma I have while working on a new language. I'd like to go with `:=`, but `=` is absurdly popular, and I'm trying to keep the language as approachable as possible.
I don't think the clarity of `:=` is so compelling that it outweighs the `ew, why are there colons in there` reaction that I think most novice coders would have.
The key isn't whether you use := or =, it's whether you allow assignment in expressions.
My advice: don't allow assignment in expressions. To me, it's like the case-sensitive issue: the language designers think it's a useful feature, but it actually works against most developers.
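Python is a concrete data point here: for most of its life assignment was statement-only, so the classic C typo `if (x = 0)` was simply a syntax error, and when 3.8 finally added assignment-in-expression it deliberately used a distinct `:=` operator. A small sketch:

```python
# In Python, `if x = 0:` is a SyntaxError -- the accidental-assignment bug
# from C cannot be typed. Assignment in expressions exists only via the
# visually distinct walrus operator (PEP 572, Python 3.8+):
import re

line = "user=alice"
if (m := re.match(r"user=(\w+)", line)):   # explicit, opt-in assignment
    user = m.group(1)
```

The design choice worth copying is less "ban it entirely" than "make it impossible to write by accident": `:=` cannot be confused with `==` the way `=` can.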
I definitely agree that assignments should be statement level operations.
I don't think case-folding identifiers is helpful. The language has decreed fooBar is the same as foobar, and that handles the error where you spelled the same idea two different ways, but it fails silently on the error where you spelled two different things a similar way. Worse, there are some people who are very sensitive to case and will be confused, while others will happily type their entire code in all caps.
I think a linter is the best way to catch these issues, and those subjective rules are precisely the sort of thing that need to develop more rapidly than the core parser.
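The silent-failure mode described above can be shown with a toy sketch (this is not any real compiler's symbol table, just a hypothetical case-folding one):

```python
# Toy case-folding symbol table: two identifiers the programmer intended
# to be distinct are silently merged, and the second definition wins.
symbols = {}

def define(name, value):
    symbols[name.casefold()] = value

def lookup(name):
    return symbols[name.casefold()]

define("fooBar", 1)   # one identifier...
define("foobar", 2)   # ...a different intended identifier, merged without error

collided = lookup("fooBar")   # the fooBar definition has been clobbered
```

A case-sensitive language would keep both names apart and let a linter flag the confusing similarity; the folding language has already destroyed the information by the time any tool runs.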
Yes, but again, the issue is whether most developers will be hindered or helped by case-sensitivity in a language. Based upon my experience, identifier case-sensitivity is simply making things harder than they need to be on the developer.
Conceptually, what is the difference between these two identifiers:
myObjectInstance
MyObjectInstance
?
And the key here is the reason for the difference: if it's a typo, then a case-insensitive language design will allow it, and no harm, no foul. If it's not a typo, then who wants to work on a codebase littered with identifiers whose only difference is case? :-)
In Haskell, one is a variable, the other is a type, and that's enforced by the language. It's the same, albeit by convention, in Java. There are a lot of cases where you want to describe a type and a thing, so apple = new Apple() is pretty reasonable.
When I think of case-insensitive languages, I'm thinking of Basic, LISP, SQL, and those don't have a lot of type declarations.
And consider two counter-examples:
my_instance vs myinstance
things vs THINGS
The first shows case-folding is only a partial answer to ambiguous identifiers. The second shows that differences in case can be very obvious to the reader.
Those are motivators to me for pushing this off to the linter: there are a lot of subjective judgements in what should and shouldn't be the same, and having the language keep its rules for identifiers as simple as possible seems like a good separation of concerns.
My final concern is metaprogramming and interoperability. In SQL, for instance, there are bizarre rules to work around case-insensitive identifiers. If another system asks you for "myObjectInstance" and "MyObjectInstance", it has to know your case folding rules to know those two identifiers are the same.
> If it's not a typo, then who wants to work on a codebase littered with identifiers whose only difference is case ? :-)
Ever worked on a Python project that interacts with Javascript, so it's snake and camel case?
I generally agree, I'd just prefer a gofmt-style utility that would just automatically resolve those and tidy everything up. I completely agree that just chucking error messages is a poor answer.
Finally, here's a challenge, if identifiers are going to be folded by the compiler: what locale should be used? In particular, how do you handle I, İ, i and ı?
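The Turkish "four i's" make this concrete, and Python's locale-independent default case mappings show why any single folding rule is wrong for someone:

```python
# Unicode default (non-Turkish) case mappings, as implemented by Python's
# str.lower()/str.upper()/str.casefold():
dotted_capital = "\u0130"   # İ  LATIN CAPITAL LETTER I WITH DOT ABOVE
dotless_small  = "\u0131"   # ı  LATIN SMALL LETTER DOTLESS I

# İ lowercases to 'i' plus a combining dot above -- two code points, so
# folding isn't even length-preserving.
two_codepoints = len(dotted_capital.lower())

# ı uppercases to plain ASCII 'I', so in a case-folded language the
# identifiers `ı` and `I` would collide -- and under default folding,
# `I` pairs with `i`, which is exactly wrong for Turkish text.
folded_dotless = dotless_small.upper()
folded_ascii   = "I".casefold()
```

So a compiler that folds identifiers must either pick a locale (surprising everyone else), use the Unicode default (surprising Turkish and Azerbaijani users), or restrict identifiers to a safe subset; there is no folding rule that is correct for all four letters at once.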
No, in my example they're both references to an object instance - they're simply identifiers. Languages that are case-insensitive tend to force one to use identifiers that are also descriptive as to their usage, which is very helpful when reading code as you can tell a type from a variable from a....
Re: languages: Pascal/Object Pascal is case-insensitive, and is statically-typed.
Re: SQL: all implementations that I'm aware of use case-insensitive identifiers for all of the reasons that I've outlined. Any that don't are problematic, at best.
Re: locales: the way that this is typically handled is by a) restricting the allowed characters to the English alphabet (older), or b) by using Unicode (UTF-16) encoding for source files (newer).