
> The problem with this argument is the user isn't trying to do these things,

I'd argue differently. I'd say the problem isn't that the user is doing those things, it's that the language doesn't know what they're trying to do.

Python's explicit goal was always ergonomics, and it was always ergonomics over speed or annoying compile-time error messages. "Just run the code as written, dammit" was always the goal. I remember when the new class model was introduced, necessitating the introduction of __getattribute__. My first reaction as a C programmer was "gee, you took a speed hit there". A later reaction was to use it to twist the new system into something its inventors possibly never thought of: an LR(1) parser that let you write grammars as regular Python statements.

While they may not have anticipated the language being abused in that particular way, I'm sure the explicit goal was to create a framework that lets any idea be expressed with minimal code. Others have also used the hooks the language provides into how classes are built to create things like pydantic and spyne. Spyne, for example, lets you express the on-the-wire serialisation formats used by RPC as Python class declarations, and then compile them into JSON, XML, SOAP or whatever. SQLAlchemy lets you express SQL using Python syntax, although in a more straightforward way.
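
To make that concrete, here's a minimal sketch (hypothetical names, nothing like the real spyne or pydantic APIs) of the kind of hook involved: a metaclass intercepts the class body as Python builds it, so an ordinary class declaration can be harvested into a wire-format description.

    # Hypothetical sketch; not the real spyne/pydantic API.
    class SchemaMeta(type):
        def __new__(mcls, name, bases, namespace):
            cls = super().__new__(mcls, name, bases, namespace)
            # Harvest the annotated fields the moment the class is built.
            cls._fields = dict(namespace.get("__annotations__", {}))
            return cls

    class Schema(metaclass=SchemaMeta):
        def describe(self):
            # "Compile" the declaration into a wire description.
            return {name: typ.__name__ for name, typ in self._fields.items()}

    class Person(Schema):
        name: str
        age: int

    print(Person().describe())  # -> {'name': 'str', 'age': 'int'}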

All of them are very clever in how they twist the language. Inside those frameworks, "a = b + c" does not mean "add b to c, and place the result in a". In the LR(1) parser, for example, it means "there is a production called 'a', which is a 'b' followed by a 'c'". 'a' in that formulation holds references to 'b' and 'c'. Later the LR(1) parser consumes that, compiling it into something very different. The result is a long way from two's complement addition.
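
Here's a toy version of that trick (hypothetical names, not the real parser) showing how overloading __add__ turns "a = b + c" into grammar construction rather than arithmetic:

    # Toy sketch: "+" is repurposed to mean "followed by".
    class Symbol:
        def __init__(self, name):
            self.name = name

        def __add__(self, other):
            # No addition happens; return a node holding both operands.
            return Sequence(self, other)

    class Sequence(Symbol):
        def __init__(self, left, right):
            super().__init__(f"({left.name} {right.name})")
            self.parts = (left, right)

    b = Symbol("b")
    c = Symbol("c")
    a = b + c           # a is a production: 'b' followed by 'c'
    print(a.name)       # -> (b c)
    print(a.parts[0] is b, a.parts[1] is c)  # -> True True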

It is possible to use a more powerful type system in a similar way. For example, I've seen FPGA designs expressed in Scala. However, because Scala's type system insists on knowing what is going on at compile time, Scala has a fair idea of what the programmer is building, and the compiled result isn't going to be much slower than any other code. Python achieved the same flexibility by abandoning type checking at compile time almost entirely, pushing it all to run time. Thus the compiler has no idea what is going to be executed in the end (the + operation in the LR parser only gets executed once, for example), which is what I meant above by "it's that the language doesn't know what the programmer is trying to do".

You argue that since it's an interpreted language, it's the interpreter's job to figure out what the programmer is trying to do at run time. Surely it can figure out that "a = b + c" really is adding two 32 bit integers that won't overflow. That's true, but it creates a lot of work to do at run time. Which is a roundabout way of saying the same thing as the talk: electing to do it at run time means the language chose flexibility over speed.
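
As a rough illustration of that run-time work (a simplification; CPython's real binary-operator protocol also handles things like subclass precedence), every "+" has to go through a dynamic dispatch along these lines before any actual arithmetic happens:

    # Simplified sketch of what the interpreter does for "b + c".
    def dynamic_add(b, c):
        result = NotImplemented
        add = getattr(type(b), "__add__", None)
        if add is not None:
            result = add(b, c)
        if result is NotImplemented:
            radd = getattr(type(c), "__radd__", None)
            if radd is not None:
                result = radd(c, b)
        if result is NotImplemented:
            raise TypeError(f"unsupported operand types: "
                            f"{type(b).__name__!r} and {type(c).__name__!r}")
        return result

    print(dynamic_add(1, 2))    # -> 3
    print(dynamic_add(1, 2.5))  # -> 3.5 (int.__add__ declines, float.__radd__ handles it)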

You can't always fix this in an interpreter. Javascript has some of the best interpreters around, and they do make the happy path run quickly. But those interpreters come with caveats, usually of the form "if you muck around with the internals of classes, say by replacing function definitions at run time, we abandon all attempts to JIT it". People don't typically do such things in Javascript, but as it happens, Python's design, with its metaclasses, dynamic types created with "type(...)", and "__new__(...)", could almost be said to encourage that coding style. That is, again, a language design choice, and it's one that favours flexibility over speed.
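
A hedged illustration of the sort of dynamism that forces a JIT to de-optimise: the function behind a call site can change at any moment, and whole classes can appear at run time via type(...).

    class Greeter:
        def greet(self):
            return "hello"

    g = Greeter()
    print(g.greet())        # -> hello

    # Replace the function definition at run time; any JIT that had
    # inlined Greeter.greet would now have to throw that code away.
    Greeter.greet = lambda self: "goodbye"
    print(g.greet())        # -> goodbye

    # Build a class dynamically: its method table only exists once
    # this line runs, so nothing about it is knowable in advance.
    Dynamic = type("Dynamic", (Greeter,), {"greet": lambda self: "dynamic"})
    print(Dynamic().greet())  # -> dynamic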


