Does the cognitive energy expended by French speakers to do basic counting condition their brains from early childhood for mathematical proficiency, resulting in so many great mathematicians whose native language was French?
</end_of_joke>
Ah right, I remember hearing somewhere that you guys don't have words for 70, 80, and 90 and instead do this odd sum-of-two thing. I suppose there are worse ways than the reverse German :D
The French language has such words, but Frenchmen don't use them. For example they prefer to say the old fashioned "quatre-vingt-dix" (4 - 20 - 10) instead of the perfectly fine "nonante" that French speakers in Belgium use.
the word for 60 in Danish is tres
the word for 50 in Danish is halvtreds - so basically half 60 (I guess because the original counting system in the Nordic region was based on 20s?), and since Danes don't pronounce the d and the halv goes by quickly, sometimes you get confused about what is being said.
But then the word for 80 is firs, fee-es with a partially swallowed r sound in there somewhere.
and 70 is halvfjerds - half firs.
The word for 90 is halvfems - half fems.
a Dane speaking quickly can really confuse others with these numbers - whether it was 50, 60, 70, 80, or 90 that was said - and then you put the second digit in 'backwards', as mentioned. So 92 is to og halvfems - "toe oh hellfems" - and so forth, but said very quickly, with a tendency not to fully pronounce every word.
The system is actually based on scores, 20, which is called a snes in older Danish, so halvtreds is short for halv tredje snes, the half third score, and 60 is tres, short for tre snese, i.e. three scores and so on. So for the tens between 50 and 90, we count scores, and if it's not a whole number of scores, we name it the half of the score that we are into. It's also preserved in a very infrequently used variant word for 80, firsindstyve, which is just 4 score, more explicitly (tyve is the modern word for twenty). In conclusion: Yes, the Danish number system is relatively silly.
> the original counting system in the Nordic region was based on 20s?
No other Nordic language is like that.
It's probably not a coincidence that it's the same system the French use. Apparently French was the coolest language you could speak in the 1700s, and all the nobility spoke it.
Only the Danes swallowed the "twenty" part of it, so it's no longer possible to deduce any meaning from hearing the word. Add to that the fact that "half" has a universally accepted meaning too, but here it should be understood as "ten less than".
So I think Danish beats French for the most bizarre counting system. And French is far more bizarre than German - all the Germans are guilty of is being careless with the ordering of numerals.
More precisely, French for 175 (cent soixante-quinze) is literally: hundred sixty fifteen. The seventies, eighties (quatre-vingts = four twenties), and nineties (quatre-vingt-dix = four twenties and ten) are a mess in most French dialects.
Norwegian changed via a language reform a few decades ago. "Fem og sytti" used to be the norm (we inherited the reversed numbers from Danish, but not the "halvfjerds" bit, which is effectively "half-way to four, times 20", i.e. 3½ × 20 = 70), and it was still common well into the '80s and '90s. I learned the new form at school, but picked up the old form from my parents.
Danish is in fact slightly more complicated. It's a vigesimal system, base 20: halvfjerds, or "half-fourth", means 3½ times 20. So 175 is rather "hundred five and three-and-a-half score".
I grew up with both the old form and the new one, so I sometimes say it the old way, and I'm almost happy that my kids don't understand it immediately, so I have to correct myself.
Fun fact: it was actually decided in Stortinget (the supreme legislature of Norway) in November 1950 and implemented in July 1951, as far as I know the only time a matter of how to pronounce something has been decided at that level.
When was the last time you used Databricks? You should definitely try it again. Their product offering has improved a lot in the past few years.
> broad feature set
My experience is that the feature sets of Snowflake and Databricks are very similar. Both have time travel support. Snowflake has materialized views, but Databricks has Delta Live Tables. Databricks has a distributed Pandas API, but Snowflake recently introduced Snowpark. Databricks also has autoscaling, and they recently launched a serverless offering that makes autoscaling super fast as well.
Snowflake has much more advanced data security - table, column, and row level security and dynamic data masking policies. The zero-copy cloning is also pretty useful for CI/CD (pretty much the one practical way to do blue-green deployment for a data application).
Databricks has some interesting features (we were originally interested in it as a "nice UI" over our AWS data lake for citizen data scientists - using it for industrialized processing was cost-prohibitive compared to AWS Glue), but the security seems lacking: it only goes down to table level, and only in SQL and Spark; with R you can't have security at all.
I really liked the Databricks UI and integrated visualizations, though - that's where they are better than Snowflake, I think. Of course, they gained those by buying the open-source Redash.io and ending it.
The part that ended our PoC with them was when they gave us a price quote for the expected number of users. The management was like "ok, that sounds reasonable" - until I told them that's just the license and doesn't include EC2 costs; the real cost would be at least twice that. That made everyone angry.
Agreed! GCP should be broken up as well. It is incredibly unfair to third party services that Google is both supplying the platform and the competing services. This is probably a bigger problem on AWS though.
Apple has filed a court case to block the publication of a report. Obviously the contents of the report will be discussed as part of the hearings. So if the hearings were public then the contents would become public as well, which would make the court case pointless. Hence the private hearings.
Comparing Apache Beam with Stripe's API is unfair in my opinion. Apache Beam can be operated outside Google Cloud, but the Stripe API is only useful for Stripe customers (as mentioned by the author). I don't think it's weird that a company provides more support to paying customers. The author should have raised the issue with Google Cloud's support team, instead of creating a Jira ticket for Apache Beam.
Apache Beam _can be_ but this person is talking about using it inside of Google (e.g. as the Dataflow API) and with a Google service. I think the comparison is on the money. That being said I think the best way forward would be both google cloud support & a JIRA ticket.
To a degree, I think the author's complaint about Google's mentality towards open source is on the money: a lot of the OSS work is understaffed on the theory that the community will pick up the slack, even when the primary users of the OSS project would be Google customers.
(Full disclosure: I work on Apache Spark for my day job, have previously worked on Apache Beam for my day job, and have friends who work on BigQuery, so my world view is maybe skewed.)
> Apache Beam _can be_ but this person is talking about using it inside of Google (e.g. as the Dataflow API) and with a Google service.
Right, but it sounds like they filed the bug with Apache Beam, not with Google Cloud. Working through the Beam bug tracker means the team behind it doesn't have any context as to what kind of user they are. And even if they said "I'm using this with Google Cloud", the people they're interacting with there aren't necessarily paid to work on Google-related issues. Sure, a better answer (instead of "it's open source; you can fix it") might have been "you should contact your Google Cloud account rep to report this to them", but still, I think OP reported this bug in the wrong place given their expectations of how it should be handled.
It won't even start to decay until space expands enough that it's warmer than the cosmological background, not that that's more than a rounding error to your calculation.
It looks like a bug. The unhappy path contains both `node++` and `node=node->next`. Note that this is in the code following "Let’s go back to the code we showed for value speculation in C:", which is actually different from the preceding code it's supposed to be a copy of. I guess it's a typo.