Thanks for the feedback! What OS and browsers are you using, please? Did you receive any error messages?
It worked fine for me with the correct messages, including across networks (I tested from my mobile phone network to my home network, not just on the same Wi-Fi connection or the same computer).
It should work in Chrome, Firefox, and Safari across any device and operating system. Normally you don't need to open any firewall ports either. It uses WebRTC: https://en.wikipedia.org/wiki/WebRTC
I am not confident enough to predict the future, but experienced and resourced enough to know that options are freedom and safety. I’m remote and mostly financially independent, my kids are homeschooled, relocating is just experiencing another place and culture for a bit. They’ve spent months living in Europe and Mexico, so this wouldn’t be out of the ordinary for us (besides the extended duration).
Worst case, we’re there long enough (~5 years) that I can get them EU citizenship, giving them more options when they become adults.
Whatever happens on Tuesday, it’s going to be extremely close. The implication is that the US isn’t a lost cause, and that failing to win is a signal that more people need to get involved in politics and push against the tide.
How do you expect improvement/positive change if the people who align with your values all flee? If everyone does, it only accelerates the problematic ideologies that are growing.
As harmful as the undesired outcome might be, things won’t turn into some dangerous authoritarian dictatorship overnight. The guardrails that exist need to be tended and cared for. Throwing up one’s hands and running away seems deeply problematic and arguably more harmful than the “wrong” election result itself if enough people choose the same path.
As a US expat, I am permitted to vote from abroad in my last state of residence, which for me is Florida. So, my progressive vote will still be counted without my having to expose my family to peril. I will also continue to max out FEC contributions to candidates who support democracy and human rights. I owe the US active effort toward improved governance, but I also owe my children a safe environment. I believe what I’ve described balances both.
The guardrails should have prevented Trump's 2016 election, but they didn't. I'm sorry to say that any remaining guardrails are nearly worn down to the nubs, especially with the recent Supreme Court ruling allowing the president to essentially act as a king.
When half the country is angry and foolish enough to select someone like Trump and his MAGAs to lead them, there's only so much you can do other than waiting for the next generation(s) to replace them.
Fie on "Implicit namespace package". If only because making "implicit" explicit is linguistically pointless in that 3-word phrase.
Either "namespace" or "package" is also pointless linguistically. Noun-noun names ("namespace package") in programming are always a smell. Meh, it's a job career that pays the bills rent.
Maybe "namespace" (no dunder init) vs "package" (dunder init) would have saved countless person-years of confusion? Packages and "implicit namespace packages" are not substitutes for one another (fscking parent relative imports!) so there's no reason they need the same nouns.
>If only because making "implicit" explicit is linguistically pointless in that 3-word phrase.
"Implicit" isn't part of the formal terminology PEP 420 introduces. It's just in the title and some other passing descriptive mentions. (The PEP author has posted ITT, so you could probably ask for details.)
>Either "namespace" or "package" is also pointless linguistically.
"Namespace package" distinguishes from "regular package". The two words are not at all synonyms. In Python, "namespace" could also plausibly refer to the set of attributes of an object (actually how package namespacing is implemented: the package is a `module` object, and its contained modules and subpackages are attributes), keys of a dictionary, or names in a variable scope (e.g. "the global namespace" - which, in turn, gets reflected as a dictionary by `globals()`). Meanwhile, a package in a running program is about more than the namespacing it provides: it has additional magic like `__path__`. And in a broader development context, "package" could refer to a distribution package you get from PyPI, which might contain the code for zero or more "import packages" (yes, that is also quasi-standard terminology: https://packaging.python.org/en/latest/discussions/distribut...).
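A tiny stdlib-only sketch of those overloaded meanings (the `pkg` name is just an example):

```python
import types

# 1. "Namespace" as an object's attributes: a package is just a `module`
#    object, and its contents live in its attribute namespace.
pkg = types.ModuleType("pkg")
pkg.x = 1
assert "x" in vars(pkg)

# 2. "Namespace" as a variable scope, reflected as a dictionary:
assert isinstance(globals(), dict)

# 3. A package is more than the namespacing it provides: it carries import
#    machinery like __path__, which plain modules lack.
import email, math
assert hasattr(email, "__path__")      # stdlib package
assert not hasattr(math, "__path__")   # plain (extension) module
```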
>Packages and "implicit namespace packages" are not substitutes for one another (fscking parent relative imports!)
Yes, they are. Both are modeled with objects of the same type in Python, created following the same rules. The absence of `__init__.py` is not why your relative imports fail. They fail because the parent package hasn't been loaded (and thus its `__path__` can't be consulted), which happens because:
1. you've tried to run the child directly, rather than entering the package via an absolute import (from a driver script or by using `-m` - see https://stackoverflow.com/questions/11536764); or
2. you're expecting the leading `.`s in a relative import to ascend through the file hierarchy, but it doesn't work that way - they ascend through the loaded package hierarchy (https://stackoverflow.com/questions/30669474).
(The SO references are admittedly not great - they're full of bad answers from people who didn't properly understand the topic but managed to get something working. Hopefully I'll have much better Q&A about this topic up on Codidact eventually.)
I do relative imports without `__init__.py` all the time. Here's a demo:
$ mkdir package
$ mkdir package/subpackage
$ cat > package/parent.py
print("hello from parent")
$ cat > package/subpackage/child.py
from .. import parent
print("hello from child")
$ python package/subpackage/child.py
[traceback omitted]
ImportError: attempted relative import with no known parent package
$ python -m package.subpackage.child
hello from parent
hello from child
$ cd package/
$ python -m subpackage.child
[traceback omitted]
ImportError: attempted relative import beyond top-level package
>Paraphrasing, "That's not the name it's just the title and used repeatedly therein" seems to cause more than a little confusion.
The phrase "implicit namespace packages" is only used once within the prose of the PEP. But also, the title of the PEP is certainly a separate thing from the name of the feature.
Similarly, nobody says that a project following modern packaging standards is using "A build-system independent format for source trees" (which would make it sound as if there were more than one relevant such format), the title of PEP 517. Instead they say that it's a `pyproject.toml`-based project.
>The extensive response confirms that the words are awfully overloaded in subtle ways,
I agree, basically. This happens all the time in programming, of course. "Package" in the Python ecosystem is perhaps not as bad as, say, `static` in the C++ language; but it's bad and I really wish there were a reasonable way to fix it.
On the other hand, "namespace" here isn't meant as Python-specific jargon. It isn't really meant that way anywhere else, either (e.g. people saying "global namespace" should normally really be saying "global scope"). It's the language of computer science, in the abstract (https://en.wikipedia.org/wiki/Namespace). So of course it ends up referring to all kinds of things (in multiple categories: data types, objects which are instances of those data types, file systems...) which implement the concept of namespacing.
>But, rest assured, I will re-encounter it.
Whenever I browse HN I mainly look for posts about Python specifically; so if for example you ever have an Ask HN about it there's a good chance I can help.
The H100 has 16,000 CUDA cores at 1.2 GHz. My rough calculation is that it can handle 230k concurrent calculations, whereas a 192-core AVX-512 chip (assuming it calculates on 16-bit data) can handle 6k concurrent calculations at 4x the frequency. So, about a 10x difference just on compute, not to mention that memory is an even stronger advantage for GPUs.
A Zen 5 core has four parallel AVX-512 execution units, so it should be able to execute 128 16-bit operations in parallel, or over 24k on 192 cores. However I think the 192-core processors use the compact variant core Zen 5c, and I'm not sure if Zen 5c is quite as capable as the full Zen 5 core.
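A quick sketch of that lane arithmetic (treating the 4-units-per-core figure as an assumption that may not hold for Zen 5c):

```python
LANE_BITS = 512        # AVX-512 register width
ELEM_BITS = 16         # 16-bit elements
UNITS_PER_CORE = 4     # assumed parallel AVX-512 execution units (full Zen 5)
CORES = 192

lanes_per_unit = LANE_BITS // ELEM_BITS           # 32 elements per register
ops_per_core = UNITS_PER_CORE * lanes_per_unit    # 128 parallel 16-bit ops
total_parallel_ops = ops_per_core * CORES         # ~24.5k across the socket
print(ops_per_core, total_parallel_ops)
```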
Right, I found this interesting as a thought exercise and took it from another angle.
Since FMA on double-precision 64-bit floats (VFMADD132PD) takes 4 cycles to execute, this translates to 1.25G ops/s (1.25 GFLOP/s) per core at 5 GHz. At 192 cores this is 240 GFLOP/s, for a single FMA unit. At 2x FMA units per core this becomes 480 GFLOP/s.
For 16-bit operations this becomes 1920 GFLOP/s, or 1.92 TFLOP/s, for FMA workloads.
Similarly, 16-bit FADD workloads are able to sustain more, at 2550 GFLOP/s or 2.55 TFLOP/s, since FADD is a bit cheaper (3 cycles).
This means that for combined half-precision FADD+FMA workloads, Zen 5 at 192 cores should be able to sustain ~4.5 TFLOP/s.
The Nvidia H100, OTOH, per the Wikipedia entries (if correct), can sustain 50-65 TFLOP/s at single precision and 750-1000 TFLOP/s at half precision. Quite a difference.
The execution units are fully pipelined, so although the latency is four cycles, you can receive one result every cycle from each of the execution units.
For a Zen 5 core, that means 16 double-precision FMAs per cycle using AVX-512, so 80 GFLOP/s per core at 5 GHz, or twice that using fp32.
You're absolutely right; I'm not sure why I dumbed down my example to a single instruction. The correct way to estimate this number is to feed the whole pipeline and keep it busy.
This is actually a bit crazy when you stop and think about it. Nowadays CPUs are packing more and more cores per die at somewhat increasing clock frequencies so they are actually coming quite close to the GPUs.
I mean, top of the line Nvidia H100 can sustain ~30 to ~60 TFLOPS whereas Zen 5 with 192 cores can do only half as much, ~15 to ~30 TFLOPS. This is not even a 10x difference.
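The corrected CPU-side estimate works out like this (assuming 16 fp64 FMAs per cycle per core at 5 GHz, and counting an FMA as one op, as above):

```python
FREQ_GHZ = 5.0
CORES = 192
FP64_FMAS_PER_CYCLE = 16  # 2 FMA units x 8 fp64 lanes per 512-bit register

per_core_gflops = FP64_FMAS_PER_CYCLE * FREQ_GHZ       # 80 GFLOP/s fp64
socket_tflops_fp64 = per_core_gflops * CORES / 1000    # ~15.4 TFLOP/s
socket_tflops_fp32 = socket_tflops_fp64 * 2            # ~30.7 TFLOP/s (twice the lanes)
print(socket_tflops_fp64, socket_tflops_fp32)
```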
I agree! I think people are used to comparing against single-threaded execution of non-vectorized code, which uses 0.1% of a modern CPU's compute power.
Where the balance slants all the way back towards GPUs is the tensor units using reduced precision...
They're memory-bandwidth limited; you can basically just estimate the performance from the time it takes to read the entire model from RAM for each token.
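That estimate is a one-liner; the model size and bandwidth figures below are illustrative placeholders, not measurements:

```python
def max_tokens_per_s(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Upper bound on decode speed when every token requires streaming
    all weights from memory once (the memory-bandwidth-limited regime)."""
    return bandwidth_bytes_per_s / model_bytes

# e.g. a 70B-parameter model at 8-bit weights (~70 GB) on ~500 GB/s of bandwidth:
rate = max_tokens_per_s(70e9, 500e9)   # roughly 7 tokens/s
print(rate)
```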
But closed models are clearly slowing. It seems reasonable to expect that as open weight models reach the closed weight model sizes they’ll see the same slowdown.
We have to consider the fact that human prompting + selection of outputs is essentially RLHF, so the models can and will continue to get better over time.
It's not the end of the Internet, it's the beginning of a new era.