
I often reach for the HTML5 boilerplate for things like this:

https://github.com/h5bp/html5-boilerplate/blob/main/dist/ind...



This is what eating your own dog food looks like when you are selling dog food.

Optical atomic clocks based on trapped single ions like this, and also those based on lattices of neutral atoms do not provide a continuous clock signal.

They are used together with a laser that is part of a so-called frequency comb; the comb acts as a frequency divider between the hundreds of THz of the optical signal and the hundreds of MHz or few GHz of a clock signal that can be counted with a digital counter. That counter could serve as a date-and-time clock, except that you would need several such optical clocks to guard against downtime: present optical clocks cannot operate for very long before needing a reset, because the trapped ion gets lost from the trap or the neutral atoms get lost from the optical lattice. You therefore need many of them to implement a continuous time scale.

It is the laser that provides the continuous signal. In this case the laser produces infrared light in the same band as the lasers used for optical-fiber communications, and it is based on glass doped with erbium and ytterbium. The frequency of the laser is adjusted to match a resonance frequency of the trapped ion (in this case a submultiple of that frequency, because the transition used in the aluminum ion lies very high, in the ultraviolet). For very short time intervals, when the laser cannot follow the reference frequency (which must first be filtered of noise), the stability of the laser frequency is determined by a resonant cavity made of silicon, which is transparent to the laser's infrared light and is cooled to a very low temperature to improve its quality factor.

So this is similar to the behavior of a computer's clock, which over long time intervals has the stability of the NTP servers it uses for synchronization, but over short time intervals has the stability of its internal quartz oscillator.

This new optical atomic clock has the lowest uncertainty ever achieved for the value of its reference frequency, but, being a single-trapped-ion clock, it has higher noise than clocks based on lattices of neutral atoms (which can use thousands of atoms instead of one ion), so its output signal must be averaged over long times (e.g. many days) to reach the advertised accuracy.

For short averaging times, e.g. one second, its accuracy is about a thousand times worse than the best attainable (however, its ultimate accuracy is so high that even when averaged for only a few seconds it is already about as good as that of the best microwave clocks based on cesium or hydrogen).
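To make the averaging argument concrete, here is a small numerical sketch (the noise level, units, and sample counts are arbitrary assumptions for illustration, not figures from the actual clock): averaging n independent white-noise frequency readings shrinks the scatter of the mean roughly as 1/sqrt(n), which is why a noisy single-ion clock needs long averaging times to reach its low systematic uncertainty.

```python
import random
import statistics

random.seed(42)

def simulated_clock_readings(n, true_freq=0.0, noise=1.0):
    """One noisy fractional-frequency reading per second.

    White frequency noise around the true value; the noise level is an
    illustrative assumption, not a property of any real clock.
    """
    return [random.gauss(true_freq, noise) for _ in range(n)]

def averaged_uncertainty(n, trials=2000):
    """Scatter of the n-sample mean, estimated over many trials."""
    means = [statistics.fmean(simulated_clock_readings(n)) for _ in range(trials)]
    return statistics.stdev(means)

sigma_1s = averaged_uncertainty(1)      # one-second stability
sigma_100s = averaged_uncertainty(100)  # averaging 100 readings: ~10x better
print(sigma_1s, sigma_100s)
```

Running this shows the 100-sample average scattering roughly ten times less than a single reading, the same 1/sqrt(n) improvement that, over days of averaging, lets the ion clock reach its advertised accuracy.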


Both lambda calculus and interaction nets are confluent. That is, for a term that can be normalised (i.e. evaluated), one obtains the same answer by performing the available reductions in any order (provided the chosen order terminates). For example, for `A (B) (C (D))`, I can choose to first evaluate either `A (B)` or `C (D)` and the final answer will be the same. This is true in both systems (although reduction in interaction nets has more satisfactory properties).
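A toy illustration of confluence (using arithmetic expressions as a stand-in for lambda terms; the tuple representation below is my own, purely illustrative): reducing the available redexes in different orders reaches the same normal form.

```python
# Expressions are nested tuples ('add', a, b) / ('mul', a, b) over integers.
# A "redex" is any node whose children are both integers. We contract one
# redex per step, choosing either the leftmost or the rightmost each time,
# and both strategies reach the same normal form.

def redexes(e, path=()):
    """Yield paths to all reducible nodes in expression e."""
    if isinstance(e, tuple):
        _, a, b = e
        if isinstance(a, int) and isinstance(b, int):
            yield path
        else:
            yield from redexes(a, path + (1,))
            yield from redexes(b, path + (2,))

def reduce_at(e, path):
    """Contract the single redex found at `path`."""
    if not path:
        op, a, b = e
        return a + b if op == 'add' else a * b
    op, a, b = e
    if path[0] == 1:
        return (op, reduce_at(a, path[1:]), b)
    return (op, a, reduce_at(b, path[1:]))

def normalize(e, pick):
    """Reduce to normal form, letting `pick` choose the next redex."""
    while True:
        rs = list(redexes(e))
        if not rs:
            return e
        e = reduce_at(e, pick(rs))

term = ('add', ('mul', 2, 3), ('add', ('mul', 4, 5), 1))
left = normalize(term, lambda rs: rs[0])    # always leftmost redex first
right = normalize(term, lambda rs: rs[-1])  # always rightmost redex first
print(left, right)  # both 27
```

Arithmetic is of course much tamer than full beta reduction, but the order-independence on display is the same confluence property.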

The key reason one may consider interaction nets more parallelisable than lambda calculus is that the key evaluation operation is global in lambda calculus but local in interaction nets. The key evaluation operation in lambda calculus is (beta) reduction. For instance, if one evaluates a lambda term `(\n -> f n n) x`, reduction takes this to the term `f x x`. To do so, one must duplicate the entire argument term `x` to perform the computation, either by 1) physical duplication, or 2) keeping track of references. Both are unsatisfactory solutions with many properties hindering parallelism. As I shall explain, the term `x` may be of unbounded size or be intertwined non-locally with a large part of the control graphs of other terms.
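The duplication performed by beta reduction can be sketched as follows (a minimal, assumption-laden encoding of lambda terms as tuples; the substitution sidesteps variable capture by assuming unique names, which real implementations cannot do):

```python
# Lambda terms: ('var', name) | ('lam', name, body) | ('app', f, a)

def substitute(body, name, arg):
    """Replace every free occurrence of `name` in `body` with `arg`.

    Simplified by assuming all bound names are unique, so no
    capture-avoiding renaming is needed.
    """
    tag = body[0]
    if tag == 'var':
        return arg if body[1] == name else body
    if tag == 'lam':
        if body[1] == name:
            return body  # `name` is shadowed; leave the subterm alone
        return ('lam', body[1], substitute(body[2], name, arg))
    return ('app', substitute(body[1], name, arg),
                   substitute(body[2], name, arg))

def beta(term):
    """One beta step at the root: (\\n -> body) arg  ~>  body[n := arg]."""
    (_, (_, name, body), arg) = term
    return substitute(body, name, arg)

# (\n -> f n n) x  reduces to  f x x: the argument `x` now appears twice,
# copied by the substitution into every use site.
term = ('app',
        ('lam', 'n', ('app', ('app', ('var', 'f'), ('var', 'n')), ('var', 'n'))),
        ('var', 'x'))
reduced = beta(term)
print(reduced)
```

Here `x` is a single variable node, so copying it is cheap; the trouble described above begins when `x` is an unboundedly large term or a closure that other parts of the program also reference.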

If `x` is simply a ground term (i.e. a piece of data), then either duplication or keeping track of references seems an inevitable and reasonable cost, with the usual managed-language issues of garbage collection. If one decides to solve the problem by forcing the argument to always be a ground term, one finds that the only way to do so is to impose eager evaluation: always evaluating the leaves of an expression before its internal nodes. Eager evaluation can easily become unboundedly wasteful when one strives to reuse a general computation for more specific use cases, so one may not prefer an eager evaluation doctrine.
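The wastefulness of strict eagerness can be illustrated with Python generators standing in for lazy evaluation (the `expensive` function, the predicate, and the counts are purely illustrative): a general computation over 1000 candidates is reused by a caller that needs only the first few.

```python
calls = 0

def expensive(n):
    """Stand-in for an arbitrarily costly leaf computation."""
    global calls
    calls += 1
    return n * n

def first_match(values, pred):
    """Generic reusable search: return the first value satisfying pred."""
    for v in values:
        if pred(v):
            return v

# Eager: every square is computed up front, then the search runs.
calls = 0
eager = first_match([expensive(n) for n in range(1000)], lambda v: v > 10)
eager_calls = calls

# Lazy: squares are computed only as the search demands them.
calls = 0
lazy = first_match((expensive(n) for n in range(1000)), lambda v: v > 10)
lazy_calls = calls

print(eager, lazy, eager_calls, lazy_calls)  # 16 16 1000 5
```

Both strategies return the same answer, but the eager version performs all 1000 leaf computations while the lazy one performs only the 5 it actually needs; that gap is unbounded in general.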

However, once one evaluates in an order that is not strictly eager (e.g. lazy evaluation), the terms that one is duplicating or referencing are no longer simple pieces of data, but pieces of a (not necessarily acyclic) control graph, and any referencing logic quickly becomes very complicated. Moreover, the argument `x` could also be a function, and keeping track of references would involve keeping track of closures over different variables and scopes, which complicates the problem of sharing even further.

Thus, one faces a choice. Either one follows an eager evaluation order, in which most of the nodes in a term's expression tree are not yet available for evaluation and new work is only generated as evaluation happens, imposing a global and fairly strict serialised order of execution; or one deals with a big, complicated shared graph, which is also inconvenient to distribute across computational resources.

In contrast with lambda calculus, the key evaluation operation in interaction nets is local. Interaction nets can be seen as more low-level than lambda calculus, and both code and data are represented as graphs of nodes to be evaluated. Thus, a large term is represented as a large net, and regardless of the unbounded size of a term, in one unit of evaluation, only one node from the term's graph is involved.

Given a graph of some 'net' to be evaluated, one can choose any "active" node and begin evaluating right there, and the result of computation in that unit of evaluation will be guaranteed to affect only the nodes originally connected to the evaluated node, no referencing involved. Thus, the problem of computation becomes almost embarrassingly parallel, where workers simply pick any piece of a graph and locally add or remove from that piece of the graph.
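A drastically simplified sketch of this locality (the node kinds, port layout, and rewrite rules below are a toy encoding of unary addition of my own devising, not any real interaction-net runtime): each rewrite reads and writes only one active pair and its directly connected ports, so active pairs can be picked in any order, including concurrently.

```python
import random

def connect(net, a, pa, b, pb):
    """Symmetric wiring: port pa of node a <-> port pb of node b."""
    net[a][1][pa] = (b, pb)
    net[b][1][pb] = (a, pa)

def active_pairs(net):
    """Nodes joined principal-to-principal, with an Add on one side."""
    pairs = set()
    for nid, (kind, ports) in net.items():
        peer, pport = ports[0]
        if pport == 0 and 'Add' in (kind, net[peer][0]):
            pairs.add(frozenset((nid, peer)))
    return [tuple(sorted(p)) for p in pairs]

def rewrite(net, a, b):
    """Apply the local rule for one active pair; only its neighbors change."""
    add, other = (a, b) if net[a][0] == 'Add' else (b, a)
    if net[other][0] == 'S':              # add(s(x), y) ~> s(add(x, y))
        pred, out = net[other][1][1], net[add][1][2]
        connect(net, add, 0, *pred)       # Add descends into the first addend
        connect(net, other, 0, *out)      # the S node moves onto the output
        connect(net, other, 1, add, 2)
    else:                                 # add(z, y) ~> y
        second, out = net[add][1][1], net[add][1][2]
        connect(net, *second, *out)       # wire second addend to the output
        del net[add], net[other]

def run(net):
    while (pairs := active_pairs(net)):
        rewrite(net, *random.choice(pairs))  # any order works: confluence

# Build the net for 2 + 1. S has ports (principal, predecessor);
# Add has ports (first addend, second addend, output).
net = {'Z1': ['Z', [None]], 'S1': ['S', [None] * 2], 'S2': ['S', [None] * 2],
       'Z2': ['Z', [None]], 'S3': ['S', [None] * 2],
       'Add': ['Add', [None] * 3], 'Root': ['Root', [None]]}
connect(net, 'Z1', 0, 'S1', 1); connect(net, 'S1', 0, 'S2', 1)
connect(net, 'S2', 0, 'Add', 0)           # the initial active pair
connect(net, 'Z2', 0, 'S3', 1); connect(net, 'S3', 0, 'Add', 1)
connect(net, 'Add', 2, 'Root', 0)

run(net)

# Read the unary result back from Root: count S nodes down to Z.
node, _ = net['Root'][1][0]
count = 0
while net[node][0] == 'S':
    count += 1
    node, _ = net[node][1][1]
print(count)  # 3
```

Note that `rewrite` never searches the rest of the graph: it touches the two paired nodes and whatever their ports point at, which is exactly the locality that makes worker threads able to grab disjoint active pairs independently.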

This is what is meant when one refers to interaction nets being more parallelisable than lambda calculus.


No surprises here. Sonos has become pretty much unusable since the app "upgrade".

I wanted to start some music playing via Sonos in another room for my dogs earlier while I was on a call.

It took nearly two minutes for the app to update and be able to select my "calming cello music for dogs" playlist.

