
The digital part of the TCXO is interesting. It must be some simple microcontroller with a lookup table that steers the frequency back to the nominal value. These days you really have computation in many basic components, from crystals to flash memories.

Late to the party here, but I didn't see this until now.

Looking at the layout I'm very confident this controller is fully analog, with just a few gates of digital doing control stuff. There's only a few dozen to maaaaybe low hundreds of gates of digital in the whole die, not nearly enough to be any kind of MCU.

And there's a whole bunch of really large resistors and other analog stuff.

I do see a structure that might be some kind of ROM for piecewise linear calibration curves, but it's too obscured by wiring on top for me to draw any definitive conclusions without delayering the chip.


Yes, the typical way this works is that the lookup table is programmed during device calibration and that the microcontroller has a temperature sensor attached and uses a varicap to drive one of the two capacitors attached to the crystal (usually through a coupling capacitor to avoid loading up the circuit too much).

This is nice because it helps keep the crystal on track, but whether the Allan deviation of the circuit as a whole is going to be acceptable across all applicable timescales depends a lot on the time constants of the control circuit.
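To make the lookup-table approach concrete, here is a minimal sketch of such a compensation loop in Python (my own illustration: the calibration points, sensor, and DAC are made up, and a real part would run equivalent fixed-point logic on the die):

```python
import bisect

# Hypothetical factory calibration table: die temperature (deg C) ->
# varicap control voltage (V) that pulls the crystal back on frequency.
# A real part stores something like this in on-die NV memory.
CAL_POINTS = [(-40, 1.92), (-20, 1.55), (0, 1.31), (25, 1.25),
              (50, 1.38), (70, 1.69), (85, 2.04)]
TEMPS = [t for t, _ in CAL_POINTS]

def control_voltage(temp_c: float) -> float:
    """Piecewise-linear interpolation between calibration points."""
    if temp_c <= TEMPS[0]:
        return CAL_POINTS[0][1]
    if temp_c >= TEMPS[-1]:
        return CAL_POINTS[-1][1]
    i = bisect.bisect_right(TEMPS, temp_c)
    (t0, v0), (t1, v1) = CAL_POINTS[i - 1], CAL_POINTS[i]
    return v0 + (temp_c - t0) / (t1 - t0) * (v1 - v0)

# The loop itself would just be: read the temperature sensor, look up
# the correction, drive the varicap through a DAC:
#
#     while True:
#         dac.write(control_voltage(sensor.read_temp_c()))
```

The time constants mentioned above come in through how fast this loop runs and how heavily its output is filtered before it reaches the varicap.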

As a domain this is both fascinating and far more complex than I had ever imagined it to be, but having spent the last couple of months researching this and building (small) prototypes, I've learned enough to have a healthy respect for anything that doesn't have an atomic clock in it and still does better than 10^-7. That is a serious engineering challenge, especially when you're on a tiny budget.

If you can use a GPS-disciplined oscillator then that's one possible solution, but there too you may see short-term deviations that are unacceptable even if the system is very precise in the long term.


Is there really a microcontroller in there? As in a general purpose microprocessor core executing machine code in ROM? Any references for that?

I find it baffling that this would be cost effective. Maybe by dropping in a CPU core and software you save some design cost vs. a more specialized IC. But it must be more expensive per unit to manufacture in a process where you can fit in all those transistors. And these things are manufactured in such quantities that design costs must be a pretty minimal part of the final part price.

This used to be implemented as a purely analog control loop, i.e. opamps and such. After all TCXOs predate the age of ubiquitous CPUs by decades. Even if there is a need for a factory-programmed temperature calibration curve, there are techniques where it can be implemented in a pure analog way, or in a dedicated digital circuit where the transistor count will be much lower vs. a general purpose CPU core.


That microcontroller costs a small fraction of the precision-ground crystal it is boxed in with.

You need a way to calibrate the device after the package is sealed; that implies some smarts, or you're going to end up with a whole raft of extra pins, and that would be costlier than the microcontroller!

I'm sure there are alternative ways, but in this day and age CPUs and small amounts of flash and memory are priced a little bit above the sand they're made of. I have whole units, packaged and with far larger capabilities, for $3 at Q1, and that's with a whole lot of assembly and other costly detailing.

Microchip, one particular embedded controller manufacturer, lists their SMD-packaged PIC16F15213-I/SN, which is much more powerful than what you need here, for $0.33; at Q100 that drops to $0.274. This is a complete packaged device, not a bare die, which would cost a small fraction of that.

Control loops and analog stuff work well, but not if you also want to be able to do calibration after the package is sealed. I'm not aware of any tech that would be fully analog but would have the same flexibility and long-term stability, never mind mechanical stability (microphonics: talking to a crystal is probably the cheapest and easiest way to get FM modulation!). Note that this is different, precision-wise, from a device that simply measures the temperature and applies a compensation based on that; the device you are looking at in this article is easily an order of magnitude better.


just because it is digital doesn't mean it has to be a microcontroller though, right? i see no reason this wouldn't just be a state machine or whatever out of plain old logic.

Well, I've been working with a number of these devices, different brands but with the same or slightly more functionality, and they all have little controllers in them. Some are documented and you can talk to them directly (usually over I2C); others are 'black boxes': you can tell there is something living on the other side of the nominally 'NC' pin, but not what, and you don't have control over it.
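For reference, "talking to them directly" looks roughly like this; a hypothetical sketch using the smbus2 Python package, where the device address and register map are made up for illustration (every real part has its own, documented or not):

```python
from smbus2 import SMBus

OSC_ADDR = 0x52   # hypothetical 7-bit I2C address of the oscillator
REG_TEMP = 0x01   # made-up register numbers, purely illustrative
REG_TRIM = 0x02

with SMBus(1) as bus:                                  # /dev/i2c-1
    raw_temp = bus.read_byte_data(OSC_ADDR, REG_TEMP)  # read die temp code
    bus.write_byte_data(OSC_ADDR, REG_TRIM, 0x40)      # nudge the trim DAC
```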

I also have a couple of very fancy ones that you can compensate and whose NV memory you can write to directly. Those are pretty expensive, $100 or thereabouts but the precision is unreal for a non-governed device.


There's definitely enough transistors in there for a microcontroller. It only takes a few thousand. If you're building a custom integrated circuit in the first place, the cost of a microcontroller core is relatively low, and often the cheapest option. The alternative is to write the logic in a hardware description language (HDL) like Verilog, and implement it with logic gates.

The microcontroller approach uses a fixed number of transistors, with cheap mask ROM scaling with complexity, whereas the HDL approach scales its transistor usage with complexity. The HDL approach usually runs much, much faster and is far less error prone, but takes longer to develop.

Which approach is better depends a lot on the application.


If you implement a temperature-calibration curve by analog means, it will drift in time, unless you use very high-quality and expensive components.

Calibrations done with a microcontroller have replaced those done with analog components in most applications, because the total cost is reduced in this way.

Even a relatively powerful 32-bit ARM microcontroller costs a fraction of a dollar. Good analog components, with guaranteed behavior in temperature and in time, are usually more expensive than microcontrollers.


Also cheap OCXOs.

This depends on how close to the solder joint (or to the board) you are trimming. If you're already cutting into the solder together with the component lead, then it's too close and can affect the quality of the joint. I'm sure the NASA soldering manuals show this in great detail.

The sum of human knowledge is more than enough to come up with innovative ideas, and not every field works directly with the physical world. Still, I would say there's enough information in written history to create a virtual simulation of a 3D world with all physical laws applying (to a certain degree, because computation is limited).

What current LLMs lack is the inner motivation to create something on their own without being prompted. To think in their free time (whatever that means for batch, on-demand processing), to reflect and learn, eventually to self-modify.

I have a simple brain, limited knowledge, limited attention span, limited context memory. Yet I create stuff based on what I see and read online. Nothing special; sometimes it's based on someone else's project, sometimes on my own ideas, which I have no doubt aren't that unique among 8 billion other people. Yet consulting with AI provides me with more ideas applicable to my current vision of what I want to achieve. Sure, it's mostly based on generally known (though not always known to me) good practices. But my thoughts work the same way, only more limited by what I have slowly learned so far in my life.


> virtual simulation of 3d world

Virtual simulations are not substitutable for the physical world. They are fundamentally different theory problems that have almost no overlap in applicability. You could in principle create a simulation with the same mathematical properties as the physical world but no one has ever done that. I'm not sure if we even know how.

Physical world dynamics are metastable and non-linear at every resolution. The models we do build are created from sparse irregular samples with large error rates; you often have to do complex inference to know if a piece of data even represents something real. All of this largely breaks the assumptions of our tidy sampling theorems in mathematics. The problem of physical world inference has been studied for a couple decades in the defense and mapping industries; we already have a pretty good understanding of why LLM-style AI is uniquely bad at inference in this domain, and it mostly comes down to the architectural inability to represent it.

Grounded estimates of the minimum quantity of training data required to build a reliable model of physical world dynamics, given the above properties, run to many exabytes. This data exists, so that is not a problem. The models will be orders of magnitude larger than current LLMs. Even if you solve the computer science and theory problems around representation so that learning and inference are efficient, few people are prepared for the scale of it.

(source: many years doing frontier R&D on these problems)


> You could in principle create a simulation with the same mathematical properties as the physical world but no one has ever done that. I'm not sure if we even know how.

What do you mean by that? Simulating physics is a rich field, which incidentally was one of the main drivers of parallel/super computing before AI came along.


The mapping of the physical world onto a computer representation introduces idiosyncratic measurement issues for every data point. The idiosyncratic bias, errors, and non-repeatability changes dynamically at every point in space and time, so it can be modeled neither globally nor statically. Some idiosyncratic bias exhibits coupling across space and time.

Reconstructing ground truth from these measurements, which is what you really want to train on, is a difficult open inference problem. The idiosyncratic effects induce large changes in the relationships learnable from the data model. Many measurements map to things that aren't real. How badly that non-reality can break your inference is context dependent. Because the samples are sparse and irregular, you have to constantly model the noise floor to make sure there is actually some signal in the synthesized "ground truth".

In simulated physics, there are no idiosyncratic measurement issues. Every data point is deterministic, repeatable, and well-behaved. There is also much less algorithmic information, so learning is simpler. It is a trivial problem by comparison. Using simulations to train physical world models is skipping over all the hard parts.

I've worked in HPC, including physics models. Taking a standard physics simulation and introducing representative idiosyncratic measurement effects seems difficult. I don't think we've ever built a physics simulation with remotely the quantity and complexity of fine structure this would require.
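As a toy illustration of that gap (my own sketch, not from any production pipeline): a simulated field can be queried exactly, anywhere, while real-world measurements arrive sparse and irregular, with bias and noise that themselves vary across space, plus detections that correspond to nothing real:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_field(x):
    """Stand-in 'physics': deterministic and queryable anywhere."""
    return np.sin(3 * x) + 0.3 * np.sin(17 * x)

# Simulation-style data: dense, regular, exact.
x_sim = np.linspace(0.0, 1.0, 1000)
y_sim = true_field(x_sim)

# Measurement-style data: sparse, irregular, with spatially varying
# bias and noise floor (the 'idiosyncratic' effects)...
x_obs = np.sort(rng.uniform(0.0, 1.0, 40))
bias = 0.2 * np.sin(40 * x_obs)        # changes point to point
sigma = 0.05 + 0.3 * x_obs             # so does the noise level
y_obs = true_field(x_obs) + bias + rng.normal(0.0, sigma)

# ...plus outright spurious returns that represent nothing real.
y_ghost = rng.uniform(-1.5, 1.5, 5)
```

Training on the first kind of data tells you little about inference from the second kind.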


Is this like some scale-independent version of Heisenberg's Uncertainty Principle?

I'm probably missing most of your point, but wouldn't the fact that we have inverse problems being applied in real-world situations somewhat contradict your qualms? In those cases too, we have to deal with noisy real-world information.

I'll admit I'm not very familiar with that type of work - I'm in the forward solve business - but if assumptions are made on the sensor noise distribution, couldn't those be inferred by more generic models? I realize I'm talking about adding a loop on top of an inverse problem loop, which is two steps away (just stuffing a forward solve in a loop is already not very common due to cost and engineering difficulty).

Or better yet, one could probably "primal-adjoint" this and just solve at once for physical parameters and noise model, too. They're but two differentiable things in the way of a loss function.
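A toy version of that "solve for the physics and the noise at once" idea, under the simplest possible assumptions (one physical parameter a, one global noise scale s, Gaussian likelihood, hand-derived gradients); this is my own sketch, not how any particular solver does it:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)
y = 2.5 * x + rng.normal(0.0, 0.3, 200)   # ground truth: a=2.5, s=0.3

a, s = 0.0, 1.0                           # initial guesses
lr = 0.01
n = len(x)
for _ in range(5000):
    r = y - a * x                         # residuals
    # Joint loss: Gaussian negative log-likelihood, up to a constant:
    #   NLL(a, s) = sum(r^2) / (2 s^2) + n * log(s)
    grad_a = -np.sum(x * r) / s**2
    grad_s = -np.sum(r**2) / s**3 + n / s
    a -= lr * grad_a / n                  # per-sample step for stability
    s -= lr * grad_s / n

print(a, s)   # converges to roughly 2.5 and 0.3
```

In a real problem a would be a whole field, s would vary idiosyncratically per measurement, and an autodiff framework would supply the gradients, but the shape of the joint loss is the same.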


I guess you need two things to make that happen. First, more specialization among models and an ability to evolve; otherwise you get all instances thinking roughly the same thing, or a deer-in-the-headlights situation where they don't know which of the millions of options they should think about. Second, fewer guardrails; there's only so much you can do by pure thought.

The problem is, idk if we're ready to have millions of distinct, evolving, self-executing models running wild without guardrails. It seems like a contradiction: you can't achieve true cognition from a machine while artificially restricting its boundaries, and you can't lift the boundaries without impacting safety.


Unless you use your own local models, you don't even know when OpenAI or Anthropic has tweaked the model, and by how much. One week it's version x, the next week it's version y. Just like your operating system is continuously evolving, from smaller patches of specific apps to a whole new kernel version and a new OS release.

There is still a huge gap between a model continuously updating itself and weekly patches by a specialist team. The former would make things unpredictable.

Of course they aren't polluters in the sense of generating some kind of smoke themselves. But they do consume megawatts upon megawatts of power that has to be generated somewhere. It's not often that you have the luxury of building near a nuclear power plant. And in the end you're still releasing those megawatts as heat into the atmosphere.


Yeah, one may not use it, but it's hard to ignore when Office apps suggest you save the document to the cloud as a default. I do avoid it and don't really need any collaboration, but I understand that I'm in the minority. On my home workstation (which is mainly used for video editing) I have only a local account so I don't get sucked into more MS services. But at this point you have to actively work around the default setup with an online account and cloud apps, so it is indeed hard to ignore.


In my country and city the small shops are largely stocked by buying the same things from larger shops, combined with their own resupply network. So you can either walk 100 m to the corner shop and pay a couple dozen percent extra, or walk 500 m to the nearest Lidl or similar and save on basically the same products.


I looked up the Libet experiment:

"Implications

The experiment raised significant questions about free will and determinism. While it suggested that unconscious brain activity precedes conscious decision-making, Libet argued that this does not negate free will, as individuals can still choose to suppress actions initiated by unconscious processes."


It's been repeated a huge number of times since, and widely debated. When Libet first did the experiment it was only something like 200 ms before the mind became consciously aware of the decision. More recent studies have shown they can predict actions up to 7-10 seconds before the subject is aware of having made a decision.

It's pretty hard to argue that you're really "free" to make a different decision if your body knew which you would choose 7 seconds before you became aware of it.

I mean, those long term predictions were only something like 60% accurate, but still, the preponderance of evidence says that those decisions are deterministic and we keep finding new ways to predict the outcome sooner and with higher accuracy.

https://pubmed.ncbi.nlm.nih.gov/18408715/


"I conducted an experiment where I instructed experienced drivers to follow a path in a parking lot laid out with traffic cones, and found that we were able to predict the trajectory of the car with greater than 60% accuracy. Therefore drivers do not have free will to just dodge the cones and drive arbitrarily from the start to the finish."

Clearly, that conclusion would be patently absurd to draw from that experiment. There are so many expectation and observation effects that go into the very setup from the beginning. Humans generally follow directions, particularly when a guy in a labcoat is giving them.

> At some point, when they felt the urge to do so, they were to freely decide between one of two buttons, operated by the left and right index fingers, and press it immediately. [0]

Wow. TWO whole choices to choose from! Human minds tend to pre-think their choice between one of two fingers to wiggle, therefore free will doesn't exist.

> It's pretty hard to argue that you're really "free" to make a different decision if your body knew which you would choose 7 seconds before you became aware of it.

To really spell it out since the analogy/satire may be lost: You're free to refrain from pressing either button during the prompt. You're free to press both buttons at the same time. You're free to mash them rapidly and randomly throughout the whole experiment. You're free to walk into the fMRI room with a bag full of steel BB's and cause days of downtime and thousands of dollars in damage. Folks generally don't do those things because of conditioning.

[0] - http://behavioralhealth2000.com/wp-content/uploads/2017/10/U...


If that were the only evidence I might agree that alternative explanations are as likely, but I cited only one of many studies that show similar outcomes. There are loads of other studies, done with entirely different methodologies, indicating that most human introspection is better described as post hoc confabulation. That is to say, we don't use reason to make decisions so much as we make decisions and then justify them with reasons. Nisbett and Wilson were showing it experimentally as far back as the late '70s. [0] It's been confirmed in different forms hundreds of times since.

Certainly we can come up with some alternative theories (like "free will") to explain it all away, but the simplest (therefore most likely correct) answer is just that we're basically statistical state machines and are as deterministic as a similar computational system.

To be clear, I'm not saying that metacognition doesn't exist. Just that I've never seen any reason to believe it's very different from current thinking models that just feed an output back in as another input.

[0] - https://home.csulb.edu/~cwallis/382/readings/482/nisbett%20s...


Can AVIF display 10-bit HDR with the larger color gamut that any modern phone is capable of capturing nowadays?


> Can AVIF display 10-bit HDR with the larger color gamut that any modern phone is capable of capturing nowadays?

Sure, 12-bit too, with HDR transfer functions (PQ and HLG), wide-gamut primaries (BT.2020, P3, etc.), and high-dynamic-range metadata (ITU/CTA mastering metadata, content light level metadata).

JPEG XL matches or exceeds these capabilities on paper, but not in practice. The reality is that the world is going to support the JPEG XL capabilities that Apple supports, and probably not much more.


if you actually read your parent comment: "typical web image quality"


Typical web image quality is like it is partly because of lack of support. It’s literally more difficult to show a static HDR photo than a whole video!


PNG supports HDR with up to 16 bits per channel, see https://www.w3.org/TR/png-3/ and the cICP, mDCV and cLLI chunks.


With incredibly bad compression ratios.


HDR should not be "typical web" anything. It's insane that websites are allowed to override my system brightness setting through HDR media. There's so much stuff out there that literally hurts my eyes if I've set my brightness such that pure white (SDR FFFFFF) is a comfortable light level.

I want JXL in web browsers, but without HDR support.


There's nothing stopping browsers from tone mapping[1] those HDR images using your tone mapping preference.

[1]: https://en.wikipedia.org/wiki/Tone_mapping
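For instance, a soft-knee compressor that passes SDR-range content through untouched and rolls HDR peaks off toward the user's chosen white point would do it. A sketch, using the BT.2408 reference SDR white of 203 nits as the default; the exact curve is my own invention:

```python
def tone_map_nits(nits: float, sdr_white: float = 203.0,
                  knee: float = 0.8) -> float:
    """Compress HDR luminance so nothing exceeds the user's SDR white.
    Below the knee pixels pass through unchanged; above it, brightness
    rolls off smoothly and only asymptotically approaches sdr_white."""
    x = nits / sdr_white
    if x <= knee:
        return nits                        # SDR-range content untouched
    t, span = x - knee, 1.0 - knee
    # Rational soft knee: maps (knee, inf) onto (knee, 1.0) with a
    # continuous first derivative at the knee point.
    return sdr_white * (knee + span * t / (t + span))
```

Applied per pixel in linear light, an HDR image then degrades gracefully into an SDR-looking one instead of blowing past the user's brightness setting.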


What does that achieve? Isn't it simpler to just not support HDR than to support HDR but tone map away the HDR effect?

Anyway, which web browsers have a setting to tone map HDR images such that they look like SDR images? (And why should "don't physically hurt my eyes" be an opt-in setting anyway instead of just the default?)


> What does that achieve?

Because then a user who wants to see the HDR image in its full glory can do so. If the base image is not HDR, then there is nothing they can do about it.

> And why should "don't physically hurt my eyes" be an opt-in setting anyway instead of just the default?

While I very much support more HDR in the online world, I fully agree with you here.

However, I suspect the reason will boil down to what it usually does: almost no users change the default settings ever. And so, any default which goes the other way will invariably lead to a ton of support cases of "why doesn't this work".

However, web browsers are dark-mode aware; they could be HDR-aware as well and do what you prefer based on that.


What user wants the web to look like this? https://floss.social/@mort/115147174361502259


That video is clearly not encoded correctly. If it were the levels would match the background, given there is no actual HDR content visible in that video frame.

Anyway, even if the video was of a lovely nature scene in proper HDR, you might still find it jarring compared to the surrounding non-HDR desktop elements. I might too, depending on the specifics.

However, like I said, it's up to the browser to handle this.

One suggestion I saw mentioned by some browser devs was to make the default to tone map HDR if the page is not viewed in fullscreen mode, and switch to full HDR range if it is fullscreen.

Even if that doesn't become the default, it could be a behavior the browser could let the user select.


> That video is clearly not encoded correctly.

Actually I forgot about auto-HDR conversion of SDR videos which some operating systems do. So it might not be the video itself, but rather the OS and video driver ruining things in this case.


Ideally, browsers should just not support HDR.


Well I strongly disagree on that point.

Just because we're in the infancy of wide HDR adoption and thus experience some niggling issues while software folks work out the kinks isn't a good reason to just wholesale forego the feature in such a crucial piece of infrastructure.

Sure, if you don't want HDR in the browser I do think there should be a browser option to let you achieve that. I don't want to force it on everyone out there.

Keep in mind the screenshot you showed is how things looked on my Windows until I changed the auto-HDR option. It wasn't the browser that did it, it was completely innocent.

It was just so long ago I completely forgot I had changed that OS configuration.


If you want to avoid eye pain then you want caps on how much brightness can be in what percent of the image, not to throw the baby out with the bathwater and disable it entirely.

And if you're speaking from iPhone experience, my understanding is that the main problem there isn't extra-bright things in the image, it's the renderer ignoring your brightness settings when HDR shows up, which is obviously stupid and not a problem with HDR in general.


If the brightness cap of the HDR image is full SDR brightness, what value remains in HDR? As far as I can see, it's all bath water, no baby


> If the brightness cap of the HDR image is full SDR brightness, what value remains in HDR?

If you set #ffffff to be a comfortable max, then that would be the brightness cap for HDR flares that fill the entire screen.

But filling the entire screen like that rarely happens. Smaller flares would have a higher cap.

For example, let's say an HDR scene has an average brightness that's 55% of #ffffff, but a tenth of the screen is up at 200% of #ffffff. That should give you a visually impressive boosted range without blinding you.
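A sketch of what such an area-dependent cap could look like; the curve and the numbers are purely illustrative, not taken from any standard:

```python
import math

def peak_allowed(area_fraction: float, sdr_white: float = 1.0,
                 tiny_boost: float = 4.0) -> float:
    """Illustrative policy: the smaller the bright region, the more
    headroom above the user's SDR white it gets. A full-screen flash
    gets no boost at all; a tiny highlight gets up to tiny_boost x."""
    a = min(max(area_fraction, 1e-4), 1.0)
    # Interpolate the boost geometrically between a=0.01% and a=100%.
    frac = math.log(a / 1e-4) / math.log(1.0 / 1e-4)
    return sdr_white * tiny_boost ** (1.0 - frac)
```

With these numbers a patch covering a tenth of the screen would be allowed roughly 1.4x the SDR white; tune tiny_boost to taste.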


Oh.

I don't want the ability for 10% of the screen to be so bright it hurts my eyes. That's the exact thing I want to avoid. I don't understand why you think your suggestion would help. I want SDR FFFFFF to be the brightest any part of my screen goes to, because that's what I've configured to be at a comfortable value using my OS brightness controls.


I strongly doubt that the brightness to hurt your eyes is the same for 10% of the screen and 100% of the screen.

I am not suggesting eye hurting. The opposite really, I'm suggesting a curve that stays similarly comfortable at all sizes.


I don't want any one part of my screen to be a stupidly bright point light. It's not just the total amount of photons that matters.


It is not just the total amount.

But it's not the brightest spot either.

It's in between.


I just don't want your "in between" "only hurt my eyes a little" solution. I don't see how that's so hard to understand. I set my brightness so that SDR FFFFFF is a comfortable max brightness. I don't understand why web content should be allowed to go brighter than that.


I'm suggesting something that WON'T hurt your eyes. I don't see how that's so hard to understand.

You set a comfortable max brightness for the entire screen.

Comfortable max brightness for small parts of the screen is a different brightness. Comfortable. NO eye hurting.


It's still uncomfortable to have 10% of the screen get ridiculously bright.


Yes, it's uncomfortable to have it get "ridiculously" bright.

But there's a level that is comfortable that is higher than what you set for FFFFFF.

And the comfortable level for 1% of the screen is even higher.

HDR could take advantage of that to make more realistic scenes without making you uncomfortable. If it was coded right to respect your limits. Which it probably isn't right now. But it could be.


I severely doubt that I could ever be comfortable with 10% of my screen getting much brighter than the value I set as max brightness.

But say you're right. Now you've achieved images looking completely out of place. You've achieved making the surrounding GUI look grey instead of white. And the screen looks broken when it suddenly dims after switching tabs away from one with an HDR video. What's the point? Even ignoring the painful aspects (which is a big thing to ignore, since my laptop currently physically hurts me at night with no setting to make it not hurt me, which I don't appreciate), you're just making the experience of browsing the web worse. Why?


In general, people report that HDR content looks more realistic and pretty. That's the point, if it can be done without hurting you.


Do they? Do people report that an HDR image on a web page that takes up roughly 10% of the screen looks more realistic? Do they report that an HDR YouTube video, which mostly consists of a screen recording with the recorded SDR FFF being mapped to the brightness of the sun, looks pretty? Do people like when their light-mode GUI suddenly turns grey as a part of it becomes 10x the brightness of what used to be white? (see e.g https://floss.social/@mort/115147174361502259)

Because that's what HDR web content is.

HDR movies playing on a livingroom TV? Sure, nothing against that. I mean it's stupid that it tries to achieve some kind of absolute brightness, but in principle, some form of "brighter than SDR FFF" could make sense there. But for web content, surrounded by an SDR GUI?


> when their light-mode GUI suddenly turns grey as a part of it becomes 10x the brightness of what used to be white

I don't know why you're asking me about examples that violate the rules I proposed. No I don't want that.

And obviously boosting the brightness of a screen capture is bad. It would look bad in SDR too. I don't know why you're even bringing it up. I am aware that HDR can be done wrong...

But for HDR videos where the HDR actually makes sense, yeah it's fine for highlights in the video to be a little brighter than the GUI around them, or for tiny little blips to be significantly brighter. Not enough to make it look gray like the misbehavior you linked.


it actually is somewhat an HDR problem, because the HDR standards made some dumb choices. SDR standardizes relative brightness, but HDR uses absolute brightness, even though that's an obviously dumb idea, and in practice no one with a brain actually implements it.
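For context, the PQ transfer function (SMPTE ST 2084) used by most HDR video is exactly that absolute mapping: a code value decodes to a fixed luminance in nits, regardless of the viewer's display or room. The constants below are from the standard:

```python
def pq_eotf(e: float) -> float:
    """SMPTE ST 2084 (PQ) EOTF: nonlinear code value in [0, 1] ->
    absolute luminance in cd/m^2 (nits), up to 10,000 nits."""
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    p = e ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

# pq_eotf(1.0) == 10000.0: the top code value means 10,000 nits,
# no matter what display or viewing environment it lands on.
```

(HLG, the other common HDR transfer function, is relative and scene-referred, which is part of why broadcasters preferred it.)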


In a modern image chain, capture is more often than not HDR.

These images are then graded for HDR or SDR. I.e., sacrifices are made on the image data such that it is suitable for a display standard.

If you have an HDR image, it's relatively easy to tone-map that into SDR space, see e.g. BT.2408 for an approach in Video.

The underlying problem here is that the Web isn't ready for HDR at all, and I'm almost 100% confident browsers don't do the right things yet. HDR displays have enormous variance, from "slightly above SDR" to experimental displays at Dolby Labs. So to display an image correctly, you need to render it properly to the display's capabilities. Likewise if you want to display an HDR image on an SDR monitor. I.e., tone mapping is a required part of the solution.

A correctly graded HDR image taken of the real world will have something like 95% of the pixel values falling within your typical SDR (Rec.709/sRGB) range. You only use the "physically hurt my eyes" values sparingly, and you take the room conditions into consideration when designing the peak value. As an example: cinemas using DCI-P3 peak at 48 nits, because the cinema is completely dark. 48 nits is more than enough for a pure white in that environment. But take that image and put it on a display sitting indoors during the day, and it's not nearly enough for white. Add HDR peaks into this, and it's easy to see that in a cinema you probably shouldn't peak at 1000 nits (which is about 4.x stops of light above the DCI-P3 peak). In short: rendering to the display's capabilities requires that you probe the light conditions in the room.

It's also why you shouldn't be able to manipulate brightness on an HDR display. We need that to be part of the image rendering chain such that the right decisions can be made.



How about websites just straight up aren't allowed to physically hurt me, by default?


Web sites aren’t made for just you. If images from your screen are causing you issues, that is a you / your device problem, not a web site problem.


I agree, it's not a web site problem. It's a web standards problem that it's possible for web sites to do that.


Note the spec does recommend providing a user option: https://drafts.csswg.org/css-color-hdr-1/#a11y


You asked “which web browsers have a setting to tone map HDR images such that they look like SDR images?”; I answered. Were you not actually looking for a solution?


I was looking for a setting, not a hack.

