Hacker News | dwaite's comments

However, they don't need to learn YAML or read configuration manuals to dismiss an ad.

Speaking of technical gibberish...

Because there are people who care about Free software from a philosophical standpoint on how societies should function and interact.

The community aspect of free software both pushes for more people to participate and, often, for other groups to be excluded as "wrong" or "evil".

But that community offers only secondary benefits - economic factors, risk aversion, functionality, and so on - to those who are authors or painters or photographers rather than software developers. The FLOSS communities are almost invariably driven by hobbyists and developers rather than authors, artists, gamers, and the like - people whose interests lie outside of tinkering with and/or improving software.

The BSDs were never really a movement in that sense, and macOS is still just a product even if there are enthusiastic users of them both.

Similarly on the Linux side: Android, Steam Deck, and countless IoT devices are examples of successful products where the Linux aspect of them is not really even advertised.


More likely the UX team touched AVP last, so some of the design language influenced what they were building.

The goal is most likely to unify the experience around iPadOS, so that one codebase ports down to the phone and watch and over to the Mac and AVP.

The delta between Mac and iPad UX elements shrinks every release. The latest one gave the iPad a menu bar and multi-window support.

Looking at it from a certain angle: for a lot of large companies, the iOS codebase is the only one with a native team - they might not even create larger views for an iPad-native version, and may instead ship Electron for the macOS release. Apple is trying to recruit those native mobile teams to support native releases for the whole ecosystem.


That would be interesting if native Swift apps worked and felt better than Electron apps. But they are not much better, and only consume less RAM (not that big of a deal outside of Apple hardware).

Does anyone know what this adds beyond AccessorySetupKit, which shipped over a year ago?


The problem is you can’t regulate interoperability where it doesn’t exist.

What does it mean to open the "default map app"? Maps apps typically act as a native rendering of a web site, and have their own web-parameter-based APIs for locations, navigation, and points of interest, as well as for customizing informational layers.

So if I set, say, Bing Maps to be the default map app, does that mean:

1. The OS hijacks attempts to link to sites such as Google Maps (https://maps.google.com) and Apple Maps (https://maps.apple.com) and sends them to Bing Maps instead?

2. Bing Maps reverse engineers as much as possible of the various other mapping products and tries to support them with roughly equivalent features or error messages?

What the default maps app setting Apple created does is define an entirely new geo-navigation URI scheme, plus an entitlement for apps which wish to support it as the default map app. This appears to be roughly limited to a subset of parameters common between Apple and Google Maps.

So this setting in the EU and Japan... mostly does nothing currently. Every developer needs to change their native apps and web pages to call out to this new custom scheme that only works on Apple platforms. Each of these mapping apps needs to support this scheme. That hasn't happened yet.
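
To make that concrete, here is a caller-side sketch of what adopting such a scheme looks like. The scheme name and parameters are hypothetical, invented for illustration - this is not Apple's documented API:

    import UIKit

    // Hypothetical sketch: hand a destination to whichever app the user
    // chose as the default. The OS, not the caller, resolves which
    // installed app holds the default-maps entitlement.
    func openInDefaultMapApp(latitude: Double, longitude: Double, label: String) {
        var components = URLComponents()
        components.scheme = "geo-navigation"  // assumed scheme name
        components.queryItems = [
            URLQueryItem(name: "ll", value: "\(latitude),\(longitude)"),
            URLQueryItem(name: "q", value: label)
        ]
        guard let url = components.url else { return }
        UIApplication.shared.open(url)
    }

Note that nothing here rewrites existing https://maps.google.com or https://maps.apple.com links - every caller has to opt in to the new scheme explicitly.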

The EU (and in this case Japan) gets early access, and with it potential exposure to multiple breaking revisions.

It is also certainly possible that a good number of web/app developers will decide they don't _want_ to support multiple mapping apps - having only verified that one or two of them actually provide proper navigation/visualization/POI - and that the whole concept is flawed.


> For example, you need to root and patch your Bluetooth stack on your phone if you want to use all of your AirPods features on Android, and not because Android is doing something wrong, it's because the Android Bluetooth stack actually sticks to the spec and AirPods don't.

It’s a mix of bad Bluetooth implementations still on the Android side, and Apple extensions to cram audio features into the BLE envelope.

> And even when you do that, you can't do native AAC streaming like you can with iOS/macOS. Even if you're listening to AAC encoded audio, it'll be transcoded again as 256kbps AAC over Bluetooth.

How would this be Apple’s fault if the OS audio stack can’t do direct AAC streaming? Or are you saying the headphones themselves decode, re-encode and then re-decode the AAC?


> How would this be Apple’s fault if the OS audio stack can’t do direct AAC streaming? Or are you saying the headphones themselves decode, re-encode and then re-decode the AAC?

There are multiple Bluetooth standards for lossless audio that work across platforms. Instead of implementing those features, Apple uses a proprietary protocol to half-ass it only for the case of AAC. Even in that case, it requires a proprietary Bluetooth stack to work. Without that proprietary stack, the AirPods default to low-quality transcoding of audio streams at 256kbps, and don't offer true high-quality or lossless audio playback. So even in the one case where AirPods offer some semblance of lossless playback, it's non-standard and applicable to AAC only.

Cross-platform high-quality and lossless audio, multipoint pairing, etc. are solved problems - features that even $20 white-label earbuds on Amazon are able to implement.


Commercial software support is not free. Contracting out for professional services or diverting internal developers to fix issues with open source software are also not free.


People's attention is not free. People's rights are not free. Thinking only through a money lens is not free of consequences.


> yes I know there was one iPad that could do USB 3 with one special dongle - and it couldn’t even do video out well with the dongle. The video adapter had hardware to decompress a compressed video stream and convert it.

Those are two separate things.

These iPad models had USB 3.0 over Lightning. Lightning, however, was designed to solve the 30-pin connector "alt mode" problem; USB-C recreated it.

In the original 30-pin iPod, iPhone, and iPad days, you had multiple video-out adapters to support RCA, VGA, composite, and so on. These were also _different_ across i-device models - the adapters were not backward compatible, so when a new higher-resolution dongle came out, it wouldn't work on older devices. Conversely, the complexity of supporting various hardware mappings onto the 30-pin connector meant that older dongles could get deprecated from new devices.

There weren't a lot of people who invested in video output for their i-devices, but for those who did, this was a very frustrating issue.

So for Lightning, they went to serial protocols. Rather than negotiate a hardware mode where certain pins acted like HDMI pins in pass-through, they streamed H.264 video to the dongle - the dongle then rendered it and used its own HDMI output support.

Since this was software negotiation, a newer dongle could support new video formats and higher resolutions while still supporting older devices. There were also examples of improvements pushed to more complicated dongles like the HDMI adapter via software updates. But fundamentally, the complexity of supporting a broad hardware accessory ecosystem wasn't pushed into the physical port - it could evolve over time via more complex software rather than via increasingly complicated hardware in every phone.
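
As a toy model of that negotiation (nothing below reflects Apple's actual accessory protocol - it just illustrates why software negotiation ages better than pin mappings):

    // Toy model: the dongle advertises what it can decode, and the
    // device streams the best codec both sides understand.
    struct DongleCapabilities {
        let codecs: [String]  // e.g. ["h264"]; a newer dongle might advertise ["hevc", "h264"]
        let maxHeight: Int    // e.g. 1080; a newer dongle might support 2160
    }

    func negotiateCodec(deviceCodecs: [String], dongle: DongleCapabilities) -> String? {
        // An old device simply never offers the new codec; a new device
        // falls back when it meets an old dongle. Neither combination breaks.
        deviceCodecs.first { dongle.codecs.contains($0) }
    }

An old device offering only ["h264"] still negotiates "h264" against a newer dongle - the backward compatibility story the 30-pin hardware mappings couldn't offer.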

With USB-C, we are back to guessing whether the connected accessory expects the phone to support HDMI alt mode, DisplayPort alt mode, or MHL alt mode, or to output data for a proprietary system like DisplayLink.

USB 3.0 (which is what these iPads supported) never had these alt modes. It was USB-C that became a connector for (optionally) supporting a lot of other, non-USB protocols. The lack of USB-C support is why these iPads only supported video out with the Lightning-to-HDMI adapter.

USB-C is decent, but it suffers quite a bit from the lack of strong certification. This is partly why Thunderbolt 5 has shifted to becoming a compatibility- and capability-oriented certification mark. You know, for example, that Thunderbolt 5 video will always work, because the cables have all the data pins and the devices are going to support DisplayPort alt mode.


Speaking more as a person with IETF participation experience than as a cryptographer (I enjoy watching numbers dance, but am not a choreographer):

This ( https://datatracker.ietf.org/doc/draft-ietf-tls-mlkem/ ) is a document describing how to use the ML-KEM algorithm with TLS 1.3 in an interoperable manner.

It does not preclude other post-quantum algorithms from being described for use with TLS 1.3. It also does not preclude hybrid approaches from being used with TLS 1.3.

It is, however, a document scoped so that it cannot be expanded to include either of those things. Work to define interoperable use of other algorithms, including hybrid algorithms, would be in other documents.

Once these are documented, there is no MTI (mandatory-to-implement) requirement from the IETF directly, but there could be market and regulatory pressures.

My suspicion is that this is bleed-out from a larger (and uglier) fight in the sister organization, the IRTF. There, the Crypto Forum Research Group (CFRG) has been having discussions on KEMs which have gotten significantly more heated.

A person concerned that there may be weaknesses in a post-quantum technique may want a hybrid option to provide additional security. They may then be concerned that standardizing non-hybrid options would discourage hybrid usage, since hybrid is not yet standardized and would likely be standardized later (or not at all).
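
For intuition, the hybrid approach derives the session secret from both a classical exchange and a post-quantum KEM, so the result holds as long as either one survives. A minimal sketch of that general construction follows - this is not the exact wiring of draft-ietf-tls-ecdhe-mlkem, and the ML-KEM secret is a stand-in; only Curve25519/HKDF here are real CryptoKit APIs:

    import CryptoKit
    import Foundation

    // Sketch: combine a classical X25519 secret with a post-quantum KEM
    // secret. `mlkemSharedSecret` is a placeholder for ML-KEM-768 output.
    func hybridSessionKey(ecdhSecret: SharedSecret, mlkemSharedSecret: Data) -> SymmetricKey {
        var ikm = ecdhSecret.withUnsafeBytes { Data($0) }
        ikm.append(mlkemSharedSecret)
        // Don't use concatenated secrets raw - run them through a KDF,
        // so an attacker must break *both* inputs to recover the key.
        return HKDF<SHA256>.deriveKey(
            inputKeyMaterial: SymmetricKey(data: ikm),
            info: Data("hybrid-kem-sketch".utf8),
            outputByteCount: 32
        )
    }

The actual TLS 1.3 hybrid group feeds the concatenated secrets into the TLS key schedule rather than a standalone HKDF call, but the security argument is the same: both components must fail before the derived key does.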

The pressure now with post-quantum is to create key negotiation algorithms that are not vulnerable to attack by a (still theoretical) quantum computer. This is because of the risk of potentially valuable encrypted traffic being logged now, in the hopes that it could later be decrypted by a quantum computer.

Non-negotiated encryption (e.g. just using a static AES key) is already safe, and signature algorithms can be updated much closer to the advent of viable attacks to protect transactional data.


> It is, however, a document scoped so that it cannot be expanded to include either of those things. Work to define interoperable use of other algorithms, including hybrid algorithms, would be in other documents.

FYI, the specification for hybrid ML-KEM + ECC is ahead of this document in the publication process. https://datatracker.ietf.org/doc/draft-ietf-tls-ecdhe-mlkem/


You may misunderstand how the IETF works. Participation is open. This means it is possible for people who want the work to fail - for their own reasons rather than on technical merit - to join and attempt to sabotage it.

So consensus by your definition is rarely possible given the structure of the organization itself.

This is why there are rough consensus rules, and why there are processes to proceed with dissent. That is also why you have the ability to temporarily ban people, as you would in pretty much any well-run open forum.

It is also important to note that the goal of the IETF is to create interoperable protocol standards. That means the work in question is a document describing how to apply ML-KEM to TLS in an interoperable way. It is not a discussion of whether ML-KEM is a potentially risky algorithm.

DJB regularly acts like someone who is attempting to sabotage work. It is clear here that they _are_ attempting to prevent a description of how to use ML-KEM with TLS 1.3 from being published. They regularly resort to personal attacks when they don't get their way, and make arguments that are non-technical in nature (e.g. that it is NSA sabotage, or that the chairs are corrupt agents). And this behavior is self-documented in their blog series.

DJB's behavior is why there are rules for how to address dissent. Unfortunately, after decades DJB still does not seem to realize how self-sabotaging this behavior is.


> the work in question is a document describing how to apply ML-KEM to TLS in an interoperable way. It is not a discussion of whether ML-KEM is a potentially risky algorithm.

In my experience, the average person treats a standard as an acceptable way of doing things. If ML-KEM is a bad thing to do in general, then there should not be a standard for it (because of the aforementioned treatment by the average person).

> It is clear here that they _are_ attempting to prevent a description of how to use ML-KEM with TLS 1.3 from being published.

It's unclear why trying to prevent a bad practice from being standardized is a bad thing. But wait, how do we know whether it's a good or bad practice? Well, we can examine the response to the concerns DJB raised: whether the responses satisfactorily addressed the concerns, and whether they followed the rules and procedures for resolving each of those concerns.

> They regularly resort to personal attacks when they don't get their way

This is certainly unfortunate, but six other parties upheld the concerns. DJB is allowed to be a jerk - even allowed to be banned for abusive behavior, IMO - however, the concerns he initially raised must nonetheless be satisfactorily addressed, even with him banned. Banning somebody is sometimes necessary, but it is not an acceptable means of suppressing valid concerns, especially when those concerns are also held by others who are not banned.

> DJB's behavior is why there are rules for how to address dissent.

The issue here seems to be that the bureaucracy might not be following those rules.

