> it is entirely an OS in an academic sense - in scientific literature OS == kernel, nothing else
No, the academic literature draws a distinction between the kernel and the OS as a whole. The OS is meant to provide hardware abstractions to both developers and the user. The Linux world shrugged and said 'okay, this is just the kernel for us, everyone else be damned'. In this view Linux is the complete outlier, because every other commercial OS comes with a full suite of user-mode libraries and applications.
> We gave up the headphone jack. We gave up the microSD card.
Some people might have given it up. I personally own a Sony Xperia phone, and intend to buy another Xperia next year, which will almost certainly still have both. In fact Sony is the one manufacturer that returned to a headphone jack after having removed it for a while. It might be more expensive than the competition, but this is me voting with my wallet.
By a _substantial_ margin, because the best bang-for-your-buck strategy with smartphones for a long time has been to buy used or refurbished popular flagships from the last one or two years. As much as I like what Xperias are doing with a headphone jack and an SD card slot, the used market for them is almost non-existent. Even if you somehow manage to get a good deal, it will be even more difficult to find a good case and accessories like a reliable magnetic wallet; the market just isn't there.
I myself have settled on using a Pixel with a headphone jack DAC dongle and an external hard drive.
There are some mostly reliable ones out there on the pricier end, but the catch is that they are almost exclusive to flagships. For the extra-cautious, some even have "Find My Device" compatibility baked in.
Most phones that cost less than ~300 USD still have a headphone jack and microSD slot.
I've never understood spending more than that on a phone anyway; you can't exactly use all that processing power on a phone operating system. Unfortunately some of the bad features from expensive phones have been moving down to the cheaper ones, like the destroyed screen that's missing its corners and has a hole for the camera in it for some reason.
I just bought a newer phone and was surprised to see even the ~$200 Samsungs were lacking a headphone jack. That threw them right out of contention, so I ended up getting a 2024-model Motorola (the 2025s were $50 more and reviews said they offered no meaningful performance boost).
I get it, but the quality of headphones with cords has gotten so bad that the male jacks wouldn't last more than a few months. My son has gone through an untold number of corded headphones because his school iPad is too locked down to use bluetooth ones.
You can spend a little more and get headphones with a replaceable cord, and replace the cords if they get broken. Or maybe take the time to solder new connectors onto broken cords.
Budget phones are going to have different missing features for different people - for me the problems are that budget phones are all too large, full of software bloat, and receive poor software support.
If you're so concerned about camera quality ... buy a dedicated camera.
A 32 MP+ point-and-shoot starts at about $40, though goes up from there (to several thousand dollars for top models). As a bonus, it has an expected life far exceeding that of a smartphone.
Yes because I really want to carry around two devices including a crappy phone. The latest version of iOS supports iPhones from 2019 and Apple is releasing security updates farther back than that.
Do note that, unfortunately, any future Sony devices will just be phones by other manufacturers with Sony branding. Sony stopped their first-party device manufacturing, so your mileage with the hardware might vary wildly in the future.
I've rocked pixels for a good while now, but the Xperia lineup has always been something I've really debated.
My largest concern is camera quality: obviously it is Sony, but if you wouldn't mind, could you elaborate on their camera 'stack' a bit (esp. in relation to pixel phones if you have first hand experience...).
I own an Xperia 5iii (so about four-and-a-half years old now), and I also own a Pixel 10.
The Pixel 10's camera is unequivocally better. The JPEG outputs are processed, 'Instagram-ready'. The output from the Sony camera even in JPEG mode is considerably more muted, neutral, and has less contrast. Note that this is not representative of newer Xperias' camera quality; I've heard they have improved considerably. I'm not too concerned because I hardly use my phone to take photos; I have a Nikon mirrorless for that.
Yes and their update policy really sucked compared to the competition while their price was the same or even higher. They've only fixed that recently but it was too late. This was the main reason I never got one.
Except 4G/5G does not work properly in Australia. :(
It is some carrier configuration bullshit or something like that. There may be a way to make it work, but it did not look guaranteed after reading dozens of pages on forums on the topic. I ended up returning the Sony I tried whilst I could still get a full refund.
Phones used to be exciting. Now it is just frustrating because all the good features are gone. Headphone jack, sd card, fingerprint sensor on back, unlockable bootloader.
Nothing. But I want the SD card, dual sim plus eSIM, a headphone jack, a rectangular screen with a decent aspect ratio ideal for wide-format films and scrolling. I will fully concede that Sony's software quality has taken a hit in recent years; they used to be much better in 2016 or so.
Worse quality, latency, potential to lose one (or both) earbuds, having to faff with batteries and charging and cases (and charging the charging case) when I can just... plug it in, bam, music in my ears. The knotting is a small price to pay for the improved quality and convenience in every other way.
Something I read recently which I think is interesting food for thought:
Did ditching the headphone jack increase the number of people in public who just play their music / talk on speakerphone, because now the alternative is much more complex and expensive compared to simple 3.5mm wired headset?
Before proclaiming that Bluetooth is in fact simple and cheap, consider how your situation may differ from that of the perpetrators
My own memory and current experience on this point is that it used to be far more common than it is today.
I remembered there was a South Park episode where Cartman was being a stereotypical self-absorbed person walking around with their phone on speaker. I looked it up, and that episode came out in 2013. At the time, most phones on the market had a 3.5mm jack. Yet people not using headphones/headsets was an experience common enough to be turned into a joke in the show.
I don't think there's much correlation between 3.5mm jack availability and using a phone's speaker output in public.
"Simple" as you've used it is open to interpretation. I personally held on to wired headsets longer than most of my friends and family. You know what I don't miss, now that I've preferred wireless for a few years? Untangling the cable. Accidentally catching the cable on something and having an earbud ripped out. Picking lint out of the jack. Staying conscious of the length and positioning of the cable in the context of my own movements.
Other than the BT connection process, which is only complicated if you're fortunate enough to own multiple devices and headphones/sets to connect to them, wireless can be a lot "simpler" in actual usage.
I appreciate the counterpoint. The Cartman example is a good one. Also it's probably difficult to factor out the seemingly broken post-Covid social norms
One point I'll make is simplicity comes in many forms. Wired headphones can be dirt-cheap, they don't run out of battery, and I don't think they're as prone to getting lost
Cheaper yes, but the entry level for BT is still pretty accessible. Using Google Shopping, the lowest priced match for "wired earbuds" is $1.47. The lowest priced match for "bluetooth earbuds" is $2.46. In both cases, you hit a breakpoint of "this looks like it might possibly work for more than a few seconds" around $10.
The battery point is valid. Funnily enough, the last pair of earbuds I lost was a wired one. Myself, most of my headsets are over-ear, so they're a bit large to be easily lost. The form factor likely determines the loss potential more than the presence/lack of a wire.
The risk of losing one (or both) earbuds is a real one. My ears don't tend to keep a snug grip on earbuds, so they tend to come loose after I walk a little. There is also the chance (though this might just be my own unit) that only one of the two connects to your phone.
On the other hand, the cables get tangled together. I can't walk around with them because the cable gets stuck in the swing of my arms. Connecting them to the phone after a call had already started was a piece of cake though. With bluetooth, I never have my earbuds on when I actually need them and it's too much of a pain to take them out of my bag and connect them.
Whenever it is time to replace my current earbuds, I am gonna go for a neckband instead. It has basically the best of both, imo (I am not that sensitive to audio quality mostly) and the downsides aren't large enough (I'll think of the weight as a neck workout).
Then don’t buy headphones like that. I have AirPods Pro. But I also have a pair of $50 Beat Flex that if they fall out of my ear they just go around my neck. I use them when I travel.
I bought a pair of double flange doohickies to replace the standard ones.
Most people don't need latency, and I don't really have any latency issues. I watch videos with Bluetooth headphones and they're all synchronized perfectly.
With Bluetooth I can also "just... plug it in, bam, music in my ears."
LE Audio should fix the quality and latency problems. The latency is significantly lower and the bandwidth is double that of Classic Bluetooth. There are new default codecs that are better, and there should be enough bandwidth for lossless. The other nice thing is enough bandwidth for bidirectional streams, instead of dropping to low-quality audio whenever the microphone is in use.
The current problem is that LE Audio implementations are new, with lots of headphones shipping them as beta features.
Shouldn't it be the same thing? You either have the DAC on your phone convert the digital music file to an analog signal and send it over the aux cord to the speakers in the headphones, or have the digital file sent over Bluetooth and converted by a DAC in the headphones, right? It's not like you're plugging your headphones into a record player.
> have the digital file sent over Bluetooth and converted by a DAC in the headphones, right
This is not how Bluetooth wireless audio works. PCM audio is re-encoded on-device into any one of a few Bluetooth-capable codecs that is then streamed to the client device. This is a primary cause of latency.
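To put rough numbers on that, here's a minimal sketch of the framing and buffering delay alone, using illustrative SBC-like parameters (128 PCM samples per codec frame at 44.1 kHz and an assumed sink-side jitter buffer of 40 frames; real codecs, stacks, and headsets all differ), before you even count encode/decode time or radio retransmissions:

```cpp
#include <cstdio>

int main() {
    // Illustrative numbers only -- real codecs, stacks, and headsets differ.
    const double sample_rate_hz  = 44100.0; // PCM sample rate sent over the link
    const int samples_per_frame  = 128;     // SBC-like codec frame size (assumed)
    const int frames_in_sink_buf = 40;      // assumed jitter buffer on the headphones

    const double frame_ms  = 1000.0 * samples_per_frame / sample_rate_hz;
    const double buffer_ms = frame_ms * frames_in_sink_buf;

    std::printf("one codec frame   : %.2f ms\n", frame_ms);   // ~2.9 ms
    std::printf("sink jitter buffer: %.1f ms\n", buffer_ms);  // ~116 ms latency floor
    return 0;
}
```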
Its bandwidth is too low to transmit and receive at high quality at the same time, meaning everyone calling into the Zoom call with their fancy AirPods sounds like they're calling from the other side of the moon, while my $5 plug-in earbuds sound like a damn recording studio in comparison.
One of my iPhone SE's died an untimely death because of failure of the lightning port, so I'm strongly sympathetic.
I also am a hardcore 3.5mm headphone user. Wireless headphones are garbage.
I did get my mind changed on USB-C DACs by way of inductive charging. Using a USB-C DAC and still being able to charge inductively seems at least somewhat reasonable to me.
On the newest round of phones for my wife and me I've tried to make sure we're inductively charging >90% of the time.
Need to dig deeper into inductive charging, as it seems to heat the battery more, especially if the phone is in a case. So yet another tradeoff to consider.
Good thing is that if the port goes bad it can still be charged.
I think it just adds friction (for measure, I feel audio jacks are pretty good)
So the real response is, "what's wrong with most companies to not provide the 3.5mm itself?"
It's good that Xperia is doing this though. I still have phones with a 3.5mm jack, so there isn't much to worry about, and there are a lot of new phones that do offer it; I think both of my parents' phones have a 3.5mm jack.
USB-C extension cables aren't allowed, but pass-through charging is allowed. I suspect a $7 cable from a Chinese amazon seller is not spec-compliant, but e.g. Belkin sells a spec-compliant "3.5mm Audio + USB-C Charge Adapter".
In my experience the connection is much easier to accidentally break through movement (e.g., walking) with a USB-C adapter than straight-through 3.5mm.
I really miss having a 3.5mm output on my phone...
Hidden inside of a USB-C to 3.5mm adapter is an entire DAC with a power amplifier for driving headphones. They're complex little things.
And like any other bit of active, plug-in electronics: They're not all the same.
Some of them are wonderful (Apple's adapter sounds great and doesn't cost much), and some of them are terrible.
And there's compatibility issues. The combination of an Apple headphone adapter and an Android phone produces a volume control bug that prevents you from turning it up even to the normal line-level output voltages that regular audio equipment expects.
And there's functional issues: Want to play some lossless audio in the car or low-latency audio on headphones, and charge your phone at the same time? Good luck with that! (Yeah, there's adapters that have USB C inputs for power, too. They're a mess. And I once popped one as soon as my phone negotiated a 12VDC USB PD mode instead of the 5VDC that the adapter must have been made for. (And no, wireless charging isn't a solution. It's a bandaid for the deliberately-inflicted footgun incident that brought us here to begin with.))
And it's complicated: For a "simple" audio output, we've got USB 2 with a signalling rate of 480Mbps and a power supply, when all we really want is 20Hz-20KHz analog audio with left, right, ground, and (optionally) microphone.
And then: It often doesn't work. When I plug the USB C headphone adapter I have into my car and go for a drive, it disconnects sometimes: I observe no physical change, but the device resets, the music stops, and the phone rudely presents a prompt asking me which voice assistant I'd like to use (the answer is, of course, "None" -- it's always "None", but it asks anyway). And then I get to figure out how to make it play music again, which presents either a safety issue or a time-suck issue while I stop somewhere to futz with it. (Oh, right. Did I mention that the electronics in these adapters also include support for control buttons? I guess I glossed over that.)
Forcing the use of USB C headphone adapters and their complexities represents a very Rube Goldberg-esque solution to the simple problem of audio interconnection that had already been completely solved for as long as any of us reading this here have been alive.
Except: While Rube Goldberg contraptions are usually at least entertaining, this is just inelegant and disdainful.
If you're in the low percentage running cabled headphones, you're probably also running a headphone amp (whether necessary or not), which uses more of the phone's power.
Now you need a USB to USB-plus-3.5mm splitter to keep it charged up, or an add-on battery.
I have a hot take. Modern computer graphics is very complicated, and it's best to build up fundamentals rather than diving off the deep end into Vulkan, which is really geared toward engine professionals who want to shave every last microsecond off their frame-times. Vulkan and D3D12 are great: they provide very fine-grained host-device synchronisation mechanisms that can be used to their maximum by seasoned engine programmers. At the same time, a newbie can easily get bogged down by the sheer verbosity, and don't even get me started on how annoying the initial setup boilerplate is; it can be extremely daunting for someone just starting out.
GPUs expose a completely different programming memory model, and the issue I would say is conflating computer graphics with GPU programming. The two are obviously related, don't get me wrong, but they can and do diverge quite significantly at times. This is more true recently with the push towards GPGPU, where GPUs now combine several different coprocessors beyond just the shader cores, and can be programmed with something like a dozen different APIs.
I would instead suggest:
1) Implement a CPU rasteriser, with just two stages: a primitive assembler, and a rasteriser.
2) Implement a CPU ray tracer.
These can be extended in many, many ways that will keep you sufficiently occupied trying to maximise performance and features. In fact to even achieve some basic correctness will require quite a degree of complexity: the primitive assembler will of course need frustum- and back-face culling (and these will mean re-triangulating some primitives). The rasteriser will need z-buffering. The ray-tracer will need lighting, shadow, and camera intersection algorithms for different primitives, accounting for floating-point divergence; spheres, planes, and triangles can all be individually optimised.
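For a feel of what the rasteriser half boils down to, here is a minimal, self-contained sketch: one hard-coded screen-space triangle, an edge-function coverage test, and a z-buffer. Primitive assembly, clipping, culling, and shading are all left out, and the names and constants are mine:

```cpp
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Signed area of the parallelogram spanned by (b - a) and (c - a);
// non-negative for all three edges means the point is inside the triangle.
static float edge(const Vec3& a, const Vec3& b, const Vec3& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main() {
    const int W = 64, H = 64;
    std::vector<float> depth(W * H, 1.0f);      // z-buffer, far plane at 1
    std::vector<unsigned char> color(W * H, 0); // "framebuffer"

    // One triangle already transformed to screen space (x, y in pixels, z in [0, 1]).
    Vec3 v0{10, 10, 0.5f}, v1{55, 20, 0.5f}, v2{30, 58, 0.5f};
    const float area = edge(v0, v1, v2);

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec3 p{x + 0.5f, y + 0.5f, 0.0f};
            float w0 = edge(v1, v2, p), w1 = edge(v2, v0, p), w2 = edge(v0, v1, p);
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;    // pixel centre outside
            w0 /= area; w1 /= area; w2 /= area;          // barycentric weights
            float z = w0 * v0.z + w1 * v1.z + w2 * v2.z; // interpolated depth
            if (z < depth[y * W + x]) {                  // z-buffer test
                depth[y * W + x] = z;
                color[y * W + x] = 255;
            }
        }
    }

    // ASCII dump so the output is visible without any image library.
    for (int y = 0; y < H; y += 2) {
        for (int x = 0; x < W; ++x) std::putchar(color[y * W + x] ? '#' : '.');
        std::putchar('\n');
    }
    return 0;
}
```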
Try adding various anti-aliasing algorithms to the rasteriser. Add shading: begin with flat, then extend to per-vertex, then per-fragment. Try adding a tessellator where the level of detail is controlled by camera distance. Add in early discard instead of the usual z-buffering.
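The simplest of those anti-aliasing options is plain supersampling: render at twice the resolution and box-filter down. A tiny sketch of just the downsample step (function and buffer names are mine):

```cpp
#include <cstdio>
#include <cstdint>
#include <vector>

// Average each 2x2 block of a (2w x 2h) single-channel image down to (w x h).
std::vector<uint8_t> downsample2x(const std::vector<uint8_t>& hi, int w, int h) {
    std::vector<uint8_t> lo(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int sum = hi[(2 * y)     * (2 * w) + 2 * x]
                    + hi[(2 * y)     * (2 * w) + 2 * x + 1]
                    + hi[(2 * y + 1) * (2 * w) + 2 * x]
                    + hi[(2 * y + 1) * (2 * w) + 2 * x + 1];
            lo[y * w + x] = static_cast<uint8_t>(sum / 4); // box filter
        }
    return lo;
}

int main() {
    // A 4x4 "hi-res" render with a hard edge; the 2x2 result gets intermediate
    // values along the edge instead of a jaggy.
    std::vector<uint8_t> hi = {
          0,   0, 255, 255,
          0, 255, 255, 255,
          0,   0,   0, 255,
          0,   0, 255, 255,
    };
    auto lo = downsample2x(hi, 2, 2);
    for (int y = 0; y < 2; ++y)
        std::printf("%4d %4d\n", lo[y * 2 + 0], lo[y * 2 + 1]);
    return 0;
}
```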
To the basic Whitted CPU ray tracer, add BRDFs; add microfacet theory, add subsurface scattering, caustics, photon mapping/light transport, and work towards a general global illumination implementation. Add denoising algorithms. And of course, implement and use acceleration data structures for faster intersection lookups; there are many.
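And for the ray-tracer half, a minimal sketch of the innermost piece everything above builds on: a ray-sphere intersection via the quadratic formula, plus a single Lambertian term, rendered as ASCII. All constants and names are illustrative only:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec { double x, y, z; };
static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec scale(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec normalize(Vec a) { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

// Solve |o + t*d - c|^2 = r^2 for the nearest t above a small epsilon, if any.
static std::optional<double> hitSphere(Vec o, Vec d, Vec c, double r) {
    Vec oc = sub(o, c);
    double a = dot(d, d), b = 2.0 * dot(oc, d), k = dot(oc, oc) - r * r;
    double disc = b * b - 4.0 * a * k;
    if (disc < 0.0) return std::nullopt;
    double t = (-b - std::sqrt(disc)) / (2.0 * a);   // nearer root first
    if (t > 1e-6) return t;
    t = (-b + std::sqrt(disc)) / (2.0 * a);
    return t > 1e-6 ? std::optional<double>(t) : std::nullopt;
}

int main() {
    Vec eye{0, 0, 0}, center{0, 0, -3};
    Vec lightDir = normalize({1, 1, 1});
    const int W = 60, H = 30;
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Camera ray through the pixel on an image plane at z = -1.
            Vec dir = normalize({(x + 0.5) / W * 2 - 1, 1 - (y + 0.5) / H * 2, -1});
            char shade = ' ';
            if (auto t = hitSphere(eye, dir, center, 1.0)) {
                Vec p = {dir.x * *t, dir.y * *t, dir.z * *t};      // eye is at the origin
                Vec n = normalize(sub(p, center));
                double lambert = std::max(0.0, dot(n, lightDir));  // diffuse term
                shade = lambert > 0.66 ? '@' : lambert > 0.33 ? '+' : '.';
            }
            std::putchar(shade);
        }
        std::putchar('\n');
    }
    return 0;
}
```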
Working on all of these will frankly give you a more detailed and intimate understanding of how GPUs work and why they have been developed a certain way than programming with something like Vulkan and spending your time filling in struct after struct.
After this, feel free to explore any one of the two more 'basic' graphics APIs: OpenGL 4.6, or D3D11. shadertoy.com and shaderacademy.com are great resources to understand fragment shaders. There are again several widespread shader languages, though most of the industry uses HLSL. GLSL can be simpler, but HLSL is definitely more flexible.
At this point, explore more complicated scenarios: deferred rendering, pre- and post-processing for things like ambient occlusion, mirrors, temporal anti-aliasing, render-to-texture for lighting and shadows, etc. This is video-game focused; you could go another direction by exploring 2D UIs, text rendering, compositing, and more.
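To make 'deferred rendering' concrete, here is a rough sketch of the split (plain C++ rather than shader code, and the struct/function names are made up, not any particular engine's API): the geometry pass fills a G-buffer with surface attributes, and the lighting pass then shades each pixel once against every light, independent of how many triangles produced it:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3  normalize(Vec3 a) { return mul(a, 1.0f / std::sqrt(dot(a, a))); }

struct GBufferTexel {      // what the geometry pass leaves behind, per pixel
    Vec3 position;         // world-space position
    Vec3 normal;           // world-space normal
    Vec3 albedo;           // base colour
};

struct PointLight { Vec3 position; Vec3 color; };

// Lighting pass: one loop over pixels, one inner loop over lights.
std::vector<Vec3> lightingPass(const std::vector<GBufferTexel>& gbuffer,
                               const std::vector<PointLight>& lights) {
    std::vector<Vec3> out(gbuffer.size(), Vec3{0, 0, 0});
    for (size_t i = 0; i < gbuffer.size(); ++i) {
        const GBufferTexel& g = gbuffer[i];
        for (const PointLight& l : lights) {
            Vec3  toLight = sub(l.position, g.position);
            float dist2   = dot(toLight, toLight);
            float ndotl   = std::max(0.0f, dot(g.normal, normalize(toLight)));
            float atten   = 1.0f / (1.0f + dist2);   // simple distance falloff
            out[i].x += g.albedo.x * l.color.x * ndotl * atten;
            out[i].y += g.albedo.y * l.color.y * ndotl * atten;
            out[i].z += g.albedo.z * l.color.z * ndotl * atten;
        }
    }
    return out;
}

int main() {
    // One up-facing pixel lit by one white light directly above it.
    std::vector<GBufferTexel> gbuffer = {{{0, 0, 0}, {0, 1, 0}, {1.0f, 0.5f, 0.5f}}};
    std::vector<PointLight>   lights  = {{{0, 2, 0}, {1, 1, 1}}};
    auto lit = lightingPass(gbuffer, lights);
    std::printf("%.3f %.3f %.3f\n", lit[0].x, lit[0].y, lit[0].z);
    return 0;
}
```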
As for why I recommend starting with CPUs only to end up back at GPUs again, one may ask: 'hey, who uses CPUs any more for graphics?' Let me answer: WARP[1] and LLVMpipe[2] are both production-quality software rasterisers, frequently loaded during remote desktop sessions. In fact 'rasteriser' is an understatement: they expose full-fledged software implementations of D3D10/11 and OpenGL/Vulkan devices respectively. And naturally, most film renderers still run on the CPU, due to their improved floating-point precision; films can't really get away with the ephemeral smudging of video games. Also, CPU cores are quite cheap nowadays, so it's not unusual to see a render farm of a million-plus cores chewing away at a complex Pixar or DreamWorks frame.
4 years ago I tackled exactly those courses (raytracer[0] first, then CPU rasterizer[1]) to learn the basics. And then, yes, I picked up a lib that's a thin wrapper around OpenGL (macroquad) and learned the basics of shaders.
So far this has been enough to build my prototype of a multiplayer Noita-like, with radiance-cascades-powered lighting. Still haven't learned Vulkan or WebGPU properly, though am now considering porting my game to the latter to get some modern niceties.
Last time I tried, modules were a spuriously supported mess. I'll give them another try once they have ironclad support in cmake, gcc, clang and Visual Studio.
I would instead suggest two things for power users: installing Windows using autounattend.xml[1], and secondly visiting the mass graves to turn your Windows install into Enterprise (or, if you can wrangle it, get an Education licence from your academic institution/alma mater), which completely gets rid of all consumer-oriented stuff.
To be honest, I don't mind the Windows games. In fact I believe the ones shipped with XP, Vista, and 7 were top-notch. What I mind is games with annoying advertisements in them. I mind when my Weather program is not native and is a glorified web app, also ridden with advertisements.
I'm typing this on my company azure ad integrated windows 11. The system info says it's windows 11 enterprise 25h2.
My start menu still has multiple random xbox crap in there, game bar (what even is that?!), "game mode", "solitaire and casual games". It shows random ads in the weather app. It invites me to do more with a microsoft account, even though the computer is fully azure ad joined and my windows session is an azure ad account with some expensive office365 licence attached.
Before reinstalling the other day for unrelated reasons, I had actually tried to add that account. Turns out it doesn't work with a "work or school" account, it requires the personal one, but it doesn't say it clearly, only that "something went wrong".
I honestly don't see any difference when compared to my personal windows install I use for the occasional game and Lightroom / Photoshop.
Side project(s): Grokking Windows development from the top of the stack to the kernel; everything from Win32, WinUI, WPF, COM, to user- and kernel-mode driver development. It's fun to write drivers in modern C++. Also, massively procrastinated, Vulkan/D3D12 cross-platform game engine written in C++23/26, work-in-progress.
Full time work: GPU driver development and integration for a smartphone series. It's fun to see how the sauce is made.
I use NVIDIA hardware, which objectively has superior maximum performance compared to AMD graphics cards. I use HDR high pixel density monitors as well. I like laptops with decent battery life and decent touch pads.
Windows simply offers a cleaner, more well put-together experience when it comes to these edge cases. I have many tiny nitpicks about how Linux behaves, and every time I go back to my Windows Enterprise install it is a breath of fresh air that my 170% scaling and HDR just work. No finagling with a million different environment variables or CLI options. If a program hasn't opted into resolution-independent scaling then I just disable it, and somehow the vector elements are still scaled correctly, leaving only the raster elements blurry. Nowadays laptop touch pads feel like they're on Macs, which is high praise and a sea change from where Windows touch pads were about a decade ago.
If you strip away all the AI nonsense, Windows is a genuinely decent platform for getting anything done. Seriously, MS Office blows everything else out of the water. I still go back to Word, Excel, and PowerPoint when I want to do productivity. Adobe suite, pro audio tools, Da Vinci Resolve, etc, they just... work. If you haven't programmed in Visual Studio or used WinDbg then you have not used a serious, high-end debugger. GDB and perf are not even in the same league.
As a Windows power user, I want to go back to the Windows 2000 GUI shell, but with all the modernity of Windows 11's kernel and user-space libraries and drivers. I wish Enterprise was the default release, not the annoying Home versions. And I really, really wish Windows was open-sourced. Not just the kernel, but the user mode as well, because the user mode is where a lot of the juice is, and is what makes Windows Windows.