It had the potential to be a great replacement if it just worked™ like SMS/MMS (well, MMS was also quite fickle back in the day). Given that it's so brittle across devices even on the same OS, with little means of troubleshooting for end users and even less for the non-tech-savvy, it's kind of dead in the water.
I understand what RCS is and I don't understand why it matters.
Everything about the concept of a phone number is confusing to me. It's a string of digits that, if someone guesses it, lets them trigger the most intrusive notification your phone has (ringing), at any time, whether you know them or not. Better yet, depending on your notification and MMS app settings, they might be able to make a dick pic appear on your lock screen on a whim. Big spammers of this sort seem to get flagged by carriers and apps pretty quickly, but a more targeted one-off is still easy.
As opposed to TCP/IP-based chat apps that basically require a bilateral, human-initiated handshake before someone can message you...
I do receive occasional spam on WhatsApp, Telegram and Signal, and that's besides the operator spam (try our shiny new AI feature!).
Tying an account to a phone number is a privacy nightmare.
I guess Facebook/Meta does it for easier social-graph extraction/profiling, while Signal tried to hand off verification to prevent spam.
But for the sake of this argument, we may just assume all of them are evil.
It literally is Google-only. The RCS backend theoretically could be provided by carriers, but they've all chosen not to do that, so the actual service is provided by Google. No matter what the specification says, in reality it's a Google service running on Google's servers.
To put it another way, Google can't kill SMS short of literally removing the app from Android because it's not their infrastructure, but if they shut off their RCS servers tomorrow, it would be dead for good. That's a Google-only service.
It's sad to see so many people blinded by this. The current situation with RCS is just that Google saw how Apple disguised iMessage as SMS and wanted to do the same. RCS is merely a vehicle for Google.
They could just layer their own chat platform on top of Google Messages, but we all saw how Google's IM business went: Chat, Hangouts, Allo, Meet, etc. So they muddied the waters so thoroughly (down to the carrier level) that it looks like it's Apple's issue for not adopting RCS. And people actually fall for it.
Nobody wanted RCS. Even carriers don't want to maintain RCS; they just use Jibe. And that's exactly what Google wanted. My RCS communication with friends doesn't even show up in my carrier's usage. How is that any different from iMessage...
You know who chose to self-host their own RCS servers? Yes, Chinese carriers! They call it 5G Message. A new ad-delivery channel for businesses, hooray! Instead of plain text and a link, your campaigns can now even have MENUs inside! I can send SMS to a Chinese number, I can send iMessage to a Chinese number, but I can't send RCS. Truly a "Universal" profile.
I agree with all of this except for the claim that "Google wanted this". I think Google is as annoyed with this situation as everyone else. They would've preferred to have their own iMessage alternative, but they launched a dozen which all failed, so they went "Well, we can't make our own that people want to use, so let's get the carriers to make an upgraded version of SMS". And then the carriers didn't want to do that but the "it's decentralized!" message stuck with users and even a few governments, so now RCS is the worst of all worlds: it's a de facto Google service, but with a janky, half-baked decentralized protocol, where Google has limited capability to improve it compared to a native Google chat app.
Right, and does that use Apple's servers? This is a rhetorical question, we both know it goes through Google, both literally and figuratively. Google effectively controls RCS - if Apple just implemented RCS 'per spec', it would not work. So, there is no spec, it's as if it doesn't exist. That's how that works.
It really isn't. SMS did not support adding random mobile numbers to a group chat and blasting them with spam. Someone needs to either fix RCS properly for current day use-cases or it just needs to go away.
It's Google's way to openwash their new chat app into a "standard" where 100% of the data runs through their servers in the backend for every carrier they care about.
Yeah, I took a look at it: Google added the encryption extensions a full two years before the GSMA put them into the standard, so it does feel like their new chat app. Not to mention that RCS has been around since 2007, and everyone only started talking about it when Google started talking about it a couple of years ago.
No one gave a crap about RCS and no one was supporting it until Google decided that they needed a new chat app because they hadn't made everyone switch in a while.
I'm sorry, but Linux gaming absolutely does not "support everything from 90s to cutting edge modern games without hiccups".
I'm sure for some users it's acceptable, solid even, but I know several people, including myself, that keep hitting edge cases and invisible walls when on Windows these games "just work". And no, it's not about kernel anti-cheats or any other DRM.
Agreed and it's frustrating that people don't admit this.
I recently started dual-booting Linux again and tried both Arch and CachyOS, the former with Hyprland, the latter with GNOME, just to see how well games run. I knew going in that tiling window managers don't behave well with games, and that was indeed the case. With GNOME, even some native games made by Valve had terrible performance issues that I don't have on Windows. There are also cases, and I wouldn't even describe them as edge cases, where you have to tinker to get things working properly.
I have a very basic dual monitor setup, but yesterday I spent an hour trying to fix a problem where my cursor would escape the game's window into the second monitor. The obvious solutions (gamescope) didn't work for some reason. Did I end up fixing it? Yes. But that's only because I know my way around Linux. That's an hour I'm never getting back.
I'm not making an argument for Windows, I very much dislike using it but Linux folks need to accept reality. A reality which isn't fair, but reality nonetheless. That's when you start to make progress. (Which, to be fair, they have. Tremendously so. But there's still a long road ahead!)
I use i3wm and I have this issue with the mouse escaping in CS2. I thought about using gamescope but never did. You mention you found a solution, so would you be kind enough to share it?
That would definitely save me part of that hour you lost :)
But honestly, I'd trade that hour on linux a thousand times to not have to close another notification from Windows about this amazing new game they have for me to install. And I don't even have Windows 11.
Linux has quirks, of course, but every OS has them. People like to dismiss quirks on Windows because they're used to it, but a lot of the time they're worse than Linux's quirks.
I use CrossOver and/or Lutris on Linux to run most of my 90s Windows games, as it's a complete pain in the ass to get them working under Windows 11.
Neither does Windows, tbh. You're not getting most early-2000s, let alone 90s, games working on W11 without a lot of time and effort having gone into getting them to work. E.g., try running the original (not GOG) Vampire: The Masquerade Bloodlines, or Black & White, without the community patches. Running both in original form is feasible on Linux, but straight up not possible on W11 without patches.
I've had a pretty opposite experience. I was able to get an old adventure game (Titanic: Adventure out of Time) working just fine on Wine and it refuses to run on modern Windows.
> I'm sorry, but Linux gaming absolutely does not "support everything from 90s to cutting edge modern games without hiccups"
Neither does Windows. W11 (or was it W10?) famously broke a bunch of old games. Running Windows games from the 90s is easier on Linux than on Windows at this point.
That's really nice, but that still doesn't make Linux the better option, or even "easier", when PCGW has everything covered for Windows. Most Windows issues are solved by just slapping in dgVoodoo or nGlide anyway, while solving a Linux problem might be anything from picking a specific (arcanely divined) Proton version to elaborate hacks and patches.
Well, I guess you're married to Windows if those are your requirements. Proton runs most games these days[1] (but not all). Apparently, older Windows apps/games run better on Proton/Wine than on Windows (better citation needed)[2].
No VM solution I know of supports 3D-accelerated graphics. VMware Workstation used to, but they removed it years ago because it was a security risk (direct access to 3rd-party drivers on the host).
VMs are useless for most gaming.
For games up to around the late 90s, and if you have a real beast of a machine, full emulation such as with PCem is the best option.
It doesn't. Case in point: my spare late-00s laptop running Mint and early-00s / late-90s games. Some (Age of Wonders 1) don't work at all under Wine/Proton. Others (Age of Wonders SM, DOSBox games, Majesty) technically work but keep hitting snags: MIDI just flat out not working, display resolution being read and set incorrectly, visual artifacts. Everything tested worked perfectly fine under Win7 and Win10.
Aight, so when using Wine, AoW1 just instantly fails silently on launch, no error message to see. When using Proton it technically works (clicking randomly, I launched the tutorial, judging by the sounds), but the screen is black the whole time, and shutting it down with Alt-F4 throws an error:
Exception EWin32Error in module VCL30.dpl at 00010E4F
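For anyone hitting the same silent exit, one generic way to get more out of Wine than a blank failure is to turn on a couple of its debug channels. This is just a sketch, not specific to AoW1; the exe name and log path are placeholders:

```shell
# Enable Wine's exception (+seh) and DLL-loading (+loaddll) debug
# channels and capture stderr to a log file. "AoW.exe" is a placeholder.
WINEDEBUG=+seh,+loaddll wine AoW.exe 2> aow.log

# The faulting module usually shows up near the end of the log:
tail -n 40 aow.log
```

The `+loaddll` channel often reveals a missing or mis-resolved DLL right before the crash, which is a common cause of silent exits with late-90s titles.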
There was a comment a few weeks ago, I forget the topic, maybe it was the new M-series release or something, talking about how freaking fast these things are. And the comment pointed out how locked down everything is and how most of that power is pretty useless. I mean, sure, on-device "AI" and faster apps... OK, I guess. I'm not the target demographic for these things anyway, so my opinions are whatever.
But really, imagine how much power these things have, and if you could actually run a free (as in freedom, in the GNU sense) OS on them and really get access to all that power in a handheld device. If only.
I have an M1, which is like N times faster than the laptop I'm writing this on. Yet it collects dust, because I'd rather keep using this old dinosaur: that M1 MacBook is a locked-down, very fast, shiny Ferrari, but I just want a Honda Civic I can do whatever I want with.
In practice, none of the free OSes are ready for 21st century, battery-powered, energy-saving devices, especially of the kind Apple makes.
I'm pretty sure battery performance would drop significantly if root was too easy to achieve. The temptation to run "that one more background service" would be far too much for most apps, both free and otherwise.
To get good battery perf out of a device, you need to be extremely good at saying "no", even if that "no" comes at the expense of user freedom and features. Free software is usually extremely bad at this by design, although there are exceptions (Graphene OS comes to mind).
On Apple devices, core system services are written by Apple itself. That puts pressure on the software development side to care about battery perf, as that is what users want (and what increases sales). If software is written by 3rd parties with their own business goals unrelated to device sales, I'm afraid "featuritis" and lower development costs would win out over efficiency, as it usually does in such circumstances.
I had the opposite experience going from a stock OnePlus 8T to LineageOS. Having root means being able to reduce the number of apps and wakeups; no Google Play Services was the key. This was a while ago, but I went from 1-2 days of battery life to about 4-5 days. This is with light use; screen-on time was equally draining with both setups.
I would assume that an iPhone has similar amounts of unwanted background apps and would also be able to gain battery life instead of losing it if rooted. Obviously if you install spyware, you lose a lot of battery life. Funnily enough, I remember that a few years ago, people were surprised to find that uninstalling facebook increased battery life because it behaved much like spyware.
> In practice, none of the free OSes are ready for 21st century, battery-powered, energy-saving devices, especially of the kind Apple makes.
Well, except Android :P
My phone runs a build of AOSP that I compiled myself. I can go change the source code to do whatever I want (and I do). It's pretty cool that that's possible, IMO. To be fair, the drivers are closed-source.
Reading this comment, one would think Apple devices are very power-efficient at the cost of running little in the background. In my experience, iOS has terrible battery life in the default mode (background app refresh enabled), and in general apps struggle to keep their state in the background, which is something many people complain about on the internet. So, the worst of both worlds.
To get good battery life out of a device, having complete software and hardware integration is key. That's the PC blessing and curse, having to support all kinds of different CPUs, GPUs, chipsets, RAM, etc from many different places.
When you just have to focus on a handful of hardware platforms and when you own the hardware and software, this becomes much, much easier.
Don't mix up IoT devices running a single app that does one thing with user devices, where there's a zoo of applications written by third parties. It's not that free software such as embedded Linux is incapable of being low-power; as the OP correctly pointed out, it's about managing and limiting what user-space applications can do.
> I'm pretty sure battery performance would drop significantly if root was too easy to achieve.
No offense, but this is one of the most absurd things I have ever read in a Hacker News discussion.
I bet if I could get root on iOS I would get even better battery life as I kill off services related to iCloud and other background processes I don’t want running.
> To get good battery perf out of a device, you need to be extremely good at saying "no", even if that "no" comes at the expense of user freedom and features.
There is zero evidence that this is the case. In fact saying “no” to root allows more services and things running on the device than I may want.
> But really, imagine how much power these things have and if you could actually run a free (as in freedom, in the GNU sense) OS on them and really get access to all that power in a handheld device. If only.
Skipping the "handheld" bit of this just for a second. You can run an (almost entirely) open stack on your hardware, and do so on an i9/9800X3D with 256GB RAM, 5080, and MultiTB of NVMe storage.
But it doesn't really matter for 95% of users, because the hardware is already way faster than they need, and the bottlenecks are on the server side and in shitty software architecture. I have an i9 with 128GB RAM for work, and Excel still takes 30+ seconds to load, Teams manages to grind the entire thing to a halt on startup, Slack uses enough memory to power a spaceship... Running those apps on my desktop is pretty much the same experience as running them on my 10-year-old MacBook.
Something seems to be funny with your computer's setup. On my feeble i5 laptop with 16GB, Excel starts in about 3 seconds to the point where I can start doing stuff.
If it's a corporate device, it's usually some anti-virus abomination (or other security-related software) that steals 90% of the resources.
> slack uses enough memory to power a spaceship...
Which spaceship, though? Not sure a spaceship is the model you're looking for, as all of the ones I'm familiar with have had a very locked-down, limited amount of memory. Apollo had something like 4KB of RAM. The Space Shuttle had about 1MB.
Yes, but it seems you have a misconception of the computers we've used in our spaceships. Most people are not familiar with how little compute was involved in our spacecraft.
Yes, pretty much everyone on this forum is aware that any Electron app is going to use way more memory than actually necessary as a trade off for developing in that ecosystem.
In efforts to save the punchline - I would move to change 'a spaceship' to 'interstellar jump calculations' but I fear the actual ram required would also be small.
Sure, iOS is certainly restrictive, fully locked-down, app store only etc etc, and I'd love a full-fat firefox with its plugin system available on my phone. But what are you doing on a non-Mac laptop that you can't do on an M1 mac?
I'm a big fan of linux and have used it as a main machine for many years, but use an M4 macbook as my daily driver at the moment (everyone else I work with does too, it's just easier). I haven't felt limited at all. I can build and install whatever I like, I have brew for my tooling needs...
Yeah I don't see it with Mac. Unless you're actually needing linux and dockerisation won't cut the mustard I guess.
> If you're a Linux sysadmin type, it's nice to stay in the same environment as your vms, kubernetes, docker/podman containers, etc.
I help sysadmin a few hundred servers, and given the choice I went with a MacBook because Terminal and SSH was good enough to admin stuff. MacOS is also pretty good with the business-y apps I have to deal with at times.
A colleague went with a x86 laptop and installed Ubuntu on it, and has regular issues with audio (Google Meeting, Zoom, etc), screen sharing (seems to be Wayland), etc.
At a previous job I had a Linux workstation under my desk and a Windows laptop, but with hybrid/remote work I 'combined the two' into an Apple laptop.
Well, I can't really put Linux on most Macs. That's a barrier to me.
Apple doesn't want my money, because Apple doesn't want to sell me a laptop. Apple wants to sell me a curated experience with multiple components in their ecosystem.
Just my opinion here, after ~4 years of using it at work and daily driving Linux for personal use, including development, for a decade:
- The user interface and UX are pretty and all[1], but they don't quite work the way I'd like, and I can't really do much beyond a few limited "hacks". Switching workspaces has a horrible, annoying animation I can't turn off. All of an application's windows are grouped together, and some actions cause all of them to jump to the top. Top-level shortcuts are limited, and I can't do the same things I can on Linux. E.g., I bind Super+Enter to open a new terminal window; on macOS I can kinda get a janky version of that, but due to how the window manager works, it's not as streamlined as on Linux.
- The whole notarization stuff and signing - I mean okay, security, great. But it's annoying and you have to pay Apple like $100(?) a year just for the privilege of developing software for their platform. When I did desktop app dev on MacOS, I had to do `xattr com.apple.quarantine` commands to turn off the security nonsense that prevented me from running our own app I or my coworkers wanted to test locally.
- I have a list of utilities/apps I need to install on a new MacOS machine just to get it to partially behave the way I want. Ideally MacOS should let me customize it directly with the necessary options so these extra apps aren't necessary. Nothing I'm asking is all that complicated - Linux environments provide it more or less by default with a few setting tweaks, even Windows behaves closer to what I want and I'm no fan of Windows.
- Recently I noticed macOS was using a bunch of CPU while idling. I traced it down to some background indexing that was running constantly. I had to look up esoteric command-line commands to stop it, which didn't work. I ended up disabling Spotlight almost completely to stop it from using my CPU every time I stepped away for a few minutes.
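For reference, the quarantine and Spotlight workarounds mentioned above boil down to a couple of one-liners. The app path here is a placeholder, and `mdutil` needs sudo:

```shell
# Remove the quarantine flag Gatekeeper puts on downloaded apps.
# "/Applications/OurApp.app" is a placeholder path.
xattr -d com.apple.quarantine /Applications/OurApp.app

# Turn off Spotlight indexing on all volumes, then check the status:
sudo mdutil -a -i off
mdutil -s /
```

Whether killing indexing this way sticks across OS updates is another question; in my experience it sometimes has to be re-applied.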
Annoying stuff like this really puts me off of MacOS. Like I'm being forced to conform to their way of thinking and using a device. I'm an adult, let me decide for myself.
tl;dr: I just like Linux. It works, it's slick, I can turn on/off and add/remove whatever I want. I'm not restricted to what some company thinks my workflow should look like.
[1]: I'm leaving out their "glass UI" blunder... what a horribly silly thing that is. Plenty to be said about that and others already have, so I won't repeat it here.
OK, so this seems like a list of gripes about MacOS.
It's absolutely fine to have personal preferences on UX, customisability etc. This is why I swore off GNOME at the Gnome 3 transition and have never looked back, for example. If it doesn't work for you it doesn't work for you.
But it doesn't really support the assertion that you can't use the power of an M1 because of "how locked down everything is and most of that power is pretty useless".
Again, not trying to say "Thou shalt love MacOS!", but more that I don't think your points there really reflect something so locked down as to be useless. Just something with a UI you don't get along with.
Honestly, I'm tired and didn't expect this thread to blow up like this.
People can use whatever they want. They're adults. I don't wanna debate. I just shared my random opinions.
If I had the choice, since I have a free MacBook lying around right now, I'd slap Linux on it and be happy. Unfortunately, it doesn't look like Asahi Linux is quite ready yet for me to do so; a few things are still missing. I ran Linux on an Intel MacBook (which I also didn't purchase, it was given to me) all through university and I was a happy camper.
That being said, would I buy a Mac voluntarily - nope. I'd rather buy a Thinkpad, install Linux, and I'm set for a decade honestly.
> But really, imagine how much power these things have and if you could actually run a free (as in freedom, in the GNU sense) OS on them and really get access to all that power in a handheld device. If only.
Could you elaborate? What specifically would you do? Because I'm finding it hard to imagine what I'd do with an "open" iPhone that I can't do now, but it's extremely easy to imagine all the horrific security risks that would emerge in what today is most people's primary computing device, storing data about literally their entire lives.
My usage of "handheld" was vague. I meant any portable device (laptops, but also including phones/tablets).
If you're finding it hard to imagine what you can do with a device that _does not_ restrict what you can do with it, then you're likely fine in the Apple ecosystem, and that's fair and okay. Some people aren't; you'll just have to take my word for it. I don't wanna write an essay here, and you're probably not interested in reading all that.
Security risk is a common one that comes up. Google used that to justify locking down sideloading recently. Let me take the risk. I bought this device, I should be allowed to make adult decisions right? I'm not downloading stuff off Limewire or a shady website. I'm downloading stuff off of Linux distro repos or F-Droid.
There's a lot more to be said about all this. Including the amount of e-waste created because a device is too old to be supported by manufacturers, yet people run decade(s) old laptops/desktops using free OSs because they can.
Just my 1AM rambling thoughts. Hope some of it makes some sense.
Not OP, but here are just a few things I do currently on my Android (phones and tablets):
* Use (true) Firefox w/ extensions or other browsers
* Sideload apps that aren't available in the store (this is increasingly common with open source projects that don't want the headache of dealing with app stores)
* Install my own apps (which I increasingly vibe-code since I'm the only user) and not have to deal with paying Apple or reinstalling every few days or weeks or whatever
* Write bash and ruby scripts to automate things on my device which often require interacting with system APIs (tmux is my platform for this on Android currently)
* Pin versions of apps that have enshittified, sold out to gross companies that harvest data, or switched to subscription models, by copying the APK and reinstalling it on new devices
* Install alternate/experimental graphical shells that are frequently innovative and interesting (though rarely useful in the long-term, but it's still fun)
* Option to use other ROMs such as Graphene OS
* Capture packets and proxy traffic to see what my device is doing (this has gotten pretty hard on Android now, but still something I want to do)
* Have an on-device fine-grained firewall to tightly control which apps are allowed network access
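As a concrete flavour of the scripting point above, and assuming the environment meant is Termux with its termux-api add-on (the hostname and paths below are hypothetical), a tiny automation script might look like:

```shell
#!/data/data/com.termux/files/usr/bin/bash
# Hypothetical Termux sketch: back up photos to a home server,
# but only when the battery isn't nearly dead. Requires the
# termux-api package plus the Termux:API companion app.
pct=$(termux-battery-status | jq -r .percentage)
if [ "$pct" -gt 20 ]; then
    # "homeserver" and the paths are placeholders
    rsync -a ~/storage/dcim/ user@homeserver:/backups/phone/
    termux-notification --title "backup" --content "rsync finished"
fi
```

Nothing fancy, but it's exactly the class of thing that has no equivalent on stock iOS, since third-party code can't touch the real filesystem or run persistently like this.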
There are definitely other things I can't think of at the moment, but I'm not sure why you're being so hostile to GP. Saying that iOS devices are locked down and can't do a lot of stuff doesn't seem like a very controversial opinion, especially on HN.
> Use (true) Firefox w/ extensions or other browsers
No longer true as of this year.
> tmux
typo?
I agree with you about side loading. Apple does not. I wonder if regulations can eventually force their hand.
Some of your other points (scripting, packet sniffing, general shell access and command-line tools) are just done differently, and you'd just need new tools of the trade if you actually wanted to do it. Also, a bunch of the things you've mentioned require unlocking the Android bootloader and obtaining root privileges. You can do that to a large extent on iOS (jailbreaking); Apple is just more competent at shutting it out than other companies.
Thanks for writing it up. I agree with all your points. I stopped myself from replying further to the other commenters - they don't seem to be interested in an actual meaningful calm discussion.
Running goddamn Emacs for one. Running the software I need for work like Python with a full suite of packages and Wolfram Mathematica. Remapping freaking keys and their behaviour. The possibilities are endless!
Nothing, it’s never anything real and just some fantasy of what they could have if someone else put in an incredible amount of work to achieve something nebulous they got the impression of from a sci-fi book.
They want a cyberdeck, except good and useful and on Apple hardware.
I often find myself wondering why these people aren’t happily using some Android rom and are instead using an iPhone.
Run a web server exposed through a Cloudflare Tunnel. Write code in one program, compile it in another using a shared filesystem. Write mods and extensions for programs which expose an API or just patch their files if you can figure out how to reverse them. Run programs like ffmpeg or yt-dlp directly on a CLI.
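The first of those is doable today with cloudflared's quick tunnels; a minimal sketch, assuming `cloudflared` is installed and you just want to expose a throwaway local server:

```shell
# Serve the current directory on localhost:8080 in the background...
python3 -m http.server 8080 &

# ...then expose it via a Cloudflare quick tunnel. This prints a
# temporary *.trycloudflare.com URL pointing at the local server.
cloudflared tunnel --url http://localhost:8080
```

Trivial on any desktop OS; the point is that on a stock iPhone there's no sanctioned way to run either half of this.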
Idk, maybe like not being forced to use their new glass UI? Or whatever new UI trend they'll decide to implement.
On an unrestricted OS, I can just switch to a different desktop environment.
If you read the rest of this thread instead of asking, you'll find plenty of examples. But hey, if you like macOS, great; anyone else's opinions don't matter.
Your definition of their product is different to theirs. They're selling a pretty sealed, you-get-what-you-get product. You want a hackable personal computer.
A bit like how you buy a can of Coke and you can't add your own sugar. It just comes with sugar, unless you buy a different product from Coke, which is a fixed choice of sweetener. Saying "other products let you choose whether or not to add that sugar or sweetener" to me doesn't mean that Coke need to change anything.
I'm a heavy Terminal user and run everything from local LLMs to full-stack dev (React/Python). I dibble and dabble in Blender, Unreal, and Logic Pro. I aimlessly browse the web looking for recipes, 3D printing files, shopping, HN, whatever. I'll occasionally spin up Age of Empires II locally or play some quick games via GeForce Now. I'm in full control of my Synology and QNAP NAS servers and the shit ton of media that's on them.
And I do all of that on my Mac. My 4090 rig is strictly for gaming with my son, and my retired thin clients running Proxmox are for running my household on HA.
Please tell me what I'm missing out on by using a Mac OS device as my daily driver.
The specific examples in the thread, AFAICT, are about iOS, not macOS, and the person you're responding to specifically mentioned Macs. It's very hard to find examples of "things you cannot do on an Apple Silicon Mac due to Apple-imposed restrictions that you can do on a PC" that aren't pretty esoteric. (Unless you want to argue that the inability to plug in a better third-party GPU is due to Apple-imposed restrictions, which is debatable but defensible.)
A future where we carry and manage just one device could be incredible. That said, today, even if iOS weren't so locked down and were more capable of that, I think I'd find myself frustrated. I run on-device local LLMs on my iPhone, and a heavily quantized 3B-parameter model starts to make the iPhone's thermal management throttle heavily after just a few light prompts, to the point that inference drops below 1 token per second and the phone gets hot to the touch. Maybe the rumored half-iPhone, half-iPad device could be the eventual platform from which something like this emerges.
While my main driver is a maxed-out Mac mini hooked up to an Apple Studio Display, at least once a week I pack up and store my Mac mini and plug an iPad Pro into my large monitor for a few days.
So, I feel like I routinely experience what we're talking about in this sub-thread. Given a few VPSes to ssh/mosh into for programming, plus a keyboard and mouse, this is a workable setup.
The one thing that always gets me to unpack my Mac mini and set it up again is that even with 16GB of shared memory on an iPad Pro, I can only run local models in a chat-style app. On macOS, my LLM use is mostly embedded in experimental scripts and apps.
Exactly. The real shame of these devices is that they're 99% of the way there, but that last inch of running some script requiring you to whip out a form-identical device that has been blessed with the ability to run uncertified code is maddening, to say the least.
perhaps that's what they're developing all these "private compute" servers for. Though I would be less than happy if Apple, the last (relatively) untaken hill of the SaaS enshittification wars were to go down that road. In the meantime I will continue to use my hilariously overpowered laptop as a SSH terminal to the machine I actually work on
I've used it, as well as an x86 phone running macOS and an iPad mini, on a lark for a week. At this point in my life, as much as I complain, iMessage is basically the only secure communication mechanism I can get most people to use.
(which would mitigate a lot of security risks by itself. I also note that people seem to do fine with desktop OSes, despite their outdated security models)
I already use a-shell to run python scripts that fetch media, news summaries, server dashboards etc. It's really a shame I can't actually do what I want like with android where I could make custom permanent free apps for myself and do what I pleased throughout the system, executing binaries that interfaced with the real fs or remuxing video, rsyncing to my server.
I'd make locking the phone while the flashlight is on require pressing the lock button again to wake the screen, no exceptions, so the screen no longer shines in my eyes and reduces the effectiveness of the flashlight, and so stray palm input stops opening the camera.
I'd hook screen time management of my children's devices—which I perform on my own device—into FaceID instead of requiring a stupid passcode.
You don't have to go far to find areas where iOS could use some customization. But if it's Apple's code, the most useful adjustments are off limits.
Jailbroken iOS was a fantastic platform for the first 9 major releases or so because it had that kind of stuff in it. Now it's "throw a suggestion in the box on our website and we'll ignore it in the order it was received."
From what I understand, iPhones support external displays out of the box, so you could use one as your main computer and do productive stuff like development, video/3D/photo editing (really, anything you can do on a computer), with the liberty to install open source tools, develop/open drivers for anything connected over USB or BT, etc.
Sure, maybe the person I replied to has that same line of thought.
What I'm getting at is: why do the same restrictions bother them on a bigger screen?
If the iPhone supported more traditional desktop resolutions when plugged into a display, you'd be staring at a screen with an Apple UI and more desktop/tablet-like amounts of screen real estate. What of the walled garden then?
In my case, I use bigger screen devices with somewhat exotic productivity tools that would not necessarily fit well in the walled garden.
On the other hand, an ultra-locked-down MacBook would sound pretty ideal for day-to-day browsing, handling financial tasks, work communications, and so on. Really everything except the software development tasks I work on.
On the other hand, I do almost everything over SSH already. I guess I really could easily live with a completely locked-down base macOS install without any issues. Even Terminal.app isn't too bad anymore.
> But really, imagine how much power these things have and if you could actually run a free (as in freedom, in the GNU sense) OS on them and really get access to all that power in a handheld device. Only if.
I sort of don't have to imagine, because somewhat viable options like this exist (eg. GrapheneOS). The issue there is that I'd still rather use a more polished handheld device (iOS) than jump ship and get those extra features.
And wondering what GrapheneOS would be like with all its power, plus the polish of iOS is pointless fantasy, because it likely won't ever happen.
My guess, based on experience, is that eventually, iOS's quality will degrade enough that I'll find Android or GrapheneOS more attractive.
> I have an M1, which is like N-times faster than the laptop I write this on. Yet it collects dust because I'd rather continue to use this old dinosaur laptop because that M1 macbook is a locked down, very fast, shiny Ferrari, but I just want a Honda Civic I can do whatever I want with.
Your M1 has supported Linux pretty well for years now… Install the Fedora Asahi Remix and give it a try.
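For reference, a sketch of getting there. The one-liner below is the installer entry point as documented by the Asahi Linux project (you pick Fedora Asahi Remix from its menu); treat it as an assumption to verify against the current install docs before running anything.

```shell
# Run the Asahi Linux installer from a macOS terminal. It resizes the
# internal disk and offers Fedora Asahi Remix as an install option,
# dual-booting alongside macOS.
# (Command per the Asahi project's documentation; verify before running.)
curl https://alx.sh | sh
```

The install is non-destructive to the existing macOS partition, per the project's docs, but back up first anyway.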
That's pretty much Google's current bet. They are slowly enabling first-party support for Linux apps on Android while connected to a screen in desktop mode.
It makes a lot of sense considering high-end SoCs are now more powerful than the M1.
For me, it's always been the lack of a power-user-friendly windowing/workspace scheme. You can approximate a tiling window manager using yabai or similar solutions, but it's just not the same thing.
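As an illustration, the yabai approximation usually amounts to a few lines in `~/.yabairc`. The option names below are from yabai's documented config keys, but this is a sketch of the idea, not a full setup:

```shell
#!/usr/bin/env sh
# Minimal ~/.yabairc sketch: binary-space-partitioning tiling with a
# small gap, plus an example rule excluding one app from management.
yabai -m config layout bsp                               # tile instead of float
yabai -m config window_gap 8                             # gap between tiles
yabai -m rule --add app="^System Settings$" manage=off   # leave this window alone
```

Even configured like this, yabai only handles layout; keybindings typically come from a separate hotkey daemon like skhd, and the deeper features require partially disabling SIP, which is part of why it "just isn't the same thing."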
I love using the MacBooks, but the OS just doesn't feel like it was designed for me, and that would be OK, but I have limited alternatives if I want all of the hardware to keep working.
Also, yes, gaming, but that's less important to me.
macOS is ad-ridden? Perhaps I'm already brain-broken, but beyond a few ads for iCloud pro for Time Machine or whatever when I'm already poking around in relevant settings, I never see ads. It feels extremely unobtrusive.
The other points are not relevant to me, so I suppose it makes sense why I don't care. But IIRC Apple's `container` OCI runner is highly optimized for the M series; did you have significant issues with it?
As I understand it, it's not a technical problem, rather a social one first off: you can build it but it'll be "empty" compared to all other options out there, even if it's technically superior to them. Network effect and all that.
There's also a technical problem you'll have to contend with: bots and scammers... so many bots and so many scammers.
I think it's an interesting area, but I've got no time or energy to undertake such an endeavor. However, I'd be happy to talk about it and discuss it further if you'd like to. Contact info is on my profile page here.
If you have the top market position already in browsers and search, pretty easy to get people onto a product like this regardless of whether better alternatives exist.
But since then, most cases have resulted in nothing more than slaps on the wrist. Have any major companies faced dire consequences for their anticompetitive practices?
So why would they stop using their market position in ways that benefit them and at worst result in minor fines or wrist slaps?
Are you sure they lost? If the CEOs had perfect crystal balls and knew that those particular business practices would result in the penalties that they got in their court cases, I bet they still would have done the same things.
Not that I am an expert in each case, but part of the evidence was that people in leadership/decision-making positions were informed that their practices and conduct were unlawful.
> Are you sure they lost?
Well, Chrome, and not Microsoft's browser, is the dominant web browser, and at the heart of MSFT's antitrust case was the bundling of Internet Explorer with Windows. So, yes, MS did lose.
While these are incredibly good, it's sad to think about the unfathomable amount of abuse, spam, disinformation, manipulation, and who knows what other negatives these advancements are going to cause. It was one thing when you could spot an AI image, but now and going forward it will be increasingly futile to even try.
Almost all "human" interaction online will be subject to doubt soon enough.
Hard to be cheerful when technology will be a net negative overall even if it benefits some.
By your logic, email is clearly a net negative, given how much junk it generates: spam, phishing, hate mail, etc. Most of my email at this point is spam.
If we're talking objectively, then yes: by definition, if it's a net negative, it's a net negative. But we can both agree that in absolute terms the negatives of email are manageable.
Hopefully you understand the sentiment of my original message without getting into the semantics. AI advancements, like email when it arrived, are going to turbocharge the negatives. The difference is in the magnitude of the problem: we're dealing with a whole different scale than we have ever seen before.
Re: "Most of my emails at this point are spams": 99% of my emails are not spam. Yet AI spam is everywhere else I look online.
Their argument is a false equivalence. You can't just say "if you're saying X is negative, you must believe that Y is negative, because some of the negatives could be conceptually similar." A good-faith cost-benefit analysis would put the costs and risks of an extremely accurate, cheap, on-demand commercial image generation service and those of an entirely open, asynchronous, worldwide text communication protocol in different universes.
Wasn't the plan AGI, not ROI on offering services based on current-gen AI models? AGI was the winner-takes-all holy grail, so all this money was just buying lottery tickets in hopes of striking AGI first. At least that's how I remember it, but AGI dreams may have been hampered by the lack of exponential improvement in the last year.
As a sibling commenter mentions, Zuckerberg is currently dropping billions on AGI (or "superhuman intelligence", whatever the difference is). And I don't have time to find it, but Sam Altman might've said AGI is the ultimate goal at some point; idk, I don't pay too much attention to this stuff tbh, you'll have to look it up if you're interested.
Oh, and John Carmack, of Doom fame, went off to do AGI research and raised a modest 20(?) million last I heard.