What I find most puzzling is that everyone seems to just be violating basic rules that have been in place for ages.
Things like:
- If you can't respond to a UI event, wait until you can
- Menus should be tree structures
- Pressing alt should underline the hotkeys you need to access anything clickable
As well as just basic responsiveness and predictability. A 2000-era Windows application may not have been pretty, and may well have had several different styles all imitated from Office, but at least I knew what everything did, and when it was slow it at least did what I expected.
This meant I could start the computer, log in, potentially start and use several applications and only then turn on the screen. Nowadays that has no chance of working, because even to log in I need to press enter or click some button (which one depends on how I logged in previously, maybe) before I can even start typing, and doing so eats a random number of keystrokes while the damn login screen loads to do its one damn job.
It's because we went from the Desktop environment, where rules were well-documented and standardized, to the Web/Mobile environment, where rules had to be reinvented and, for the most part, were not.
Over the past year I've started thinking a lot more about design and UI work, and I think it's basically impossible to design things, or create design systems, that appeal broadly to different types of users in a cross-platform way.
I personally love dense UIs and have no expectation of doing certain kinds of work on a phone or a low-powered device like a Chromebook or bottom-barrel laptop. But if you're a company trying to sell products to a broad user base, you want to try to design in a way that works for those kinds of users because they still might be end-users of your product. And there's a good chance that those platforms may be where someone first evaluates your product (eg from a link shared and accessed on a mobile device) even for the users who do plan on using more powerful desktop devices to do their work.
So instead we get these information poor, incoherent (because it turns out proper cross-platform, cross-user design is much more difficult than just getting something that works cross-platform for all users on its surface) interfaces. I guess I'm writing this just to add, web/mobile have complicated things partially because more than just requiring their own distinct patterns, they each represent a distinct medium that products try to target with the same kind of design. But because they're different mediums, it's like trying to square a circle.
I'm not convinced that it's possible to create a UI toolkit that works on both desktop and mobile without one compromising the other. It's a bit like trying to design a vehicle that can serve both as a 2-ton pickup truck and as a golf cart; the needs of the two are just too different.
It is absurd that there is no standardized UI toolkit, or rather that the web browser _is_ the standard, with its characteristic _lack_ of user-interaction idioms.
The fact that there are multiple platforms for UIs* is a huge failure of the industry as a whole. Apple, Microsoft and Google could have had a sit down together at any point in the last 20+ years to push some kind of standard, but they decided not to in order to protect their gardens.
*: a standardized UI platform doesn't necessarily mean a standardized platform. Just standardization of UI-related APIs and drawing.
My guess 10 or so years ago was that Google would be the first to bake Material UI into the browser with web components, and then any browser would essentially reuse that to extend out whatever style it wanted. It really seemed like the way the web (and Google) was heading. Instead we got bad Material UI knock-offs in about 45 different UI frameworks.
Another reason is that in 2000 if you wanted pointless UI wank you had to implement it yourself from scratch. Today with bloated frameworks where the developers ran out of ideas 15-20 years ago and have filled the intervening time adding wank that no-one asked for and no-one needs, everything you build with it gets to include every brain fart someone had at 4am back in 2017 that they thought looked cool at the time.
> basic rules that had been in place for ages. Things like: [...]
I am going to add my favorite here, just to rant into the void. A dialog box's options must never be Ok/Cancel. These are not the same sorts of things. "Cancel" is a verb, "Ok" is a noun (in this context). Even if "Ok" is taken to mean the verb "acknowledge", it is still not an alternative to cancelling.
99% of these dialogs should be "[Verb]/Cancel": Change "Ok" to a verb or short phrase indicating the action that will be taken if you press it. Don't do the action if the user hits "cancel". The verb should be something specific like "Delete file" or "Terminate process" and not something useless like "proceed".
IMO the ubiquitous Yes/No/Cancel is even worse. No and Cancel are too conceptually close. Doesn't help that these usually show up when you're about to lose all your unsaved changes.
We've got big screens now! Use more words! Save changes/Discard changes/Don't quit.
We've ended up in a world where power users have been forgotten. Not out of malice, but out of a misguided aim to reduce complexity and achieve consistency with the web.
I would argue that desktop is the platform for power users, and its future depends on them. The keyboard shortcuts, the micro-interactions, the window management -- this stuff is all important when you're using a system for 8+ hours per day.
Yet we risk desktop experiences becoming less useful due to the UI becoming "dumber" as we keep shoehorning websites onto the desktop. Website UI is dumb. It's mouse driven, keyboard is an afterthought. There's no consistency, and you have to re-invent the wheel every time to get the details right (almost never happens).
>We've ended up in a world where power users have been forgotten.
I think it's more like the OS vendors have stopped being operating system vendors, and are now - instead - vendors of eyeballs to advertisers.
The less the user is GUI'ing, the more they are just watching, placid, whatever else is on their screen.
For native apps to survive, they need to not be platform-specific - i.e. web apps, which require a browser and all its responsibilities - but rather cross-platform, reliable, predictable on all platforms - i.e. dissuaded from using native, but rather bespoke, UI frameworks.
This is attainable and there are many great examples of apps which are in fact, old wheels not re-invented, which still work for their particular user market.
I have the most respect for apps I can use on MacOS, Windows, and Linux - with the same hotkey/user experience on all platforms, equitably - and the least respect for apps which 'only run on one of them', since that is of course nonsense in this day and age.
The cognitive load of doing a web app that can do all the things a native app can do, is equivalent to the load required to build a cross-platform app using native frameworks, so ..
>i.e. dissuaded from using native, but rather bespoke, UI frameworks.
Based on my experience, I would be quite reluctant to rely on any non-native cross-platform desktop UI framework that is not web-based. These tend to be either less performant, look outdated or are bug-ridden.
Qt apps don't feel great on macOS, though it's by far the best for mac-ish UI. Dropbox was Qt for a long time and I'd argue it worked well for them. It's easy to fall into the "uncanny valley".
On Linux, Qt apps feel a bit off in GNOME, though you can never satisfy everyone as it's the wild west.
I think Qt also suffers from not really being anyone's favourite.
On the one hand, you have web developers who tend to not really appreciate the nuance of the desktop as a platform. They're not going to advocate for Qt, it's not CSS/HTML/JS.
On the other hand, you have native Mac developers who love Apple's toolkits (AppKit, maybe SwiftUI). They're not going to advocate for Qt either.
Lastly, you have native Windows developers who have been burned so many times they don't advocate for anything in life anymore.
QML doesn't have a way to define interfaces with JSX and doesn't integrate with the wider JS tooling. From my very limited experience, it still feels too close to the C++ world.
Ah, the magic times when screen resolutions were large enough to display lots of information, in proper 4:3 aspect ratio, just before they got flattened and the industry started treating them as short view distance TVs.
Widgets look like whatever you want them to look like; if they feel like they're from the 2010s, it's because the implementer made that choice, not because of a limitation in QtWidgets.
That's your prerogative, but web-based UI's have their hard limits, and native cross-platform desktop UI's are no more/less problematic than the browser.
> I have the most respect for apps I can use on MacOS, Windows, and Linux - with the same hotkey/user experience on all platforms, equitably - and the least respect for apps which 'only run on one of them', since that is of course nonsense in this day and age.
No. I want things like keyboard shortcuts to reflect the platform norms of where the app is running (macOS in my case). A shared core is fine, but the UI framework must be native to be acceptable. Ghostty is a "gold standard" there.
This is why most web apps are lowest-common-denominator annoyances that I will not use.
Indeed, if the framework is sensible, keyboard shortcuts reflecting platform norms is entirely attainable in a manner where developers don't have to bother with it much, if they don't want to.
There are plenty of examples of cross-platform UI's surviving the hotkey dance and attaining user satisfaction. There are of course poor examples too, but that's a reflection of care, not effort.
Mozilla removed a lot of power-user features and customization from Firefox claiming that their telemetry showed that few users used them. That's the reality now, nobody wants to develop and maintain things for the 1%.
Sometimes this is a self-fulfilling prophecy. It is the novice users who, over time, become power users through repetitive usage. If there are no user efficiency gains to be had through experience in a UI, then it just prevents the emergence of power users. Users just have to wait until a product manager or designer somewhere notices their pain and creates a new feature with 10x the effort it would have taken to simply maintain the lower-level shortcuts (e.g. keyboard accelerators, simple step automations).
Was it the same 1% that was using each of the long-tail features? I suspect that by refusing to invest effort in at least some amount of niche features, we essentially alienate _everybody_.
Browsers like Vivaldi that cater to power users are gaining in popularity. They are not trying to be the next Chrome, they are just out to serve their niche well.
Firefox has nothing to differentiate itself from Chrome at this point.
Container tabs, independent proxy config (chrome only respects system-wide proxy), vertical tabs, and functional adblockers are the four big features for me.
Go to an adblock test page in Chrome and compare it to Firefox with uBlock Origin. Chrome can't block some ads, and some of the ads it can block leave behind empty containers.
Not only that, but for a time, Firefox seemed to be copying everything Chrome did, maybe as a way to stop the exodus of users. But people who wanted Chrome-y things were already using it, and people who didn't might as well, because Firefox was becoming indistinguishable from it.
God I wish Mozilla would be made great again. It's tragic how mismanaged it is.
Is it mismanaged? Sure, they spend a fair amount on administration. Sure, they spend about 10% on Mozilla Foundation stuff. But they still spend ~2/3 of revenue on software development.
And they're somewhat stuck between a rock and a hard place.
If they try to evolve their current platform, power users bitch. If they don't evolve their current platform, they lose casual users to ad-promoted alternatives (Chrome and Edge).
And they don't really have the money to do a parallel ground-up rewrite.
The most interesting thing I could see on the horizon is building a user-owned browsing agent (in the AI sense), but then they'd get tarred and feathered for chasing AI.
Part of Mozilla's problem is that the browser is already pretty figured out. After tabs and speed and ad blocking, there weren't any killer features.
To a first approximation, nearly everyone who installed Chrome did so because of Google putting "Runs best in Chrome" on every page they own and including it with every single possible download, including things like Java updates!
Almost nobody chose Chrome. Microsoft had to change how defaults were managed because Chrome kept stealing defaults without even a prompt.
People use "the internet", they don't give a fuck about browsers. Firefox only got as high a usage as it did because of an entire decade of no competition, as Internet Explorer 6 sat still and degraded.
Chrome was installed as malware for tens of millions of people. It used identical processes as similar malware. It's insane to me how far out of their way lots of "Tech" people go to rewrite that actual history. I guess it shouldn't be surprising since about a thousand people here probably helped make those installer bundling deals and wrote the default browser hijacking code.
It should be a crime what Google did with Chrome. They dropped Chrome onto unsuspecting users who never even noticed when malware did the exact same thing with a skinned Chromium a couple days later. Microsoft was taken to court for far less.
How was Mozilla supposed to compete with millions of free advertising Google gave itself and literal default hijacking?
> We've ended up in a world where power users have been forgotten. Not out of malice, but out of a misguided aim to reduce complexity and achieve consistency with the web.
Power users are less susceptible to suggestion and therefore less profitable. They have largely moved to OSes that do not interfere with their wishes, allowing them to make their own choices about what they can or can't do/run (Eg. Linux).
If you become a power user you realize that nothing matches the power of the command line. And at that point you also realize that there are better OSes that allow you to fully exploit the true computing power that is terribly limited and constrained by a GUI.
Nonsense. Do you read and write your email using the command line? I use Mutt and Vim for that, and that’s not the command line. GUI with power-user support is just as efficient as Mutt and Vim. Did you use curl to read this thread and submit your comment? I use Firefox with Vimium C, which allows most web pages to be navigated and operated efficiently by keyboard.
I know this isn't really your main point but I don't think they've been trying to reduce complexity but rather increasing ease-of-use for the end-user*. Those things are often completely at odds with each other in software as I'm sure you know.
*well, that seems to have been their goal in the past; nowadays it just seems like they've been trying to funnel windows users to their other products and forcing copilot into everything.
> We've ended up in a world where power users have been forgotten.
I think the world changed. "Power users" in the traditional sense use Linux and BSD now. Microsoft and Apple dropped them when they realized how lucrative it would be to dumb things down and make computers more like cable TV.
Well, Alt+Tab in Windows is supposed to switch windows. That's unless you're in Microsoft Edge where obviously, it switches tabs. Inconsistent and annoying.
Browser tabs are the fault here, and browsers are trying to be an OS environment, so Alt+Tab is useful for major task switching. I agree it's inconsistent and annoying, but I like Alt+Tab as a way to find the window where I'm writing that email to someone.
Android and Chrome worked like this for a hot minute too. I assumed the idea was to promote webapps to look like they're first-class citizens, but in practice it's just bizarre and confusing UX.
I hate this too. You can turn it off. In Settings, go to System->Multitasking and change "Show tabs from apps when snapping or pressing Alt+Tab" to "Don't show tabs."
> This meant I could start the computer, log in, potentially start and use several applications and only then turn on the screen.
I mean... well... responsiveness matters to me too, and I am impressed by such inspired productivity, but... I'm also confused. Why not turn on the screen - the monitor, right?
Now thinking about how gui lag might impact the sight-impaired, tangential as that is...
It was meant as an example, not a productivity tip ;-)
Anyway, the real point is that it's just easier to use something if you don't need constant visual feedback. Being able to use something blind is more than just an accessibility issue; it is just better in general.
I wonder who decided to use a step function for the speed-accuracy plot. They must have thought the convex hull would be wrong because you can't really make linear combinations of algorithms (you could, but you'd have to use time, not speed, to make it linear). So I get why you would use step functions, but the step is the wrong way around: the current plot suggests accuracy doesn't drop if you need higher speeds.
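A minimal sketch of the argument, with made-up speed/accuracy numbers: if the frontier means "best accuracy among algorithms at least this fast", it has to drop immediately once a required speed passes an algorithm's measured speed, rather than hold that accuracy flat out to the next point.

```python
import numpy as np

# Hypothetical (speed, accuracy) measurements, one per algorithm.
speed = np.array([1.0, 2.0, 4.0, 8.0])
acc = np.array([0.95, 0.90, 0.80, 0.60])

def frontier(s):
    """Best accuracy achievable by any algorithm at least as fast as s."""
    ok = speed >= s
    return float(acc[ok].max()) if ok.any() else 0.0

# Just above speed 2.0, the 0.90 algorithm no longer qualifies,
# so the step must drop right away ("pre"-style), not hold flat.
print(frontier(2.0))  # 0.9
print(frontier(2.1))  # 0.8
```

A step drawn the other way around would report 0.9 at speed 2.1, i.e. claim an accuracy no algorithm actually achieves at that speed.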
> our entropy calculation will now have to use the posterior means
Now hang on for a bit, you can't just plug in averages.
At least that's what I initially thought, but in this particular instance it works out correctly because you're calculating an expected value of the entropy from the two possible outcomes and there the posterior mean is indeed the correct probability to use.
You do have to take the prior into account when calculating the posterior distributions for B, but that formula is in the article.
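A quick numerical check of that claim, using a hypothetical Beta(3, 5) posterior over a success probability: the predictive probability of a success, the integral of p times the posterior density, is exactly the posterior mean a/(a+b), which is why weighting the two possible outcomes by posterior means works out here.

```python
import math
import numpy as np

# Hypothetical Beta(a, b) posterior over a success probability p.
a, b = 3.0, 5.0

# Posterior density on a fine grid (endpoints nudged off 0 and 1).
p = np.linspace(1e-9, 1 - 1e-9, 200001)
beta_norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
density = p ** (a - 1) * (1 - p) ** (b - 1) / beta_norm

# P(next outcome is a success) = integral of p * posterior(p) dp.
y = p * density
predictive = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(p)))
print(predictive)  # ~0.375, i.e. the posterior mean 3 / (3 + 5)
```

So "plugging in the average" is not a Jensen's-inequality mistake in this step; the posterior mean really is the correct outcome probability to weight by.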
It is definitely not obvious, but I wouldn't say it is completely unclear.
For instance we know that algorithms like the leapfrog integrator not only approximate a physical system quite well but even conserve the energy, or rather a quantity that approximates the true energy.
There are plenty of theorems about the accuracy and other properties of numerical algorithms.
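A small illustration with made-up parameters: leapfrog on a harmonic oscillator keeps the energy error bounded over many steps (it exactly conserves a nearby "shadow" energy), rather than drifting the way a naive Euler scheme does.

```python
def leapfrog(q, p, dt, steps, grad_U):
    """Kick-drift-kick leapfrog for H(q, p) = p^2/2 + U(q)."""
    p = p - 0.5 * dt * grad_U(q)
    for _ in range(steps - 1):
        q = q + dt * p
        p = p - dt * grad_U(q)
    q = q + dt * p
    p = p - 0.5 * dt * grad_U(q)
    return q, p

# Harmonic oscillator: U(q) = q^2/2, so grad_U(q) = q.
grad_U = lambda q: q
q0, p0 = 1.0, 0.0
E0 = 0.5 * p0 ** 2 + 0.5 * q0 ** 2

q, p = leapfrog(q0, p0, dt=0.1, steps=10000, grad_U=grad_U)
E1 = 0.5 * p ** 2 + 0.5 * q ** 2
# After 10,000 steps the energy error is still tiny and bounded.
print(abs(E1 - E0))
```

The error oscillates at roughly O(dt^2) of the true energy instead of accumulating, which is the "conserves a quantity that approximates the true energy" property.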
If you run a loop alongside the agent and make PRs that are tractable, then there isn’t much difference. But to me, it seems like we have collectively lost our minds and think it’s okay to make a 10k LOC PR and ask someone else to review it.
In my experience LLMs also suffer massively from "not invented here" syndrome. I've seen them copy whole interfaces just to implement a feature that was already implemented in a dependency.
All with verbose comments that are just a basic translation of the code next to it.
> The cushions especially are not that durable and should be considered consumables.
This is because they're mostly synthetic "leather", which doesn't last long. Everyone is probably familiar with how synthetic leather starts to flake off. Icky. If you get some pads made from real leather (usually lamb is the go-to choice) they'll last virtually forever. However, since real leather is heavier and less porous, this changes the way they sound very noticeably (makes them sound less open). If you use open/half-open headphones anyway, the difference won't be too extreme: it just moves the sound closer to closed headphones. On closed headphones, leather cups make it feel like something is covering/plugging my ears. Uncomfortable. Would not recommend.
Worse than how quickly fabric and synthetic leather degrade is arguably that those cups can hardly be cleaned and, even when not that old, get icky quickly, with dirt stuck in the fabric and fine cracks.
Is there any way to preserve those synthetic materials and gain a longer shelf life (e.g. 15+ years)? E.g. heat-sealed foil bags with excess air removed, or even filled with nitrogen? The idea is to buy several spare consumables when you first purchase the equipment, and break them out at regular intervals.
I have the same problem with vulcanized coatings that go sticky after several years (looking at you, 3D Connexion SpaceMouse, Saitek X52, etc).
I've had to replace the pads on my Sony 7506 cans as well. I was very impressed with the parts that can be replaced on these cans. The packaging includes an exploded diagram of the parts.
I love my 7506s. And I have a recurring event on my calendar to remind me to buy new pads every 2 years. That's almost exactly how long they last. I've tried 3 brands and they've all lasted almost exactly 2 years. The original Sony pads also lasted 2 years.
Yeah the sound stage definitely gets impacted. I tend to use leather cups in the winter and fabric ones in the summer. Nothing worse than sweating from your temples while you work.
I bought the official ear pad replacements for my PXC450 back when those were available, for 1/4 of what the headset cost. They are so terrible that I stopped using the headphones entirely. I lost faith in Sennheiser after that, now I just use cheaper ear buds.
I bought the $21.99 Chinese replacement “Sennheiser” pads from Amazon. Straightforward to replace (though not trivial). Very comfortable. My headset feels as good as new.
Wait, am I reading this wrong? The producer and importer try to soften the impact of the tariffs, only for the retailer to massively increase their prices?
Many retailers increase their prices by multiples of the tariff increase rather than a straight passthrough so that they can maintain their margins. It's probably why a lot of the biggest retailers with monopolies aren't complaining much about tariffs. They mostly keep the same margin and actually increase revenue. Meanwhile, it's been incredibly damaging to small businesses and consumers. Functionally, tariffs have been a massive wealth transfer.
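A back-of-the-envelope sketch with entirely hypothetical numbers (a 55% target gross margin, a $0.49 tariff-driven cost increase): fixed-percentage-margin repricing multiplies the cost increase rather than passing it through dollar-for-dollar.

```python
# Hypothetical numbers for illustration only.
margin = 0.55        # retailer's target gross margin (55%)
cost = 1.00          # pre-tariff unit cost to the retailer
tariff_bump = 0.49   # cost increase attributable to the tariff

# Fixed-margin pricing: price = cost / (1 - margin).
price_before = cost / (1 - margin)
price_after = (cost + tariff_bump) / (1 - margin)

increase = price_after - price_before
print(round(increase, 2))  # 1.09 -- more than double the 0.49 cost bump
```

Any cost increase gets divided by (1 - margin) on its way to the shelf price, so the higher the target margin, the larger the multiplier.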
Can't repeat this enough and I'd like to make sure to connect the dots. The Big Beautiful Bill that was signed into law cut taxes. To keep the US Federal Government from going (even more) into debt, Trump introduced aggressive tariffs (it doesn't matter that he introduced the tariffs before the BBB became law because he/they knew the BBB would pass and that was baked into the tariff decision).
The BBB tax cuts benefit the wealthy much more than the average person. The tariffs are borne by both the wealthy and by the average person when they buy tariffed goods, but those tariffs are easily absorbed by the wealthy while acting as an additional tax on the average person by increasing prices. This is just about as direct a transfer of wealth from the average person to the wealthy as you could possibly put into place (barring an actual transfer where the average person is taxed and those dollars are literally transferred directly into a wealthy person's bank account).
In a way, it's a genius move. Convince a healthy chunk of the US population that you're on a populist crusade to bring jobs back to America while increasing the wealth of the wealthy and taking even more of the average person's income. Don't forget that the reason the jobs were exported in the first place was to decrease costs so that, you guessed it, wealthy people would get wealthier (but at least in that scenario the cost of a tv went way down, am I right???).
All that said, I don't mean to suggest that bringing jobs back isn't actually a goal. It's just not the primary goal. My take on the priorities of the current admin's tax policy, including the tariffs (which, broken record, are taxes) 1. decrease taxes on the wealthy 2. decrease income taxes on everyone else who pays taxes 3. get "everyone else who pays taxes" to fund the decreased taxes on the wealthy 4. bring jobs back. Somewhere in there is also "create a mechanism for opaque profiteering." I'm not quite sure where that falls on the list. Cynically it's probably number 2.
Based on the graph, the increase in cost to the retailer was $0.49 and they marked it up $1.10. I imagine this is pretty standard markup, but it multiplies the effect of the tariff and passes it on to the consumer, not to mention the producer and importer.
Also, I'm not sure retailers are necessarily to blame. Some use pretty simple math in calculating the retail price based on cost and don't necessarily have visibility into the tariffs.
I can share my own experience as a small business owner. I sell coffee. I engage in some direct trade and also buy some coffee from domestic vendors who already have the coffee stateside.
I primarily buy Costa Rican coffee and they got hit with a 10% tariff. That adds like 5 cents to a latte. Whatever. I’m not raising my prices over that. But then Brazil got nailed with much higher tariffs and they are the #1 exporter. Colombia was another one that got hit with high tariffs and they are a major producer. Suddenly, that was driving up the cost of my Costa Rican coffee as demand that was previously met by Brazil and Colombia shifted to other countries. I went from being the exclusive U.S. importer of my coffee to being in a bidding war. The last time I imported coffee, it cost me twice as much as the shipment prior. Then they tried to raise the price again. I ended up having to find new suppliers before things eventually settled down when the people in charge realized you can’t produce coffee in the U.S. (Technically, Hawaii produces exorbitantly priced coffee at a max capacity that amounts to a rounding error relative to domestic demand. There’s no other place in the U.S. with the climate to grow coffee. Besides, it’s incredibly labor intensive. Coffee essentially can’t be produced here.)
Cups were a real pain in the ass too. We were buying our stuff from the Dominican Republic and Latin America, but people are mostly getting that stuff from China. When China became prohibitively expensive, everyone rushed to find other suppliers. That drove prices up and messed up lead times in the short term. The story with most packaging was the same.
Literally every single item required for my business increased in price. It turns out nobody produces anything 100% domestically without any foreign input. My syrups are made in the USA but the bottles they come in are from somewhere else. My empanada shells come from Argentina. Everything from chocolate to pistachios to straws and cleaning supplies. Everything is a product of global trade, whether it’s ingredients, raw materials, packaging, or the machinery and tools used to manufacture it. To maintain my own equipment, I have to buy parts from Italy.
I held out for several months. I was feeling it as a business owner as well as every time I went to the store. I knew my customers were feeling it. I live in the neighborhood where my business is located. A lot of my customers are retirees on a fixed income. The last thing I want to do is add to the pressure. Meanwhile, I have employees who deserve a living wage. I have my own needs. I dumped some products and suppliers that became too expensive for me to sell and have any remaining dignity. I saved everywhere I could without compromising on quality.
About 7 months into this bullshit I had to raise prices for most of my products. It couldn’t be helped. Still, I’m embarrassed at how much we have to charge for some items.
I feel like the last year has been complete chaos. It’s economic shocks and supply chain disruptions everywhere I look. It’s just one thing after another and that’s before I even turn on the news.
And I bet when you did eventually raise your prices, you raised them by more than you strictly needed to. Partly to help offset the losses you took for the 7 months you didn't raise prices, and partly to give yourself enough margin so that if your supplier's prices go up tomorrow (which they will) you don't have to raise your prices immediately.
That's classic "sticky prices" behavior. Prices change less often, and by more, than they would in a classical economics model.
Nobody is "trying to soften the impact of tariffs". Everyone was and is trying to maximize profits. Who ends up paying has to do with "elasticity" which roughly is about how much the tax actually impacts you.
In this case, it ended up that the retailer raised prices, probably because the retailer can just sell domestic wine for cheaper (close substitute). Retailer profits still didn't increase because of reductions in sales (~12% iirc) and increase in after-tariff inventory prices. This is textbook econ 101. Substitute, profit maximization of a firm, supply and demand etc.
You're confusing exporter and importer lowering prices with the retailer facing lower after-tariff inventory costs. Inventory costs still went up.
Retailer profits from foreign wine decreased because of reductions in sales (~12% iirc). This is textbook econ 101, profit maximization of a firm, supply and demand etc.
Taxes make after tax prices go up and reduce profits due to reduced quantity.
No reason to go searching for a "plausible excuse" or some greater critique of culture.
So that's not true. I worked for years in the grocery business and prices DO come down, and in fact I've seen evidence all over the NYC market of prices falling recently.
examples include eggs for $2.99 in some places (!), and other competitive categories like unbranded meat and cheese, pasta, and more.
prepared foods seem to be slower, I'm assuming because labor costs continue to rise.
"That's not true" is too strong a statement on your part.
The statistic you cite does not necessarily contradict what the parent comment is saying. "Up 29% since February 2020" is an absolute change since a specific point. The parent comment is saying prices have "come down" i.e. since their peak. It can still be up overall, so long as it's not up as high as it was at one point.
EDIT: To be clear, the parent comment might still be wrong, or might be right only within a biased sample (i.e. their own experience). I'm only making the point that the statistic you're referencing does not outright disprove what they're saying. Prices can be up since six years ago AND down since two years ago (random time periods chosen for illustration only).
Of course this is talking about the overall price level. The prices in specific sectors can fluctuate independently of that. Food and energy in particular are excluded from core inflation because they're especially volatile.
Prices never came back to pre-pandemic levels, that is absolutely correct. But if you remember, prices ballooned last year when Trump took office (eggs were getting more and more expensive, etc.), and I gotta say prices came down a bit after that, though never to previous levels.
What's your explanation for the mechanism? Because my understanding is that the M1 spike was largely an accounting rule change [0], not a "money printer go brr."
As your link says, the spike was the result of a change in policy, not an accounting rule change. Effectively, there is very little difference between printing the money or just making some extra money available for use.
The money printer was also going brr. And that is probably the cause for some of the inflation.
This is a deliberate choice by Congress to give the Fed a mandate to target 2% inflation. In particular Congress hasn't given them any instruction to try to make up for mistakes. If inflation overshoots in one year then they don't try to undershoot in the next year. They just keep trying to hit 2% inflation.
So if retailers tried to lower prices to pre-COVID levels then they would fail. The Fed would see the falling prices and cut rates until 2% inflation was achieved.
That very much isn't the only right way, and it is far too close to government tracking of activity online. For one, it effectively allows governments to disallow someone from accessing the internet.
All this to let you do stuff you were allowed to do anyway.
The problem is handing kids admin level access on a device with full unfiltered access to several communication networks. You do not fix that by demoting everyone's access.
Yep, recycling a post about reasons to do it that way:
> 1. Most of the dollar costs of making it all happen will be paid by the people who actually need/use the feature.
> 2. No toxic Orwellian panopticon.
> 3. Key enforcement falls into a realm non-technical parents can actually observe and act upon: What device is little Timmy holding?
> 4. Every site in the world will not need a monthly update to handle Elbonia's rite of manhood on the 17th lunar year to make it permitted to see bare ankles. Instead, parents of that region/religion can download their own damn plugin.