
There's a weird disconnect with computers.

The software we use is 1000 times more complex than it was 20 years ago, leading to performance that really has not improved much, and a lot of functional stagnation. Many applications are slower to start today than they were 10 years ago, because back then they were binaries and today they are Electron apps. Your average web-based word processor with cloud storage performs about the same as Microsoft Word did on Windows 3.1 on a 486 saving onto a floppy. Screens are bigger, resolutions are bigger, but the content is largely the same because the limit is human perception, not technology.

If you actually keep things simple, you can build absolutely ridiculous things off modern hardware. I'm running an Internet search engine out of my living room. You could not do that 20 years ago. What makes it possible is modern SSDs and the absolutely mind-boggling computing power of modern CPUs.



> performs about the same as MS Word did on a 486 with a floppy

Except (gdocs for comparison) I now can:

1. Effortlessly access the doc on multiple devices, wherever there is cell or WiFi.
2. See a visual history of every change I've ever made to the doc.
3. Collaborate with many other people in real time.
4. Easily incorporate images, tables, drawings, and other files.
5. Compare documents.
6. Use other character sets and emoji.
7. Add, have others add, and review comments.

I'm sure there's more. I don't remember everything about how primitive things were 30 years ago.


Right, but how many things are actually central to document processing?


> actually central to document processing?

ahh, the no true scotsman style of logical fallacy.

Document processing is neither strict nor well defined. The features listed in the parent comment are all very useful, and if the floppy-disk word processor of 30 years ago could have done them, the people back then would have found them useful too.


Yes, they would find it useful, but in terms of productivity, what's it worth? For the things that aren't collaboration, it's probably a single-digit percentage. Real-time collaboration is worth a lot for certain documents, but you could also have managed that on a 486 if you had a modem.


Literally 100% of the documents I process are sent to other people. So collaboration and sending documents are core functionality.


It’s even stronger than you suggest: the actual “word processing” bits turned out to be a vestige of a paper world, and “collaborative text editing” was actually the killer feature.


I wonder why we use paper layout in Word/GDocs for documents that will never be printed.


All of it, what do you think isn’t?


Word processing is maybe not the best example. Human needs in terms of writing documents have not changed significantly in the last 20-30 years. The mechanics are not the bottleneck, and neither is application startup time. I know it is something you can measure, but I don't agree that it is a significant data point. I generally only start a word processor once per day, and then it stays running as I move between writing tasks and other tasks. Startup is an insignificant part of that time.

Other tasks have become more efficient in that time period. Anything involving graphics has gotten much easier to do on computers than before and can be done by more people. Project Management tools are much faster and easier to use.

What I do see in the corporate world is an emphasis on efficiency that requires spending a lot more time on tasks and running alternate scenarios to be more efficient. This seems more doable now because some of these things are easier, but it's too easy to ignore the time spent doing more of this kind of thing.


> The limit is human perception.

I couldn't disagree more. The general choppiness of modern applications is noticeable by literally everyone. I'm so bothered by it that I even went back to a wired mouse recently because the latency of my Bluetooth mouse made me uneasy. Another point: VR is rewriting entire technology stacks JUST to get lower latencies, precisely because a lot of people get sick if it isn't good enough. 120Hz+ displays are becoming more commonplace, thank god. And I could go on.


We had low-latency input and smooth animation 30 years ago. The latency and lag of modern user interfaces is all in the layers and layers of software. It's especially noticeable when you look at NES-era games: they ran at 50 or 60 FPS (depending on PAL or NTSC) and typically had zero input lag (unless some sprite limit was being exceeded). Modern games at 50 FPS are so sluggish you feel like you've downed a full bottle of wine or something. The difference is the time it takes to render a frame today, which can be 100ms or more, especially when the GPU is struggling.

But what I meant was like pixel density and so on. My screen is a lot higher resolution than it was 20 years ago, but it doesn't actually display much more information as my ability to actually resolve fine details has if anything worsened with the years. My screen is farther away and my font size higher, that's the big difference.


I've actually really struggled with slow, unoptimised software forever. Microsoft Word's Equation editor is so slow and sluggish that once you've written one page of equations or more it just becomes unusable, since it's not feasible to wait several seconds for the equations you typed in to appear.

It's the same with code editors. Sublime Text 3 is crazy responsive and a joy to use, but I missed certain features that VS Code had, so I switched. The latency issues are very noticeable, though: typing is a little slower, switching tabs is very slow (100-ish ms?), and when opening new files you could literally watch the text being color-coded before your eyes on the i7 laptop I used back then. Things got worse when I started developing Flutter applications while running the Flutter and Dart extensions. VS Code would sometimes take several seconds to react to me pressing the backspace key, which made it incredibly frustrating to work with.

How is it possible that our computers are so incredibly performant these days, yet seemingly not fast enough for simple document editing or text editing tasks?


I actually think we reached peak word processing. The only word processed documents that I ever see are things that nobody will ever read, such as HR announcements, functional procedures, and dissertations. Today, more people write in the e-mail editor or chat app. Even students no longer need word processing. When my kids were in grade school, they were required to follow formatting standards that included page margins, for documents that would never be printed. Today, they use whatever cloud editor is convenient.

I wonder how much of today's software is designed to make us more productive or efficient, by helping us manage... software?


I read a small municipal report from the early 1980s, and it was so refreshing to read; very slim, got to the point. Probably because someone had to type it by hand, so anything not useful was edited out. By the late 1980s in offices and the early 90s at home, it became too easy to copy and paste. Document lengths spiraled out of control.


> The software we use is 1000 times more complex than it was 20 years ago, leading to performance that really has not improved much, and a lot of functional stagnation. Many applications are slower to start today than they were 10 years ago, because back then they were binaries and today they are Electron apps.

Also known as Wirth's law: https://en.wikipedia.org/wiki/Wirth%27s_law

Sometimes it worries me. If we had a boring set of usable and safe tools for development that were also pretty fast, I wouldn't have to update my hardware every 5 years or so. But JetBrains IDEs (just one example) basically demand that I do, if I want their other shiny features.

Perhaps something like Java instead of Python. Perhaps something like Go instead of Java. Perhaps something like Rust instead of Go.

Just boring (predictable), stable and dependable programming languages, supported on every platform with a set of native libraries. Perhaps a bit like what LCL did in regards to GUI in particular: https://en.wikipedia.org/wiki/Lazarus_Component_Library


I mostly do Ruby and code in Atom, or Sublime before that. I don't have noticeable startup times for my editor.

Long ago I started coding in gedit. Since then the language has only gotten faster, and I need less code because the major libs do more of my work.

Also, my computer is about 8 years old; my laptop until recently was over 10 years old, and my current one is about 3 or 4 (a T420 and a T480s).

Ruby is not native, but it's well portable. My point is that this 'boring' work style is here; people just choose not to use it.


> I mostly do Ruby and code in Atom, or Sublime before that. I don't have noticeable startup times for my editor.

That's interesting, because there have been articles that indicate that Atom has not only comparatively worse startup times (which may or may not matter to people), but also really bad typing latency: https://pavelfatin.com/typing-with-pleasure/

In particular, this image gets the point across well: https://pavelfatin.com/images/typing/editor-latency-windows-...

> Ruby is not native, but it's well portable. My point is that this 'boring' work style is here; people just choose not to use it.

This is a good point, though! As far as I know, plenty of people still use Ruby (typically on Rails), or also other "batteries included" solutions like Python and Django pretty successfully.

That said, most of these languages have pretty severe limitations in regards to their performance and resource usage: https://jaxenter.com/energy-efficient-programming-languages-...

Some lovely benchmarks seem to confirm that, in relatively real world conditions: https://www.techempower.com/benchmarks/

Admittedly, that doesn't matter for all projects, but why couldn't we have the ease of use of Python with the performance of Rust and the developer experience of Rails/Django?


> [XYZ] basically demand that I do, if I want their other shiny features

well, they aren't doing the demanding, if you're the one who wants those shiny features!


> well, they aren't doing the demanding, if you're the one who wants those shiny features!

That's just the thing: they use all of these browser-based UI technologies, these slower scripting languages, and other ways to ship software in a reasonable time, so that I could have the actual fancy features that pertain to the logic...

...except that I'd be far happier if they hadn't cut those corners and I had a fully native app using the OS UI frameworks and so on. Of course, that isn't possible if I want the software in the next few years rather than a decade from now, or at a much higher price (at least that's the prevailing argument).

In my eyes, the problem is the coupling between the desirable bits (what the software actually does) and the undesirable bits (mostly how it does it). But it's all about tradeoffs, similar to how we often choose to ship code that isn't thoroughly tested just to meet some deadline.


Those technologies that make the IDE slow are the ones that improved the productivity of the developers of the IDE and let them deliver those features in a fraction of the time they would have needed 20 years ago. So their productivity improved. The productivity of their customers, I don't know. I'm not one of them and I can't judge. I guess it's a matter of the tradeoffs you wrote about.


IDEs 20 years ago were not vastly insufficient and there were many “RAD” (rapid application development) tools for those that leaned towards high level languages. Some were abominable like Visual Basic, but many of the web platforms used today are equally poor performers.

The modern GUI designers in IDEs are notably better, but many shops don’t take advantage of them due to slow performance (e.g. storyboards in Xcode) and issues with multiple developers modifying the resource files simultaneously.

Refactoring and static analysis tools are improved as well, but none of these improvements afford orders of magnitude in efficiency.

The biggest gains in efficiency come from the extended libraries, open source code, ORMs, and the fact that most of what is coded has already been built by someone into a framework or library.

Software development no longer requires intimate knowledge of the code and how it works. Many modern apps are built by plugging together libraries that the developers do not understand and are unable to fix when issues arise. The default now is to copy/paste from Stack Overflow, or to submit an issue to the library maintainer on GitHub and pray that it gets fixed; or to find another library and swap that in. Software development (especially web development) has become a more decentralized effort, usually resulting in a mishmash of layers and dependencies whose inefficiencies are allowed by the amazing hardware. Unfortunately, performance and user experience are sacrificed to the interests of “efficiency” and the latest language/platform fad of the year.


Ruby on Rails? Edit: for web apps


Ruby on Rails is pretty cool and nice to work with, but in my comparison it'd probably be somewhere next to Python as far as performance is concerned.

Oh, also, for a while I ran GitLab, which is written in Ruby, and I still run OpenProject, which also is. Both of those underperformed: they wanted unreasonable amounts of memory whilst also running slowly under load.

I've yet to see a Rails codebase that can actually compete with the likes of Java/.NET/Go; projects like Gitea are snappy and use about 1/10th of the RAM. Then again, nobody really picks languages/tech stacks for their performance outside of a few industries/domains where that is indeed necessary (HFT?).

Most of the time people just choose whatever is the easiest to work with, has the best integrations/frameworks/libraries and will let them ship features in a timely manner. Now, upgrading old Rails projects, though, is something that I cannot recommend doing. Very much not a fun experience.


But you also had completely different apps for every operating system, file formats were completely incompatible, you needed to schlep those files around on disks to get them between multiple computers, those disks were unreliable, and heaven help you if the power went out in the middle of saving, lest that file be lost for good.


And the "async programming" that is advocated everywhere is still the same as Windows 3.1 cooperative multitasking, just with better branding.


Depends on what you mean by async programming. If you literally mean async and await as available in Haskell, Rust, C#, JS, Python etc. Then no that's definitely not the same thing as windows 3.1's cooperative multitasking.


Async and await are just syntactic sugar that makes you more addicted to it.


The syntactic sugar is the entire point. If you don't have it it just sucks as a technology.
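To make the "sugar" concrete, here's a minimal Python sketch (the `Suspend` awaitable and `fetch` are invented for illustration): an `async` function compiles down to a coroutine object you can step by hand with `send()`, which is exactly what an event loop does for you behind the `await` keyword.

```python
class Suspend:
    """A minimal awaitable: yields control once, like a pending I/O."""
    def __await__(self):
        yield  # the suspension point that `await` compiles down to

async def fetch():
    await fetch_pause()
    return 42

def fetch_pause():
    return Suspend()

# The "desugared" view: step the coroutine manually with send();
# an event loop normally does this on your behalf.
coro = fetch()
coro.send(None)              # run until the first suspension
try:
    coro.send(None)          # resume; the coroutine finishes
    result = None
except StopIteration as stop:
    result = stop.value      # the coroutine's return value

print(result)  # → 42
```

Without the sugar you would be writing the `send()`/`StopIteration` plumbing (or callback chains) yourself at every suspension point, which is the "it just sucks as a technology" part.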


Sometimes green threads require no syntactic features at all, e.g. Lua coroutines. You yield() through the stack and resume() back when “done”.

  local data = readfile("file1")
  writefile("file2", data)
The above example may have async/await-ed two times without any sugar. I believe that’s exactly what coop-mt code would look like (but at a lower level, using the hardware stack instead of heap-based frames). Tbf I never did that in Windows 3.x, so am I missing something?
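The same yield/resume shape can be sketched with Python generators (the file names and the in-memory "filesystem" dict are invented for illustration): the function suspends mid-body at `yield` and a driver resumes it with the "I/O result", just like Lua's coroutine.yield()/resume() pair.

```python
def copy(files):
    # `yield` plays the role of Lua's coroutine.yield(): the function
    # suspends mid-body and is later resumed with the "I/O result".
    data = yield "file1"        # the readfile() suspends here
    files["file2"] = data       # the writefile() completes the task

files = {"file1": "hello"}
task = copy(files)
name = next(task)               # task suspends, asking us to read `name`
try:
    task.send(files[name])      # resume() with the file contents
except StopIteration:
    pass                        # the coroutine ran to completion

print(files["file2"])  # → hello
```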


And structured programming is just sugar over GOTOs.

Polymorphism is just sugar over type inspection and branching.

Recursion-free function calls are just sugar over variable renaming and text movement.


Correct, but that misses the point, which is that async programming is NOT sugar over the more powerful approach of preemptive multitasking (introduced in Windows sometime after version 3.1).
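One way to see that difference concretely (a small asyncio sketch with invented task names): under cooperative scheduling, a task that never reaches an `await` starves its siblings until it returns, whereas a preemptive scheduler would interrupt it regardless.

```python
import asyncio

log = []

async def polite():
    for i in range(2):
        log.append(f"polite {i}")
        await asyncio.sleep(0)   # a voluntary yield point

async def hog():
    # Never awaits: under cooperative scheduling nothing else can
    # run until this coroutine returns. A preemptive thread would
    # be interrupted by the OS scheduler regardless.
    for i in range(2):
        log.append(f"hog {i}")

async def main():
    await asyncio.gather(polite(), hog())

asyncio.run(main())
print(log)  # hog's entries are contiguous: it was never preempted
```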


Assuming that we know how promises and syscall-triggered context switches work, what’s the important difference?


It's the Visual Basic event loop from the Windows 3.1 / 95 times. One thread doing all the job with callbacks for events. Events were almost only UI events back then.
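That model can be sketched in a few lines (a toy sketch, not any real Win16 or VB API): one thread, one queue, and callbacks that run to completion.

```python
from collections import deque

class EventLoop:
    """A toy single-threaded loop in the Windows 3.1 / VB style:
    one queue, callbacks run to completion, and a callback that
    never returns freezes the entire UI."""
    def __init__(self):
        self.queue = deque()

    def post(self, callback, *args):
        self.queue.append((callback, args))

    def run(self):
        while self.queue:
            callback, args = self.queue.popleft()
            callback(*args)  # runs to completion; no preemption

loop = EventLoop()
clicks = []
loop.post(clicks.append, "button1")   # e.g. a UI click event
loop.post(clicks.append, "button2")
loop.run()
print(clicks)  # → ['button1', 'button2']
```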


> Your average web based word processor with cloud storage performs about the same as Microsoft Word did on Windows 3.1 on a 486 saving onto a floppy.

It is now trivial to embed high definition photographs in a document. There is a figure-creation tool right there in the application. I can make notes and comments and mark up a document. I have an automatic and recoverable record of the history of my document. I can collaboratively edit that document with people from across the world. I have spellcheck that works across dozens (perhaps hundreds) of languages.

In the sense that we have a WYSIWYG editor whose primary interaction is a keyboard typing letters - sure. But I wouldn't describe this as "about the same."


> The software we use is 1000 times more complex than it was 20 years ago, leading to performance that really has not improved much, and a lot of functional stagnation.

Performance has improved dramatically, but latency went up too.


I don't think I've ever had a computer with lower latency in any application than my M1 Pro MacBook. Sure, it's 20 years of hardware innovation, but at least someone is doing something right.


Keyboard button press to screen response is ~100ms whereas an Apple IIe was 35ms.


Apple had the iPad Pro pencil latency down to 30ms in 2017: https://danluu.com/input-lag/


How'd you measure that?


Most measurements were taken with the 240fps camera (4.167 ms resolution) in the iPhone SE. Devices with response times below 40 ms were re-measured with a 1000fps camera (1 ms resolution), the Sony RX100 V in PAL mode.


Recently I opened a Full HD video in a web browser, connected my laptop to a TV using an HDMI cable, moved the browser window over to the TV screen, and watched the video. And it just worked, with sound and all. I didn't have to download or configure anything, neither on the computer nor on the TV. That would have been utterly unbelievable 20 years ago, bordering on magic (especially the no-download/no-configuration part).


I'm pretty sure XP let you hot plug displays?

You could simply open windows media player and stream video 20 years ago. The resolution would have been the main impressive thing, along with how much money you spent on that high speed internet connection. But far from utterly unbelievable. I'm not sure anyone would have cared which program you used.


In theory, maybe. In practice? Something would have broken, or you'd have to fiddle with some settings, or you'd have to download the video (somehow) before you could play it in a standalone program – the experience would have been much less streamlined, at least. Also, which single cable would you have used to carry video and audio, and which could be plugged into a laptop and a TV?


> Something would have broken, or you'd have to fiddle with some settings

I don't remember any trouble with screens or moving windows around or doing streaming video around that time.

> Also, which single cable would you have used to carry video and audio, and which could be plugged into a laptop and a TV?

Oh, I didn't realize you were suggesting the audio moved to the TV. Well HDMI came out a couple years later and didn't really blow minds.


> Well HDMI came out a couple years later and didn't really blow minds.

Of course not. All of those improvements were, taken on their own, iterative and evolutionary – as nearly all technological development is. But taken together, they make a big difference.


Except I'm saying all those other things already worked fine. Your scenario is an incremental change between then and today. If you want that super impressive effect you need to compare against longer ago than 2002. If you do that in 1992, wow.

You could still do a 20 year comparison if you wanted, 1992 vs. 2012. Since the experience has barely changed in the last 10 years.


An ActiveX control or Netscape plugin accomplished this just fine 20+ years ago.



