
What are you imagining the heyday of email to have been like? It sounds like you're describing AIM in the '90s, or SMS in the early '00s. Before that there was IRC if you wanted realtime responses. Email has always been async.

Mind you, nobody has ever introduced me to a friend via social media.

Meanwhile, SMS is still going strong. Who needs an app?


There are several advantages of apps over SMS; cost and being able to access messages from multiple devices are the two main ones I can think of.


Yeah, but this enables things like e2e encryption without losing a search index.
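
A toy sketch of what I mean, assuming the app decrypts messages locally and keeps its own inverted index over the plaintext (the LocalSearchIndex name is made up, not any real messaging API):

    struct LocalSearchIndex {
        private var postings: [String: Set<Int>] = [:]   // search term -> message ids

        // Called after a message is decrypted on the device.
        mutating func add(messageID: Int, plaintext: String) {
            for term in plaintext.lowercased().split(separator: " ") {
                postings[String(term), default: []].insert(messageID)
            }
        }

        func search(_ term: String) -> Set<Int> {
            return postings[term.lowercased()] ?? []
        }
    }

    var index = LocalSearchIndex()
    index.add(messageID: 1, plaintext: "Dinner next Friday")
    print(index.search("friday"))   // [1] -- the server only ever saw ciphertext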


If it's a Chrome extension, there's nothing stopping a court from forcing Google to push a modified version to a particular user.

It's no different from having your JavaScript decryption routines served by a website.


I have found Chromium to be a nice alternative for keeping local browser apps working: it lets you freeze the browser at a fixed version instead of taking automatic upgrades, which have regularly broken the web-based line-of-business applications I've been around the past few years.


Sure, but presumably if you're worried about it you can sideload your own version.

Nonetheless you're absolutely correct; I would prefer Firefox to Chrome.


> karsidhins

Unrelated, but it's astounding how well that name communicates despite being terribly spelled.


Also unrelated to OP, but very interesting nevertheless: https://www.mrc-cbu.cam.ac.uk/people/matt.davis/cmabridge/

The jumbled-up letters and words are readable, but the idea needs to be well known to the reader (e.g., the Kardashians are so well known that it's easy to convey the idea even with the name mangled).
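
For anyone curious what that transformation looks like, here's a toy sketch (my own illustration, not the Cambridge demo's code): keep the first and last letter of each word and shuffle the rest.

    func scrambleInterior(_ text: String) -> String {
        return text.split(separator: " ").map { word -> String in
            guard word.count > 3 else { return String(word) }
            let chars = Array(word)
            let middle = chars[1..<chars.count - 1].shuffled()
            return String(chars.first!) + String(middle) + String(chars.last!)
        }.joined(separator: " ")
    }

    print(scrambleInterior("randomising letters in the middle of words"))
    // e.g. "rsdnoiaming ltteers in the mddile of wrods"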


"This reminds me of my PhD at Nottingham University (1976), which showed that randomising letters in the middle of words had little or no effect on the ability of skilled readers to understand the text. Indeed one rapid reader noticed only four or five errors in an A4 page of muddled text."

Huh. I have no problem grokking the garbled text, but at the same time I'm pretty much the complete opposite of the "skilled readers" described in that quote.

Even before I read the first sentence, my mind does a 'lexing first-pass' and looks at the 'shape' of the text. If a word is misspelled, I'll identify it well before I even begin to digest the content. Interestingly, I'm conversational (barely) in Spanish and the trait doesn't carry over. My spoken German is an insult to the language, but I can read it well enough to have the same phenomenon occur in German as well.


This is why you finally gave up on the Verge? What would you call the memo: a cool-headed, rational argument? I don't even think the engineer understood what he was arguing for; the misogyny isn't hard to see, regardless of the intentions.

The weirdest and most uncomfortable part is that he cares enough to alienate his coworkers, but not enough to encourage productive discussion. It seems like bitterness and resentment towards women.

While I understand why you might not like the phrasing, it says a lot about you that this is the one, out of thousands of Verge factual errors, that you choose to criticize.


I stopped reading it a while ago. Too politicized and opinionated for my taste.


Thankfully the vast majority of work has nothing to do with innovation.

I'd love to hear your argument for "progress", whatever that is.


I meant innovation and progress in the sense of the sharing of good ideas, and solving shared problems. If a remote job only involves some kind of repeated task, it could probably just be automated.


Thank god most code requires no innovation at all, and thank god I am the one automating others out of a job!

Unless you're working on research, it's hard to buy that losing physical proximity reduces throughput; it just adds latency.


Innovation is relative. Sure, innovation on a global or market scale might not be common or necessary, but within the context of a single company the definition is broader.


True, but I am highly skeptical most people will have issues producing this level of innovation from a coffee shop.


Agreed, but it still had the Secure Enclave with Touch ID.


Well, multicore isn't going away; concurrency (if not parallelism) is necessary for responsive UIs.

Did you have any specific improvements in mind? The big requirements for distributed programming are mostly protobuf serialization and TCP/HTTP; Swift has both already.


I was thinking more in terms of first class support for distributed actors, with global addressing, message routing, etc. Not designing for multicore performance first, the way Pony did.
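
To make that a bit more concrete, here's a rough sketch of the kind of primitives I mean, in plain Swift with Dispatch; ActorAddress, Mailbox, and Router are made-up names, not an existing Swift or Pony API.

    import Dispatch

    struct ActorAddress: Hashable {
        let node: String   // which machine the actor lives on
        let id: UInt64     // unique id within that node
    }

    final class Mailbox<Message> {
        private let queue = DispatchQueue(label: "mailbox")
        private let handler: (Message) -> Void

        init(handler: @escaping (Message) -> Void) { self.handler = handler }

        // Messages are processed one at a time on the actor's private queue.
        func send(_ message: Message) {
            queue.async { self.handler(message) }
        }
    }

    // A toy router: resolves an address to a local mailbox; a real runtime would
    // forward messages addressed to remote nodes over the network instead.
    final class Router<Message> {
        private var local: [ActorAddress: Mailbox<Message>] = [:]

        func register(_ mailbox: Mailbox<Message>, at address: ActorAddress) {
            local[address] = mailbox
        }

        func send(_ message: Message, to address: ActorAddress) {
            local[address]?.send(message)
        }
    }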


How does ARC hold up for long-lived servers? Are the leaks manageable?


What leaks? You only get leaks if you have cycles that you forgot about, or if you don't close non-memory resources that you keep referencing.

Which is not that different than with a GC.


No, GCs collect reference cycles, whereas under ARC a (strong) reference cycle in an operation repeated many times in a long-running server or something adds up.

Worse, sometimes, you don't even know if you're creating a leak. For example, I recently had to call, given two gesture recognisers a and b:

a.require(toFail: b)

A is a long-lived object. B goes away when the current view controller is popped. But not if A keeps a strong reference to it. Does it? Probably no one without access to the source code of UIGestureRecognizer knows!
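
For what it's worth, the ARC behaviour the question comes down to, as a toy sketch (a made-up Node class, nothing to do with UIKit): a strong back-reference keeps both objects alive forever, a weak one doesn't.

    final class Node {
        var next: Node?        // strong: two Nodes pointing at each other never deallocate
        weak var prev: Node?   // weak: breaks the cycle
        deinit { print("deallocated") }
    }

    var a: Node? = Node()
    var b: Node? = Node()
    a?.next = b       // strong forward reference
    b?.prev = a       // weak back reference, so no cycle
    a = nil
    b = nil           // both deinits run; had prev been strong, neither would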


Exactly. How is the profiling experience?


Why should ARC imply leaks?


It doesn't.


I would expect it to be more reliable than a GC, as its performance and memory usage are more consistent.


That can cut both ways.

1. Swift's ARC uses atomic reference counting underneath, which is normally very expensive, and relies on compiler optimization to remove as many reference-count operations as possible. This is usually pretty effective, but there are situations where it's not possible.

2. Reference counting allows for arbitrarily long pauses as the result of cascading deletions (i.e. where object deletions trigger other object deletions). You can work around that by deferring deletions (a rough sketch of that workaround is below), but then you don't have any guarantees about the timeliness of deletions anymore. As far as I know, this is still an open issue for Swift.

3. Without a compaction scheme, you risk memory fragmentation. While this is a rare occurrence in practice, there are workloads where it can happen.

4. Reference counting cannot reclaim cycles without a mechanism for detecting cycles; such a cycle detector (e.g. trial deletion) poses pretty much the same challenges as tracing GC.

Obviously, tracing garbage collectors pose their own challenges; my point is merely that whether performance and memory usage are more consistent has to be judged on a case by case basis.
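
To make the "deferring deletions" workaround from point 2 concrete, here's a toy sketch (the names are mine, not any standard API): detach a dead tree's children onto a work list and free only a bounded number of nodes per tick.

    final class TreeNode {
        var children: [TreeNode] = []
    }

    final class DeferredReaper {
        private var worklist: [TreeNode] = []

        // Instead of dropping the last reference to a huge tree all at once,
        // hand it to the reaper and let it be freed in small slices.
        func schedule(_ root: TreeNode) { worklist.append(root) }

        // Call periodically (per frame, per event-loop turn, ...); frees at most
        // `budget` nodes each time, bounding the pause at the cost of keeping
        // garbage alive a little longer.
        func step(budget: Int) {
            var freed = 0
            while freed < budget, let node = worklist.popLast() {
                worklist.append(contentsOf: node.children)
                node.children.removeAll()   // node no longer owns anything; it dies here, alone
                freed += 1
            }
        }
    }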


1. You're right, it can be slow! But it's usually still consistent and that's useful.

2. Hmm, cascading deletions. Is that really a big problem in practice? I'm skeptical because it seems like that would affect C and C++ programs too, but you rarely hear anyone mention it. Maybe Swift tends to use more objects whereas C++ programmers tend to be better at packing stuff together?

3. Fragmentation -- that's true, but again, it affects C and C++ too. I guess for long-running C/C++ programs you're likely to manage memory pools directly. I don't know if that's possible in Swift.

4. Cycles -- weak references work fine for this. I have never had trouble with cyclic garbage in Objective-C. (I mean, I've had leaks, but they're always easy to spot with a leak detector and easy to fix with weak references.)

Overall, it seems to me that reference-counting adds a small but consistent performance penalty, and otherwise should have comparable runtime behavior to malloc/free in C, which is known to work pretty well when used correctly.

Note that Apple got smooth and reliable 60fps performance on the original iPhone, which was extremely resource-constrained by modern standards, using Objective-C, which isn't usually considered a fast language!

On the GC side, it seems like you typically get bursty, unpredictable performance, in both time and memory. Modern GCs work very hard to keep collection pauses as short as possible, but almost inevitably that means keeping garbage around for longer, which means using a lot of memory.


1. I think you may not realize what state of the art tracing GCs can accomplish. IBM's Metronome has pause times down to hundreds of microseconds.

2. It only takes freeing a tree with a few thousand nodes for it to become an issue. It happens in C++, too (heck, there have been cases where chained destructor calls overflowed the stack [1]). The reason you don't hear more about it is that pause times just aren't that big a deal for most applications. In forum debates, people always discuss AAA video games and OS kernels and such, but in practice, only a minority of programmers actually have to deal with anything even approaching hard real-time requirements. Generally, most applications optimize for throughput rather than pause times.

3. Yes, and it can be a problem for C/C++, too. It's rare, but not non-existent. Note that pools can actually make fragmentation worse for long-running processes.

4. Weak references work if you get them right. But for long-running processes, even a single error can accumulate over time.

> On the GC side, it seems like you typically get bursty, unpredictable performance, in both time and memory. Modern GCs work very hard to keep collection pauses as short as possible, but almost inevitably that means keeping garbage around for longer, which means using a lot of memory.

This ... is not at all how garbage collectors work, especially where real time is concerned. Not even remotely. I recommend "The Garbage Collection Handbook" (the 2011 edition) for a better overview. And ultra-low pause times are generally more of an opt-in feature, because they're rarely needed.

[1] E.g. Herb Sutter's talk at CppCon 2016: https://www.youtube.com/watch?v=JfmTagWcqoE&t=16m23s


> > almost inevitably that means keeping garbage around for longer, which means using a lot of memory.

> This ... is not at all how garbage collectors work, especially where real time is concerned.

Hmm, I'm certainly no GC expert, but is it really not the case that GC tends to be memory-hungry? Not exotic academic systems, but the languages people use day-to-day.

Most of my experience with GCs is in languages like Java and C#. Java in particular can be very fast but always seems to be memory-hungry, using like 4x the memory you'd need in C++. I haven't spent a huge amount of time fine-tuning the GC settings (it seems like Oracle is working to simplify that -- good!) but the defaults seem to assume at least 2x memory usage as elbow room for the GC.

That's on the server. On mobile, I've worked with iOS and Android, and iOS undeniably gets the same work done with much less memory. Flagship Android phones have 4GB of memory and need it, whereas Apple hasn't felt the need to bump up memory so quickly even after going 64-bit across the board.

The last I heard about real-time GC, with guaranteed space and time bounds, it sounded like it was theoretically solved, but not used much in practice because it was too slow. That was a number of years ago though. Has that situation changed? Are there prominent languages or systems with real-time GC?


Looking up IBM's Metronome led me to the Jikes RVM (https://en.wikipedia.org/wiki/Jikes_RVM), which sounds so cool that I wonder why it isn't being used everywhere?

> The PowerPC (or ppc) and IA-32 (or Intel x86, 32-bit) instruction set architectures are supported by Jikes RVM.

Ah, no ARM and no x64, that'd be it.

What's keeping this kind of GC technology back from the mainstream?


> Jikes RVM

The Jikes RVM is designed for research, not production. It's pretty impressive, but (inter alia) does not implement all of Java and does not support as many platforms.

> What's keeping this kind of GC technology back from the mainstream?

The fact that successful commercialization is possible: the GC tech you see in Metronome and C4 is seriously non-trivial and not easy to reproduce unless you spend money on it, and it's also technology that businesses are willing to pay for.

At the same time, only a minority of open source use cases really require this kind of hard real-time GC, so there's little pressure to create an open source equivalent. Shenandoah is the one open source GC that does try to compete in this space, and it is trading away some performance for getting ultra-low pause times.

I'll add that this is difficult only because of concurrency and arbitrary sharing of data between threads. If you have one heap per thread, then it becomes much, much easier (and is a solved problem if soft real-time is all you need).


One note: in video games, allocations are a major source of slowdown; don't allocate in your inner loop! Use object pools and arena allocators.
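
A minimal sketch of the pool idea (the names here are mine, not from any particular engine): preallocate once, recycle in the hot loop, and only hit the allocator when the pool runs dry.

    final class Pool<T> {
        private var free: [T] = []
        private let make: () -> T

        init(initialCount: Int, make: @escaping () -> T) {
            self.make = make
            free.reserveCapacity(initialCount)
            for _ in 0..<initialCount { free.append(make()) }
        }

        // Reuse a spare object if one exists; allocate only as a last resort.
        func acquire() -> T { return free.popLast() ?? make() }

        // Hand the object back instead of letting it become garbage.
        func release(_ object: T) { free.append(object) }
    }

    // Usage: per-frame particles come out of the pool, not the allocator.
    final class Particle { var x = 0.0, y = 0.0 }
    let particles = Pool(initialCount: 1024) { Particle() }
    let p = particles.acquire()
    // ... simulate one frame ...
    particles.release(p)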


This is because naive can-do-it-all allocations in C/C++ can be expensive, not because allocations are inherently expensive. In C/C++, you have:

1. A call to a library function that typically cannot be inlined.

2. Analysis of the object size in order to pick the right pool or a more general allocator to allocate from.

3. A traditional malloc() implementation also needs to use a global lock; thread-local allocators are comparatively rare.

4. For large objects, a first-fit/best-fit algorithm with potentially high cost has to be used.

Modern GCs typically use a bump allocator, which is an arena allocator in all but name. In OCaml or on the JVM, an allocation is a pointer increment and comparison.
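
For illustration, a toy bump allocator in Swift (a sketch of the concept, not the actual JVM or OCaml implementation):

    final class BumpArena {
        private let base: UnsafeMutableRawPointer
        private let capacity: Int
        private var offset = 0

        init(capacity: Int) {
            self.capacity = capacity
            self.base = UnsafeMutableRawPointer.allocate(byteCount: capacity, alignment: 16)
        }

        deinit { base.deallocate() }

        // The whole allocation path: round up for alignment, compare, bump the offset.
        func allocate(_ size: Int) -> UnsafeMutableRawPointer? {
            let aligned = (size + 15) & ~15
            guard offset + aligned <= capacity else { return nil }   // a real GC would collect here
            let ptr = base + offset
            offset += aligned
            return ptr
        }
    }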

Even without bump allocators, it's easy for a GC implementation to automatically turn most allocations into pool allocations that can be inlined.

Also: much as people love to talk about video games, video games with such strict performance requirements are only a part of the video game industry, and a tiny part of the software industry overall.


> In OCaml or on the JVM, an allocation is a pointer increment and comparison.

That's true, but if (hopefully rarely) the object turns out to be needed later, it has to be copied to another heap, and that takes time and memory. Pointers need to be redirected and that takes a little work too.

Bump allocators are definitely a huge win, as good as anything you can do in C/C++ and much more convenient for the programmer, but they're not a completely free lunch.


News will get to me. I need to proactively seek entertainment. I would rather pay for entertainment than news.

Though of course I am a public radio funder.


ML is a real thing, unlike AI.

No clue where the VR came from.


He's trying to synergize his core competencies to get an MVP using his AR/VR play.

