
Several parts of your comment resonate with me:

>> There has to be a way for website creators to sandbox content which comes from third parties.

The whole third-party thing came about because advertisers needed to establish both ad distribution and trust (they rightly don't want to pay for ads unless the ads are actually shown, and so on).

>> It's problematic that including content from elsewhere in your page (like in an iframe) would grant it "first class" behavior with equivalent privileges to one's page.

I agree; this one is on the browser devs and the standards authors. Safety by default is the way to go, but then people have nifty ideas that wouldn't be possible under those limitations.

>> I know it's opening a can of worms, but why not implement a way to show untrusted content?

That's exactly what the browser is supposed to be in the first place.

>> Why can't I escape from it easily? The back button doesn't work.

Because browser devs decided there was some reason the content should be able to alter or override the behavior of the viewer. I can think of no use case for this that is legitimate from the user's perspective. The list of features like this is long and ridiculous, and they keep doubling down on it: first we had cookies, and when that wasn't enough, we got a whole client-side database...



I guess what I find frustrating is that it's the same class of problem as Cap'n Crunch's whistle: in-band control. I think we're getting to the point where it has to be sandboxes all the way down (running things in sandboxes, inside VMs, with memory protection, etc.), and it's still not enough. This class of problem must be extremely difficult to solve. How do you run Turing-complete code which might be hostile? All these layers upon layers of sophisticated tools, and to what end? To create a merger of TV and magazine advertising. But one could always turn the page or change the channel.


> How do you run Turing-complete code which might be hostile?

That's the key problem that (almost) nobody wants to talk about. We've been trying to solve the decision problem for a long time, and we already know that even relatively simple questions are provably undecidable[1]. Any real program will be much more complex[2]. An unknown program can generate any output it wants, and we cannot know what it will do without running it.
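The undecidability argument above can be sketched concretely. This is the classic diagonalization: assume someone hands you a general `halts(func, arg)` decider, and you can build a program it must misjudge. The names here (`make_contrarian`, `fake_halts`) are illustrative, not from any real library.

```python
# Sketch: why a general halts() decider cannot exist (diagonalization).
# Suppose, for contradiction, we are given halts(func, arg) -> bool
# that correctly predicts whether func(arg) terminates.
def make_contrarian(halts):
    def contrarian(f):
        if halts(f, f):      # if the decider says f(f) halts...
            while True:      # ...loop forever
                pass
        return "halted"      # otherwise, halt immediately
    return contrarian

# Feeding contrarian to itself is the contradiction: no answer
# halts(contrarian, contrarian) gives can be correct, so no such
# decider exists. With a (wrong) stub decider we can at least run it:
fake_halts = lambda f, a: False   # hypothetical, always-wrong stub
contrarian = make_contrarian(fake_halts)
print(contrarian(contrarian))     # the stub said "doesn't halt", so it halts
```

The same diagonal trick underlies Rice's theorem: any nontrivial question about what a program will do is undecidable in general.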

The only solution is to remove output methods. If a program can only, say, draw to a framebuffer without the ability to trigger future network activity, the worst it can do is waste CPU and RAM. Allow literally any interface to generate network activity (even indirectly), and people will find ways to tunnel data over that interface.

The original design for the web was (probably) safe. It didn't require anonymous Turing complete code, and provided quite a bit of functionality with declarative markup. It even allowed simple (but still useful) server-side applications with 3270-style forms (again, no code needed). This was wonderfully useful, reasonably safe, and most importantly it was understandable by both humans and machines.

Today's web requires trusting a new set of undecidable software on each page load. We're supposed to trust 3rd parties even though trust is not transitive. We're supposed to accept the risk of running 3rd-party software even though risk is transitive. Barring some miraculous total reversal where browsers revert to pre-JavaScript behavior, this is going to end badly.

[1] https://www.scottaaronson.com/blog/?p=2725

[2] If your program uses more than 7918 Turing machine states, [1] proves that its behavior cannot be analyzed within ZF set theory.


>> The original design for the web was (probably) safe. It didn't require anonymous Turing complete code, and provided quite a bit of functionality with declarative markup. It even allowed simple (but still useful) server-side applications with 3270-style forms (again, no code needed). This was wonderfully useful, reasonably safe, and most importantly it was understandable by both humans and machines.

Thanks for that! So many web devs today can't even comprehend the idea that you can have interactivity without JS. Part of how we got here was the push to offload work from the server onto the client. I once made an Othello game with nothing client-side but an auto-reload after a timeout; everything was in a CGI script on the server.
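For anyone who hasn't seen the pattern: the whole game loop lives server-side, moves are plain links, and the page polls with a `<meta http-equiv="refresh">` tag instead of any script. A minimal sketch of the rendering half (the board representation and move handling here are hypothetical and simplified, not the original game):

```python
#!/usr/bin/env python3
# Sketch: JS-free interactivity in the spirit of the CGI game above.
# Each cell is a link that encodes a move as a query parameter; the
# browser re-fetches the page every 5 seconds via meta refresh.
def render(board, status):
    """Render a board (list of rows; cells are 'X', 'O', or None) as HTML."""
    rows = "".join(
        "<tr>" + "".join(
            f'<td><a href="?move={r},{c}">{cell or "."}</a></td>'
            for c, cell in enumerate(row)
        ) + "</tr>"
        for r, row in enumerate(board)
    )
    return (
        "<html><head>"
        '<meta http-equiv="refresh" content="5">'  # auto-reload, no JS
        "</head><body>"
        f"<p>{status}</p>"
        f"<table>{rows}</table>"
        "</body></html>"
    )

print(render([[None, "X"], ["O", None]], "Your move"))
```

A real CGI script would read `QUERY_STRING` (or use the stdlib `cgi` module), update the stored board, and emit a `Content-Type: text/html` header before the markup; the client needs nothing beyond a forms-era browser.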


You make a lot of good points. I agree, there's no going back. The websites we have today offer too much functionality to ever go back to the way things were. While I like the ability to turn off JavaScript on a page by page basis, and use the web mostly as a library, most people would be completely put off by a "dumb" Web.



