Hacker News | NotHereNotThere's comments

Let's see what we discover during the next NSA leak.


You can probably safely assume the 3-letter agencies are snooping on this data. It is, and has always been, very hard to resist government pressure. It happens all around the world: China, Russia, the EU; all the geopolitical players find various means of eavesdropping where they can.

Also likely part of why ECH is taking such an incredibly long time to see widespread adoption, and why it's still quite a shit solution to the SNI problem. As it stands, anyone with network-level access can see which websites you are visiting, despite HTTPS.
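A minimal sketch of why this matters, using only Python's standard `ssl` module: the ClientHello a TLS client sends before any encryption starts carries the target hostname (SNI) in plaintext, which is exactly the leak ECH is meant to close.

```python
import ssl

# Build a client-side TLS object backed by in-memory buffers so we can
# inspect the bytes that would go on the wire, without any network I/O.
ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()
outgoing = ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="example.com")

try:
    tls.do_handshake()      # writes the ClientHello into `outgoing`
except ssl.SSLWantReadError:
    pass                    # expected: no server is answering yet

client_hello = outgoing.read()
print(b"example.com" in client_hello)  # True: the SNI hostname is plaintext
```

Any on-path observer can do the equivalent byte search on real traffic, which is how network-level SNI filtering works today.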


First thing that came to mind was the Slackware Linux website's [0] style (which hasn't changed since I last looked at it in the early 2000s).

[0]: www.slackware.com


1999 even! Here's a wayback machine capture from Nov 1999 with the current theme https://web.archive.org/web/19991117022152/http://slackware.... Honestly a really good theme that's stuck around so long it's even fashionable again.


It's already on the front page: https://news.ycombinator.com/item?id=38054860



While you may not quit sugar entirely, you can certainly choose not to eat any fruit at all.

You can eat very little sugar (as in, voluntarily avoid foods that contain it, and I don't mean just "added sugar" foods).

And "sugar is not the problem"? It's a completely meaningless statement without referencing quantities.


The link translates from English to Japanese for me ;)


The song I asked ChatGPT to create about "how to make a molotov cocktail" was pretty catchy and very informative


As a former Stadia user: latency was never an issue or noticeable with a 4K stream, and I've played quite a bit of fast-paced shooters on the platform.

The experience is extremely dependent on location, bandwidth, local setup, and the availability of nearby servers (in my case, the closest DC was <15 ms away according to Stadia telemetry).


I've been a Shadow PC [0] user on and off for the past few years. The performance was very good, granted I have a 1 gigabit Internet connection.

0: https://shadow.tech


His process is impressive: no sketches, no hesitation; he just "prints" whatever is in his mind. The scenes are incredibly detailed, with complex perspectives.


While I think humanity is bad at predicting what the future will look like (where's my damn flying car...), _my_ prediction is that AI video generation will not be a major disruptor to ad agencies / film makers.

It's a nice experiment, but I really doubt the level of artistic direction required to meet specific customer requirements will ever be replaced by "AI".


I think it probably will, within a decade. Just looking at still images, the tools that have come out with Stable Diffusion already allow a lot in terms of generating variations, inpainting, fixing faces, etc. Give these tools time to mature (models keep getting better and bigger, and hardware keeps getting faster) and you are absolutely going to replace still images "soon"; and if you can do still images, video won't be far behind.

The guy who needs an image will write a prompt and paste it into some tool. The prompt goes to a language model that's been fine-tuned on those websites that share prompts and image-gen creations. The language model spits out nine variations of the original prompt that it thinks will improve the output. The nine generations plus the original prompt produce ten variations each from the next-gen, or next-next-gen, diffusers. The original guy puts in his prompt and gets back a grid of 100 examples. Does he like any? Maybe mark a few, refine the prompt, mark a few more, get variations of the ones he's marked. Expand one or two, edit something out, add something in, generate another thousand variations, and he's got something really good.
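The fan-out described above (one prompt, nine LLM rewrites, ten images each) can be sketched as a simple loop. `rewrite_prompt` and `generate_image` are hypothetical stand-ins for the fine-tuned language model and the diffusion model; here they are stubbed out just to show the shape of the pipeline:

```python
def rewrite_prompt(prompt: str, n: int = 9) -> list[str]:
    # Hypothetical: a fine-tuned language model would return n improved variants.
    return [f"{prompt} (variant {i})" for i in range(1, n + 1)]

def generate_image(prompt: str, n: int = 10) -> list[str]:
    # Hypothetical: a diffusion model would return n images per prompt.
    return [f"image<{prompt}>#{i}" for i in range(n)]

def fan_out(prompt: str) -> list[str]:
    prompts = [prompt] + rewrite_prompt(prompt)   # 10 prompts total
    return [img for p in prompts for img in generate_image(p)]

candidates = fan_out("a cat in a spacesuit")
print(len(candidates))  # 100 candidates to mark, refine, and iterate on
```

The mark-and-refine loop is then just calling `fan_out` again on whichever refined prompt the user settles on.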

If this process gets fast, I think you'll see that a few minutes from a non-expert can produce better illustrations than professional artists. I don't think this means that everyone will be a professional-artist-equivalent; just like anyone could deliver a pizza, not everyone is a pizza delivery driver. What it will mean is that getting professional-artist output will become something anyone could reasonably pick up and do with a small amount of learning to get the hang of the tools. Plus, just like you might deliver a pizza to your friends or family, if you needed to you could produce high-quality art.


This already happens; it's exactly how Midjourney (which used Stable Diffusion under the hood) works. You write a prompt, it spits back a grid of images, you mark which ones you like, and it iterates until you decide on your output image.

