Hacker News | occz's comments

Electric assist helps with the wind.

Or just building some fitness, which in my experience comes automatically when you bike.


The 24-hour wait period is the largest annoyance on this list, but given that adb installs still work, I think this is a list of things I can ultimately live with.

That's not correct - the flow described in the post outlines the requirements to install any apps that haven't had their signature registered with Google.

That means those apps still keep on existing; they're just more of a hassle to install.


> E-bike makers aren't going to volunteer for that; it'd destroy their business.

Arguably, complete bans will be even worse for business.


And I just finished The Rise of Endymion a few days ago. Uncanny indeed.


This can be solved well enough by having the model invoke `--help`.
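A minimal sketch of that idea: run the tool with `--help` and capture the output so it can be placed in the model's context before it tries to use the tool. The command used here (the Python interpreter itself) is just a stand-in for whatever CLI the model is given; nothing about the actual agent setup is assumed.

```python
import subprocess
import sys

def get_help_text(command):
    """Run `command --help` and return whatever it printed."""
    result = subprocess.run([command, "--help"], capture_output=True, text=True)
    # Some tools print help to stderr instead of stdout.
    return result.stdout or result.stderr

# Stand-in CLI: the Python interpreter's own --help output.
help_text = get_help_text(sys.executable)
print(help_text.splitlines()[0])  # first line of the usage message
```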


Public transit in Stockholm has in the last ten years done two things that make for a godlike ride payment UX:

1. Removed the concept of zones - everything is now covered by one single ticket

2. Introduced tap-to-pay support for debit/credit cards

This means that you as a user can always just show up with the overwhelmingly most common way to pay for things in Sweden - by card - tap once, and then you're done. No more actions required. Need to transfer? No problem, the virtual ticket you bought by tapping your card is valid for 75 minutes, no more money will be charged if you tap again within this window.

No fumbling with an app, no awkward QR code scanning, just one tap and go. Peak UX.
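The charging rule described above can be sketched in a few lines. This is not SL's actual system, just an illustration of the logic: a tap starts a 75-minute ticket, and any tap inside that window is covered. The 43 SEK price comes from later in the thread.

```python
TICKET_PRICE_SEK = 43   # single-ticket price mentioned later in the thread
VALIDITY_MINUTES = 75   # transfer window described above

def charge_for_taps(tap_times_minutes):
    """Total charge for a list of tap timestamps, given in minutes."""
    total = 0
    ticket_expires = None
    for t in sorted(tap_times_minutes):
        if ticket_expires is None or t >= ticket_expires:
            total += TICKET_PRICE_SEK          # new ticket
            ticket_expires = t + VALIDITY_MINUTES
        # else: tap falls inside the window, so the ride is already covered
    return total

print(charge_for_taps([0, 40, 70]))  # one ticket covers all three taps -> 43
print(charge_for_taps([0, 80]))      # second tap lands after expiry -> 86
```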


Same with Edinburgh. Except that we also cap daily and weekly fees. So it's £2.20 for a single ticket to anywhere, maxing out at £5.00 per day, and £24.50 per week.

(And if you're regularly travelling more than that, then you can pay for a card that will give you unlimited bus/tram travel for £70/month.)
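The capping scheme described above is easy to sketch; these are the prices from the comment, and the pay-as-you-go logic (daily cap applied first, then the weekly cap) is my assumption about how such caps usually compose.

```python
SINGLE = 2.20       # single ticket, GBP
DAILY_CAP = 5.00
WEEKLY_CAP = 24.50

def week_cost(trips_per_day):
    """Cost of a week of travel; trips_per_day is one trip count per day."""
    total = 0.0
    for trips in trips_per_day:
        total += min(trips * SINGLE, DAILY_CAP)  # apply the daily cap
    return round(min(total, WEEKLY_CAP), 2)      # then the weekly cap

print(week_cost([2, 0, 0, 0, 0, 0, 0]))  # two singles -> 4.40
print(week_cost([3, 3, 3, 3, 3, 0, 0]))  # five capped days -> weekly cap 24.50
```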


I'm a bit jealous of those prices. There are no caps on Stockholm transit when paying with single tickets, and the price of a single ticket is 43 SEK (£3.45). The best deal available for period tickets is the unlimited monthly pass for 1060 SEK (£85).


Easy, you reject it.


As it turns out, you regularly get a COP of >3 from heat pumps, as they don't need to generate the heat; they steal it from somewhere else (outside).
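The arithmetic behind that, with made-up but illustrative prices (the COP of 3 is from the comment; the price figures are assumptions for the example): each kWh of electricity moves roughly 3 kWh of heat, so the effective cost per kWh of heat is the electricity price divided by the COP.

```python
COP = 3.0                 # from the comment above
electricity_price = 0.30  # $/kWh, hypothetical
gas_price = 0.10          # $/kWh of delivered heat, hypothetical

# Effective cost of one kWh of heat delivered by the heat pump.
heat_cost_via_heat_pump = electricity_price / COP

print(round(heat_cost_via_heat_pump, 4))  # 0.1 -> exactly break-even with gas here
```

With these particular numbers the heat pump only breaks even on variable cost, which is the situation the follow-up comment describes: the upside appears when electricity is cheaper relative to gas, or the COP is higher.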


Right, and you simply break even there, so there's not much upside in terms of variable costs unless your electricity is somehow cheaper than mainstream California prices.


You'll be waiting for a long time then, probably. Making codecs is actually a hard problem, the type of thing AI completely falls over on when tasked with it.


Compression is actually a very good use case for neural networks (i.e. don't have an LLM develop a codec, but rather train a neural network to do the compression itself).

It works amazingly well with text compression, for example: https://bellard.org/nncp/
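A toy illustration of why prediction is compression (this is an adaptive order-0 model, far simpler than NNCP's neural predictor, but the principle is the same): a model assigns each next symbol a probability, and the total information content, the sum of -log2(p), is the size in bits an entropy coder would need. A better predictor makes p sharper and the sum smaller.

```python
import math
from collections import Counter

def estimated_bits(text):
    """Bits an entropy coder would need, under an adaptive order-0 model."""
    counts = Counter()
    total_chars = 0
    total_bits = 0.0
    for ch in text:
        # Laplace-smoothed probability of this char under the model so far,
        # assuming a 256-symbol byte alphabet.
        p = (counts[ch] + 1) / (total_chars + 256)
        total_bits += -math.log2(p)
        counts[ch] += 1
        total_chars += 1
    return total_bits

repetitive = "ab" * 100
# Highly predictable text costs far fewer bits than its raw 8 bits/char.
print(estimated_bits(repetitive) < 8 * len(repetitive))  # True
```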


Considering AI is good at predicting things, and that's largely what compression does, I could see machine learning techniques being useful as part of a codec (which is a completely different thing from asking ChatGPT to write you a codec).


Yeah, in the future we might use some sort of learned spatial+temporal representation to compress video, and the same for audio. It's easier to imagine for audio: instead of storing the audio samples, we store text plus some feature vectors, and some model uses them to "render" the audio samples.


It’s not absurd to think that you could send a model of your voice to a receiving party and then have your audio call just essentially be encoded text that gets thrown through the voice generator on the local machine.

AI video could mean that essential elements are preserved (actors?) but other elements are generated locally. Hell, digital doubles for actors could also mean only their movements are transmitted - essentially just sending the mo-cap data. The future is gonna be weird.
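A back-of-the-envelope comparison for the voice case, with made-up but plausible figures (all the numbers below are assumptions for illustration, not measurements): sending text that a local voice model re-renders uses orders of magnitude less bandwidth than a raw audio stream.

```python
RAW_AUDIO_BPS = 64_000  # ~64 kb/s, roughly a G.711-style voice stream
WORDS_PER_MIN = 150     # typical speaking rate
BYTES_PER_WORD = 6      # ~5 letters plus a space

# Bitrate needed to send just the spoken words as text.
text_bps = WORDS_PER_MIN / 60 * BYTES_PER_WORD * 8

print(round(text_bps))                  # 120 b/s
print(round(RAW_AUDIO_BPS / text_bps))  # ~533x less bandwidth
```

The LPCNet link further down shows a middle ground: a neural vocoder at 1.6 kb/s keeps the actual voice signal rather than regenerating it from text, at roughly 40x below the raw stream.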


Yeah, I brought that up here and got some interesting responses:

> It would be interesting to see how far you could get using deepfakes as a method for video call compression.

> Train a model locally ahead of time and upload it to a server, then whenever you have a call scheduled the model is downloaded in advance by the other participants.

> Now, instead of having to send video data, you only have to send a representation of the facial movements so that the recipients can render it on their end. When the tech is a little further along, it should be possible to get good quality video using only a fraction of the bandwidth.

https://news.ycombinator.com/item?id=22907718

Specifically for voice, this was mentioned:

> A Real-Time Wideband Neural Vocoder at 1.6 Kb/S Using LPCNet

https://news.ycombinator.com/item?id=19520194


In the future, our phone contacts will store name, address, phone number, and voice model. (The messed-up part will be that the user doesn't necessarily send their model; the model could be crafted from previous calls.)

You could probably also transmit a low res grayscale version of the video to “map” any local reproduction to. Kinda like how a low resolution image could be reasonably reproduced if an artist knew who the subject was.

