
There are a few ideas about subvocal recognition kicking about that might change that. If your voice assistant is in an earpiece that can (somehow) read what your vocal muscles are doing without you actually needing to make a sound, voice input becomes practical enough that it could be the default. There's a lot of ocean between here and there, though, particularly in the actual sensor tech. It has to get to the point where you can wear it in public, on a highly visible part of the body, without feeling like a loon, and that's not trivial.


That maybe makes it vaguely less anti-social, but it's still imprecise and frankly invasive. Typing, by comparison, is great. You can visualize your thoughts as you compose something and make edits in a buffer before submitting. The input serves as a proxy for your working memory. Screenless voice interfaces are strictly worse.


That assumes you have to be right the first time, that you only get one chance to submit your buffer. We don't make that assumption when we talk to other humans.



