Hacker News | antipaul's comments

I wouldn't be surprised if this was done by one of those AI companies themselves!

Remember Facebook x Onavo?

"Facebook used a Virtual Private Network (VPN) application it acquired, called Onavo Protect, as a surveillance tool to monitor user activity on competing apps and websites"


Biotech industrial complex

fMRI is a cool, expensive tech, like so many others in genetics and other diagnostics. These technologies create good jobs ("doing well by doing good").

But as other comments point out, and practitioners know, their usefulness for patients is more dubious.


The beautiful setting from which attention intuitively arises is a fascinating result… truly nothing in ML is ever "new".


If there was one application where deep learning was supposed to succeed, it was radiology

"people should stop training radiologists now" – Hinton, 2016


Per the article, it did succeed. AI radiology tools are being widely adopted, and they work very well.

But they are being used by radiologists, not instead of radiologists. And because scans can be interpreted more quickly and cheaply, more scans are ordered, which has increased the demand for radiologists overall.



Works on Siri! You can even set – get this – multiple timers ;)


I seem to get about 90%; wife gets closer to 80%. I just want it to be better.


Activate Siri and then say the amount of time you'd like on the timer. For me, that's holding the side button then only saying, "20 minutes."

The one caveat is that if the timer you want is two and a half hours or longer, Siri instead replies by asking what you would like to convert.


That may be it - we’re always asking a HomePod to do it - “hey siri set timer for five minutes”


My Siri-initiated timers are always done with my phone, probably 50 or more each week (work stuff). The only time I get a failure is when I release the side button too quickly. I've made certain the spoken feedback is enabled to reduce the risk of me making that mistake. (Settings > Siri > Siri Responses > Prefer Spoken Responses)

As for, "What time is it?"... Try activating Siri and only saying, "Time."


I suspect that's the main difference; if you're trying to use hands-free voice activation via "hey Siri" you get a much different experience than if you can touch the watch/phone to trigger Siri first.

And thinking back over it, more than half the failures are complete, i.e., it likely never activated at all. Very few are "it set a timer, but for the wrong time".


Good chance that's what explains our different Siri experiences. The few times I've triggered it by voice were always with AirPods, and I always waited for Siri's reply (been a while; is it "Uh-huh"?) after saying, "Hey, Siri." But my experience activating Siri by speech is too minimal to support any broader conclusion.


Are there indicators in Cloudflare's culture or history to suggest that Replicate's strengths (docs, API, design) will remain in the long term?


I don't think much will change with Replicate's API because of Cloudflare. That's specifically mentioned in Replicate's blog post, and breaking it wouldn't be in their best interest.

What's important to know (I think): Cloudflare recently published a blog post about Omni, their platform for AI inference. I think they've performance-tuned it better than other providers, so their cost per inference drops a lot (https://blog.cloudflare.com/how-cloudflare-runs-more-ai-mode...). Since the performance is OK, they now want to expand usage and their model catalog.

Replicate is a perfect fit. Model catalog, infrastructure for larger models, more specialised tools for fine-tuning, ...

E.g., for inference, Replicate is basically just a Workers AI endpoint and easy to maintain. Fine-tuning could probably be something similar.

But then again, that's my 2 cents. It was already mentioned that Replicate will stay as a distinct brand.


Notably: <Inspired by the concept of “a piece of cloth”>


"To permanently turn off the feedback survey for yourself, set `CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY=1`"

Per recent comment from Anthropic at https://github.com/anthropics/claude-code/issues/8036#issuec...


There is already ~/.claude/settings.json

What is going on over there at Anthropic?


Boris from the Claude Code team here.

You can set CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY either in your env, or in your settings.json. Either one works.
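For reference, a minimal sketch of the env-var route (the variable name is from the linked GitHub comment; the `"env"` key shown for settings.json is my understanding of the config format, so verify against the Claude Code docs):

```shell
# Disable the Claude Code feedback survey for this shell session.
# Put this in your shell profile (e.g. ~/.zshrc) to make it permanent.
export CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY=1

# Equivalent settings.json approach (assumed "env" key; double-check):
# ~/.claude/settings.json
# {
#   "env": { "CLAUDE_CODE_DISABLE_FEEDBACK_SURVEY": "1" }
# }
```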


Dogfooding


I’m not sure. It could be a way to save a ton of money. Look at the investments non-Apple tech companies are making in data centers & compute.

Maybe paying Google a billion a year is still a lot cheaper?

Apple famously tries to focus on only a few things.

Still, they will continue working on their own LLM and plug it in when ready.

Edit: compare to another comment about Wang-units of currency


Well, they would still be running the Google models in Apple DCs. I doubt this is a very cost-efficient deal for them.


“GitHub processed the takedown notice against the entire network of 8,270 repositories, inclusive of the parent repository”

