
"It only works well on inputs that are similar to what appeared in its training set" seems like a strange criticism to make about an ML project, no?


There are people who believe this is real AI, not just aggregation and interpolation. They genuinely believe the software understands code in a general sense.


I don't think many people here think this is true AI.


Who cares, really?

There are people who believe in god, too.



