What exactly is cached, though? Each token-inference step effectively takes in the full context plus all previously generated tokens, right? Are they somehow caching the previously inferred state and reusing it more efficiently than if they just cached the context and ran everything back through inference again?
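For concreteness, here's a toy numpy sketch of the thing I'm asking about (this is my own assumption of single-head attention with fixed random weights, not any real serving stack's API): the cached state would be the per-token key/value projections, so each new step only projects the newest token instead of re-running the whole prefix.

```python
# Toy sketch of a KV cache: single attention head, numpy only.
# Names and shapes here are hypothetical, chosen just to illustrate the idea.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # assumed embedding / head dimension
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

def attend(q, K, V):
    """Attention for a single query against all keys/values seen so far."""
    scores = (K @ q) / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

def step_no_cache(embeddings):
    """Recompute K and V for the entire sequence on every step."""
    K = embeddings @ W_k
    V = embeddings @ W_v
    q = embeddings[-1] @ W_q           # only the newest token's query matters
    return attend(q, K, V)

def step_with_cache(new_embedding, k_cache, v_cache):
    """Project only the newest token; reuse cached K/V rows for older tokens."""
    k_cache.append(new_embedding @ W_k)
    v_cache.append(new_embedding @ W_v)
    q = new_embedding @ W_q
    return attend(q, np.stack(k_cache), np.stack(v_cache))

# Simulate generating 5 tokens both ways and check the outputs match.
tokens = rng.normal(size=(5, d))       # stand-ins for token embeddings
k_cache, v_cache = [], []
for t in range(1, 6):
    full = step_no_cache(tokens[:t])
    cached = step_with_cache(tokens[t - 1], k_cache, v_cache)
    assert np.allclose(full, cached)
print("cached and uncached attention outputs match")
```

The outputs are identical either way; the cached version just avoids redoing the K/V projections (and, in a real multi-layer model, all the per-layer work behind them) for tokens that were already processed.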

