In the 10+ years that I've maintained a reasonably popular cross-browser extension, I've been collecting the monetization offers that come my way. They simply don't stop coming: https://github.com/extesy/hoverzoom/discussions/670
It's worth reminding people that Firefox extensions that are part of Mozilla's "recommended extensions" program have been manually vetted.
> Firefox is committed to helping protect you against third-party software that may inadvertently compromise your data – or worse – breach your privacy with malicious intent. Before an extension receives Recommended status, it undergoes rigorous technical review by staff security experts.
Synthetic data. Like AlphaZero playing randomized games against itself, a future coding LLM could come up with new projects, feature requests for existing projects, or common maintenance tasks for itself to execute. Its value function might include ease of maintainability, and it could run e2e project simulations to make sure the resulting code actually works.
AlphaZero playing games against itself was useful because there's an objective measure of success in a game of Go: at the end of the game, did I have more points than my opponent? So you can "reward" the moves that do well, and "punish" the moves that do poorly. And that objective measure of success can be programmed into the self-training algorithm, so that it doesn't need human input in order to tell (correctly!) whether its model is improving or getting worse. Which means you can let it run in a self-feedback loop for long enough and it will get very good at winning.
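To make that concrete, here's a minimal sketch (my own illustration, not AlphaZero's actual reward code) of how trivially a finished game can be scored:

    # Hypothetical sketch: the terminal reward in self-play Go.
    # The final position alone decides the outcome; no human judgment needed.
    def game_reward(my_score: float, opponent_score: float) -> int:
        # +1 for a win, -1 for a loss, 0 for a draw.
        if my_score > opponent_score:
            return 1
        if my_score < opponent_score:
            return -1
        return 0

That function is cheap, deterministic, and always right, which is exactly what lets the self-feedback loop run unattended.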
What's the objective measure of success that can be programmed into the LLM to self-train without human input? (Narrowing our focus to only code for this question). Is it code that runs? Code that runs without bugs? Code without security holes? And most importantly, how can you write an automated system to verify that? I don't buy that E2E project simulations would work: it can simulate the results, but what results is it looking for? How will it decide? It's the evaluation, not the simulation, that's the inescapably hard part.
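To show why, here's a naive sketch of the best automated reward I can imagine for code (the checks and names are mine, purely for illustration, not anything a real lab has published):

    # Hypothetical sketch: a naive self-training reward for generated code.
    import subprocess

    def code_reward(project_dir: str) -> float:
        reward = 0.0
        # "Code that runs": does it even compile?
        compiles = subprocess.run(
            ["python", "-m", "py_compile", "main.py"], cwd=project_dir)
        if compiles.returncode == 0:
            reward += 1.0
        # "Code without bugs": do the tests pass? But if the model wrote
        # the tests, it's grading its own homework.
        tests = subprocess.run(["pytest", "-q"], cwd=project_dir)
        if tests.returncode == 0:
            reward += 1.0
        # "Code without security holes": there is no oracle that returns
        # 1 iff the code is secure, so nothing can go here at all.
        return reward

Unlike the Go reward, every signal here is a proxy: code can compile, pass its own tests, and still be insecure, unmaintainable, or just plain wrong.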
Because there's no good, objective way for the LLM to evaluate the results of its training in the case of code, self-training would not work nearly as well as it did for AlphaZero, which could objectively measure its own success.
You don't need synthetic data: people are posting vibe-coded projects on GitHub every day, and those get added to the next model's training set. I expect that in 4-5 years, humans just won't be able to do anything that isn't in the training set. Anything novel or fun will be locked down to creative agencies and the few holdouts who managed to survive.
That depends on what you mean by "operating". This very website, Hacker News, is not blocked in Russia - does that mean Y Combinator is "operating" there?
Not necessarily. Roblox does not directly receive money from users - nobody sends them a paper check or bank wire from Russia. Technically, they get money from payment providers, who are supposedly compliant with all sanctions. I'm pretty sure any provider that can operate at Roblox's scale is big enough to worry about the risks of non-compliance.
Sanctions compliance isn't just validating that the customer's bank isn't in the sanctioned country. Disbursing money (which Roblox does, as a two-sided marketplace) usually requires actual KYC.
> We pass through the pricing of the underlying providers; there is no markup on inference pricing (however we do charge a fee when purchasing credits).
Thanks, that is helpful, although even that only says they charge a "fee" for purchasing credits and then links to this page[1], which isn't very straightforward.
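For what it's worth, here's the arithmetic as I understand it, with placeholder numbers since the page doesn't state them clearly (the 5% fee and $3/Mtok price below are my assumptions, not their figures):

    # Hypothetical numbers illustrating "pass-through pricing + purchase fee".
    CREDIT_PURCHASE_FEE = 0.05      # assumed 5%; their page doesn't say
    PROVIDER_PRICE_PER_MTOK = 3.00  # provider's raw price, passed through

    def cost_to_buy_credits(credits_usd: float) -> float:
        # The fee is paid once, when you top up your balance...
        return credits_usd * (1 + CREDIT_PURCHASE_FEE)

    def inference_cost(million_tokens: float) -> float:
        # ...while inference itself draws down credits at the raw price.
        return million_tokens * PROVIDER_PRICE_PER_MTOK

So $100 of credits would cost $105 up front, but a million tokens would still burn exactly $3.00 of that balance.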
Bugs will escalate from syntax errors to business logic errors ("one customer was charged twice"). There won't be anything to copy/paste, no AI will be able to fix these errors, and no human will touch this codebase with a ten-foot pole.
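To illustrate the kind of bug I mean (everything below is invented, including the payment API): a retry loop with no idempotency key. It contains no syntax errors, every line "works", and it double-charges customers:

    # Hypothetical sketch of a "one customer was charged twice" bug.
    import requests

    def charge_customer(customer_id: str, amount_cents: int) -> None:
        for attempt in range(3):
            try:
                # BUG: if the request times out *after* the charge succeeded
                # server-side, the retry charges the customer again. The fix
                # (sending the same idempotency key with every attempt) is
                # invisible unless you know the payment API's semantics.
                requests.post(
                    "https://api.example-payments.com/charge",  # made-up API
                    json={"customer": customer_id, "amount": amount_cents},
                    timeout=5,
                ).raise_for_status()
                return
            except requests.RequestException:
                continue  # silently gives up after 3 attempts: a second bug

No linter flags this, and a model trained on plausible-looking code will reproduce it happily.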