
Interesting. So if you just configure Continue with Mistral Medium 3 as the chat model and Codestral as the autocomplete model, you probably have exactly this. This is the setup I already use.
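For reference, a minimal sketch of what that setup might look like in Continue's config.json, assuming the built-in Mistral provider; the exact schema and model IDs vary by Continue version, so treat it as illustrative rather than exact:

    {
      "models": [
        {
          "title": "Mistral Medium",
          "provider": "mistral",
          "model": "mistral-medium-latest",
          "apiKey": "<YOUR_MISTRAL_API_KEY>"
        }
      ],
      "tabAutocompleteModel": {
        "title": "Codestral",
        "provider": "mistral",
        "model": "codestral-latest",
        "apiKey": "<YOUR_MISTRAL_API_KEY>"
      }
    }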


If that were the case, then the posted URL would be pure hype. I think it's more likely that they've developed something more bespoke than that. It's hard to say from the outside, though.


Knowing the enterprise space, my guess is that the only real changes are hardcoding Continue to use only Mistral and tying it into some sort of central enterprise licensing service. Holding back novel models just for enterprise use seems unlikely, as does developing novel agentic capabilities within Continue.

Enterprise deals are primarily about compliance and security. Companies want centralized billing and assurance that their developers only use "sanctioned" AI and other tech.

Who knows, though, with that contact-us sales wall.


The page suggests it's possible to fine-tune models on your code base.


That's very possible. You can already do that easily via the platform API (just feed it your GitHub project), so a light UI around that API would be straightforward.
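A rough sketch of that flow against Mistral's fine-tuning API, using plain HTTP. The endpoint paths, payload shape, JSONL format, repo path, and base-model name below are assumptions from memory, not verified against the current docs:

    # Sketch: turn a local checkout of a project into fine-tuning data,
    # upload it, and start a job. All endpoints/fields below are assumptions.
    import json
    import os
    import pathlib

    import requests

    API = "https://api.mistral.ai/v1"  # assumed base URL
    HEADERS = {"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"}

    # 1. Build a JSONL training file from the repo ("my-project" is a
    #    hypothetical local checkout; the prompt framing is illustrative).
    rows = []
    for path in pathlib.Path("my-project").rglob("*.py"):
        rows.append({"messages": [
            {"role": "user", "content": f"Show me {path.name} from this project."},
            {"role": "assistant", "content": path.read_text(errors="ignore")},
        ]})
    with open("train.jsonl", "w") as f:
        f.write("\n".join(json.dumps(r) for r in rows))

    # 2. Upload the training file (assumed endpoint: POST /v1/files).
    with open("train.jsonl", "rb") as f:
        upload = requests.post(f"{API}/files", headers=HEADERS,
                               files={"file": ("train.jsonl", f)},
                               data={"purpose": "fine-tune"})
    file_id = upload.json()["id"]

    # 3. Start a fine-tuning job (assumed endpoint: POST /v1/fine_tuning/jobs;
    #    "open-mistral-7b" is a placeholder base model).
    job = requests.post(f"{API}/fine_tuning/jobs", headers=HEADERS, json={
        "model": "open-mistral-7b",
        "training_files": [{"file_id": file_id, "weight": 1.0}],
    })
    print(job.json())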


Interesting, why do you use those models? They feel inferior.


Originally as an experiment in using non-US services, as my company is not a US company and the possibility of tariffs on digital services is not at all unrealistic. The exercise was really enlightening. Not to derail the conversation, but the TL;DR was that in some areas it was easy to move off US services, while in others (GitHub) there are almost no alternatives.

I do have access to US models via Kagi to play around with and to use for things Mistral doesn't work for. I've been meaning to try Command A too, but haven't gotten around to it. I will say that the new Mistral Medium model is surprisingly good, though I've only just started using it. Codestral is definitely behind other models.


What feature of GitHub is lacking in competitors?

I wouldn't use an LLM where there's a 50% chance I'll need to reprompt or try another model; I prefer using the best model directly. But yeah, US vs. Europe is a real concern.



