Hacker News | new | past | comments | ask | show | jobs | submit | disiplus's comments

It looks like it's called Prolite.

https://snipboard.io/jmGKfM.jpg



I have GLM and Kimi. Kimi was better in most cases and was my replacement for Claude when I ran out of tokens. Now I'm finding myself using GLM more than Kimi. It's funny that GLM vs. Kimi is like Codex vs. Claude: GLM and Codex are better for backend, while Kimi and Claude are better for frontend.

As Kimi did a huge amount of Claude distillation, that impression seems to be somewhat grounded in the data:

https://www.anthropic.com/news/detecting-and-preventing-dist...


Yeah, it seems they did not align it too much, at least for now. Yesterday it helped me bypass the bot detection on a local marketplace that I wanted to scrape some listings from for my personal alerting system. All the others failed, but GLM 5.1 found a set of parameters and tweaks to keep my browser-in-a-container from being detected.
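For the curious: the exact parameters GLM found aren't given above, but a minimal sketch of the kind of tweaks commonly discussed for containerized headless browsers might look like this. The flag list, init script, and function names are illustrative assumptions, not the parent's actual setup, and the launcher call itself is omitted so the sketch stays self-contained.

```python
# Hypothetical sketch: common anti-bot-detection tweaks for a headless
# Chromium running in a container. These are frequently cited settings,
# not the specific set GLM 5.1 found for the parent's marketplace.

def stealth_launch_args():
    # Chromium flags often associated with automation fingerprints.
    return [
        "--disable-blink-features=AutomationControlled",  # hide the automation hint
        "--no-first-run",
        "--no-default-browser-check",
        "--window-size=1280,800",  # a plausible desktop viewport
    ]

def stealth_init_script():
    # JavaScript injected before page scripts run; navigator.webdriver
    # is the most commonly checked automation signal.
    return (
        "Object.defineProperty(navigator, 'webdriver', "
        "{get: () => undefined});"
    )
```

With a real driver such as Playwright, the args would go to the browser launch call and the script would be registered as an init script; sites with stronger detection inspect many more signals (TLS fingerprint, canvas, timing), so this is only the first layer.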

Model doing what the user wants with high quality is definitely aligned in my book.

It's too far in the direction of the paperclip maximizer for me. It should only hack sites when explicitly directed to, not by default.

This can never go wrong!

I always jump to the Chinese models when I'm trying to do something the US ones chastise me for; they're a little more liberal, especially around copyright.

Basically my experience as well. Sometimes it can get past 100k and still be OK, but mostly it breaks down.

When it works and it's not slow, it can impress. Yesterday, for example, it solved something Kimi K2.5 could not, and Kimi was the best open-source model for me. But it's still slow sometimes. I have Z.ai and Kimi subscriptions for when I run out of tokens on Claude (Max) and Codex (Plus).

I have a feeling it's nearing Opus 4.5 level, if they could fix it going crazy after about 100k tokens.


Why don't you start a new session or use the /compact command when the context gets to 100k tokens?

From my testing it was OK until 145k tokens, the largest context I had before switching to a new session. I think Z.ai officially said it should be good up to 200k tokens.

Using it in Open Code, the context is compacted automatically when it gets too large.
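The auto-compaction idea can be sketched roughly as follows: watch a running token estimate, and once it passes a threshold, collapse older messages into a summary so the session continues with a smaller context. All names and the crude token estimator here are illustrative assumptions, not Open Code's real implementation (a real agent would ask the model itself to write the summary).

```python
# Hypothetical sketch of automatic context compaction for a chat agent.

COMPACT_THRESHOLD = 100_000  # tokens; roughly where quality degrades

def estimate_tokens(messages):
    # Crude heuristic: about 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def compact(messages, keep_last=4):
    """Replace all but the last few messages with a placeholder summary.
    A real agent would generate the summary with the model itself."""
    if len(messages) <= keep_last:
        return messages
    head, tail = messages[:-keep_last], messages[-keep_last:]
    summary = {
        "role": "system",
        "content": f"[Summary of {len(head)} earlier messages]",
    }
    return [summary] + tail

def maybe_compact(messages):
    # Called after every turn; compacts only once the estimate is too big.
    if estimate_tokens(messages) > COMPACT_THRESHOLD:
        return compact(messages)
    return messages
```

The trade-off is the same one the thread describes: compaction keeps the session usable past the point where the model "gets crazy", at the cost of losing detail from the summarized turns.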


The post mentions France, Germany, and the Nordic nations. France, Holland, and the Nordic nations helped in the early stages of the US.


It will also cost OpenAI dearly if they don't communicate clearly, because I, for one, will push internally to switch from OpenAI (we are actually on Azure) to Anthropic. Beyond that, I'll switch my private account as well.


You can deploy Opus and Sonnet on Azure.


This will not cost OpenAI anything.


Thanks for being the voice of cynical inaction.


I have them all. They're not "just as good." Whoever tells you they are has looked only at the benchmarks, not real use. They all fall short at some point.

Kimi K2.5 is the best of them, but it's still not at the level of what Anthropic released with Opus 4.5.


We’ll have to give it 3 weeks.


I think in the West we assume everything is blocked. But if you book an eSIM, for example, you already get direct access to Western services when you visit, because the traffic is routed through a server elsewhere. Hong Kong is totally different: people there basically use WhatsApp and Google Maps, and everything worked when I was there.


But yes, the parent is right: HF is more or less inaccessible, and Modelscope is frequently cited as the mirror to use (although many Chinese labs seem to treat HF as the mirror and Modelscope as the "real" origin).

