Hacker News | ammar_x's comments

My "trick" was to divide things into batches (which can be big with LLMs that have larger context windows) and classify the items in each batch, then take the resulting categories from each batch and feed them to an LLM, which groups semantically similar categories and picks a representative category for each group. The representative category can be chosen from the group or generated by the LLM. This is an over-simplification of the process, but that's the gist of it.
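The process above can be sketched roughly like this. This is a minimal illustration, not the commenter's actual code: `classify_batch` and `merge_categories` are hypothetical stand-ins for the two LLM calls (one labels a batch of items, the other groups semantically similar category names and returns a mapping from raw category to representative category).

```python
from itertools import islice


def batched(items, size):
    """Yield successive batches of at most `size` items."""
    it = iter(items)
    while batch := list(islice(it, size)):
        yield batch


def classify_in_batches(items, batch_size, classify_batch, merge_categories):
    """Two-stage classification as described above.

    Stage 1: classify each batch independently (one LLM call per batch).
    Stage 2: merge the raw categories from all batches into
    representative categories (a second LLM call over category names,
    which is cheap because there are far fewer categories than items).
    """
    labels = []   # raw category per item, in input order
    seen = set()  # all raw categories produced so far
    for batch in batched(items, batch_size):
        batch_labels = classify_batch(batch)
        labels.extend(batch_labels)
        seen.update(batch_labels)
    canonical = merge_categories(sorted(seen))  # {raw: representative}
    return [canonical[c] for c in labels]
```

The point of the second stage is that merging operates only on category names, so it fits in one prompt even when the item list is far too large to classify in a single call.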


Language support isn't mentioned in the repo, but according to the paper it offers extensive multilingual support (nearly 100 languages), which is good. I still need to test it to see how it compares to Gemini and Mistral OCR.


I suspect the number of languages it can handle with reasonable accuracy is actually much smaller, probably <15.


Claude Skills seem to be the option that offers the most flexibility for adding capabilities with the least complexity. Better than MCP, in my opinion. I hope it becomes a standard and gets adopted by OpenAI and the rest of the labs.


Good question! I selected the edition with the smallest Goodreads ID¹ that has both a publication date and a cover photo available. If no edition has a publication date or a cover photo, we fall back to the one with the smallest ID overall.

And you're right, in a few cases, this resulted in getting less widely read editions for some books.

1: Assuming smaller ID means earlier addition to Goodreads' database.
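The selection rule above can be sketched as follows. This is an illustrative snippet, not the site's actual code; the field names (`id`, `pub_date`, `cover_url`) are assumptions.

```python
def pick_edition(editions):
    """Pick a representative edition for a book.

    Prefer the smallest-ID edition that has both a publication date
    and a cover photo; if none qualifies, fall back to the smallest
    ID overall. Smaller IDs are assumed to mean earlier additions to
    the Goodreads database.
    """
    complete = [e for e in editions if e.get("pub_date") and e.get("cover_url")]
    pool = complete or editions  # fall back when no edition is complete
    return min(pool, key=lambda e: e["id"])
```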


Hi Jeremy, congratulations on the launch.

How does this compare to Dash?

I've used Dash for many applications, so I'm wondering what the advantages of FastHTML are.


I've been looking for a website like this that shows the weather for the whole year. Thanks for sharing.


I have Raycast extensions for GPT and Claude models. Whenever I have a question, the most powerful LLMs in the world are two keystrokes away.

This is easier than switching to the browser, opening the ChatGPT tab, and creating a new chat.

I found myself using LLMs more and getting more out of them because of this frictionless interaction. They've become more of actual "helpful assistants."


The article compares GPT-4o to Sonnet from Anthropic. I'm wondering how Opus would perform on this test.


How does it compare to Plotly Dash?


Can you explain more? Like which tool do you use for this wiki page? Or is it an internal tool? And do you use it to write meeting notes and then discuss on the same page?


If it's a discussion "too big" for Slack/Teams, we create a Confluence wiki page to go over the details, discuss it using the Talk add-on (which lets you make inline comments), and then hold a meeting to walk through it.

