Hacker News | pdyc's comments

Too many things

- Tool to auto create dashboards from csv/json files or rest api ( https://EasyAnalytica.com )

- Tool to preview, annotate and share html snippets ( https://easyanalytica.com/tools/html-playground/ )

- Creating an agent to edit files with a smaller model (<1B); not yet released.

- Prompt assembler to create prompts/templates/context and easily share between AIs; to be released this week.


Wow, that looks beautiful. I want to migrate my Hugo site to Astro, because Astro is easier to manage and more friendly to local LLMs than Hugo, but I keep postponing it.

Interesting, I wanted something like this, but I am on Linux, so I modified the whisper example to run on the CLI. It's quite basic: Ctrl+Alt+S starts/stops recording, and when you stop it copies the text to the clipboard, that's it. Now it's my daily driver: https://github.com/newbeelearn/whisper.cpp

Here is an example of a project I worked on using Codex; it took 10 iterations just to get GitHub Actions right: https://github.com/newbeelearn/whisper.cpp . You can see the commits made by Codex. The project was quite simple: modify whisper to add support for transcribing voice with start/stop keys and copy the transcription to the clipboard when stopped. That's it. Codex performs poorly compared to CC, which gets it right in one shot.

High on my own supply :-) I use https://easyanalytica.com/tools/html-playground frequently, as it lets me open HTML in a new window and use dev tools like on any other page.

Thanks, I tested it; it failed the strawberry test. Qwen 3.5 0.8B, of similar size, passes it and is far more usable.


Does asking it to think step by step, or character by character, improve the answer? It might be tokenization plus unawareness of its own tokenization shortcomings.


No, it did not; with character by character it concluded 2 :-)
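For context, the "strawberry test" asks a model how many times the letter "r" appears in "strawberry". The correct answer is trivial to verify in code, which is exactly the point of the tokenization critique above: counting characters is easy for a program and hard for a model that only sees tokens.

```python
# Count the letter 'r' in "strawberry" -- trivial for code,
# hard for a model that sees tokens rather than characters.
word = "strawberry"
count = word.count("r")
print(count)  # prints 3
```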


I hope you are kidding; how is that a test of any capabilities? It's a miracle that any model can learn strawberry, because it cannot see the actual characters, and ALSO it's likely misspelled a lot in the corpus. I've been playing with this model and I'm pleasantly surprised; it certainly knows a lot, quite a lot for 1.1G.

Interesting. Qwen 3.5 0.8B failed the test for me.


This is a great project. Does it support WASM? I want to use it in the browser with SQLite WASM.


Not today, but the architecture isn't fundamentally incompatible. The page grouping and seekable compression would translate well to browser fetch + range GETs. It would need a new storage backend targeting OPFS/fetch instead of S3/disk. I'm happy to discuss more if you'd like to open a GitHub issue - abstracting the storage API seems like a decent idea in itself.
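The "fetch + range GETs" idea above can be sketched with plain HTTP Range requests: each fixed-size page maps to an inclusive byte range that the client requests on demand. This is a generic illustration, not turbolite's actual API; the page size and usage are assumptions.

```python
def range_header(page_index: int, page_size: int = 4096) -> str:
    """HTTP Range header value for one fixed-size page (bounds are inclusive)."""
    start = page_index * page_size
    end = start + page_size - 1
    return f"bytes={start}-{end}"

# A client would send this header and expect a 206 Partial Content
# response containing exactly that page of the remote file.
print(range_header(0))        # prints bytes=0-4095
print(range_header(2, 1024))  # prints bytes=2048-3071
```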


Also I'm curious how you would handle credentials - would you be proxying through your backend? To me turbolite is definitely a backend tool.


Correct, it will be handled via a backend which will just proxy requests to R2 (it's S3-compatible). I already have something working with vhttpfs, but you have implemented some great ideas, like optimizing request count instead of byte efficiency. I wanted something like that in vhttpfs, but it would become another project for me. I think it can be a great frontend tool as well, since you have decoupled storage from the DB engine.
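The proxy-to-R2 approach above can be sketched as a small request-rewriting step: the backend maps the incoming path to the bucket URL and forwards the client's Range header, so credentials stay server-side. The bucket URL and helper name here are hypothetical, purely for illustration.

```python
from typing import Optional

R2_BUCKET_URL = "https://example.r2.cloudflarestorage.com/mybucket"  # hypothetical

def build_r2_request(path: str, client_range: Optional[str]):
    """Rewrite an incoming request into the R2 (S3-compatible) URL plus headers.

    Auth/signing would be added here on the backend, so the browser
    never sees credentials; byte ranges are forwarded untouched.
    """
    url = f"{R2_BUCKET_URL}/{path.lstrip('/')}"
    headers = {}
    if client_range:
        headers["Range"] = client_range
    return url, headers
```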


I wonder, what is the CAPTCHA equivalent for AI bots? Ask about taboo topics to rule out commercial models, and ask specific reasoning questions that trip up AI, like walking vs. driving to the car wash? Or your own set?


I am looking for something like this, but for cheap used phones I can give to kids without internet, with all the books, offline maps, Wikipedia, and a basic LLM. They would have a complete environment to explore depending on their curiosity. Is there something like this? Otherwise I am thinking of creating my own collection and open-sourcing it.


Kiwix, maybe Kolibri? If you're up for tinkering, maybe something like Internet-in-a-Box (can be done through Tmux + proot-distro):

https://kiwix.org/en/

https://learningequality.org/kolibri/

https://internet-in-a-box.org/


Impressive. I wish someone would take a stab at using this technique on mobile GPUs; even if it does not use storage, it would still be a win. I am running llama.cpp on an Adreno 830 with OpenCL and getting a pathetic 2-3 t/s for output tokens.

