Hacker News | dchuk's comments

I’ve been iterating nights and weekends on a Hacker News–style website that sources all of its content from engineering blogs (both personal and company blogs). I have about 600 of the ~3k RSS feeds I’ve collected over time loaded up, and I’m tweaking things as I go before dropping in the whole list: https://engineered.at

While the main app is closed source, the Rails engine that handles all the RSS feeds is open source here: https://github.com/dchuk/source_monitor
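As a toy illustration of the kind of RSS parsing such an engine has to do (this is a hypothetical sketch using only the Python standard library, not source_monitor's actual Ruby API):

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document standing in for a real engineering-blog feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Engineering Blog</title>
    <item>
      <title>Post One</title>
      <link>https://example.com/one</link>
    </item>
    <item>
      <title>Post Two</title>
      <link>https://example.com/two</link>
    </item>
  </channel>
</rss>"""

def parse_items(feed_xml: str) -> list[dict]:
    """Extract title/link pairs from an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [
        {"title": item.findtext("title"), "link": item.findtext("link")}
        for item in root.iter("item")
    ]

print(parse_items(SAMPLE_FEED))
```

A real poller would of course also handle Atom feeds, HTTP conditional requests (ETag/Last-Modified), and deduplication across fetches.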

I have another version of source_monitor getting published soon with some nice enhancements.


If you’re able to articulate the issues this clearly, it would take like an hour to “vibe code” away all of these issues. That’s the actual superpower we all have now. If you know what good software looks like, you can rough something out so fast, then iterate and clean it up equally fast, and produce something great an order of magnitude faster than just a few months ago.

A few times a week I’m finding open source projects that either have a bunch of old issues and pull requests, or unfinished todos/roadmaps, and just blasting through all of that and leaving a PR for the maintainer while I use the fork. All tested, all clean, best-practice-style code.

Don’t complain about the outputs of these tools, use the tools to produce good outputs.


How do we learn what a good output actually is?


Care to actually show us any of these PRs?


Curious if the 1M context window will be available by default in Claude Code. If so, that's a pretty big deal: "Sonnet 4.6’s 1M token context window is enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request. More importantly, Sonnet 4.6 reasons effectively across all that context."


Above 200k tokens of context they charge a premium. I think it's $10/M input tokens.


Interesting. Is it because they can, or is it really more expensive for them to process bigger contexts?


Attention is, at its core, quadratic wrt context length. So I'd believe that to be the case, yeah.


I've read that compute costs for LLMs go up O(n^2) with context window size. But I think it is also a combination of limited compute availability, users' preference for Anthropic models, and Anthropic planning to IPO.
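A back-of-envelope sketch of why quadratic attention makes long contexts disproportionately expensive, comparing pairwise token interactions at the 200k threshold vs the full 1M window (illustrative only; real serving costs also involve KV-cache memory, batching, and hardware utilization):

```python
def attention_pairs(n_tokens: int) -> int:
    """Full self-attention computes an interaction for every pair of
    tokens, so the work scales as n^2."""
    return n_tokens * n_tokens

base = attention_pairs(200_000)    # at the standard-pricing threshold
long = attention_pairs(1_000_000)  # at the full 1M window

# 5x the tokens means 25x the attention work.
print(long / base)
```

So a flat per-token price understates the marginal cost of the last tokens in a very long request, which is one plausible reason for tiered pricing.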


Looks like a good architecture. I feel like this needs a complementary mobile app instead of relying on a chat system like Telegram, so you can interact in plain text but also do more advanced stuff: see the backlog of tasks, see the log of completed work, have more robust interactions that include stateful iteration on long-form stuff, etc.

Very cool build though, will try it out


This works very well for automated testing from Claude Code: https://github.com/pproenca/agent-tui


Pretty cool (and Monodraw, linked in the comments, looks great too; I’m buying it today).

I’ve actually been tinkering with a web app (as a test bed for various spec-driven dev frameworks with Claude Code): a wireframing tool for TUI apps. It's conceptually similar to Figma, almost (infinite canvas and all that jazz), but it has premade components for the Ink TUI library (the idea would be to support a few popular TUI frameworks eventually), and you can just drag and drop to design TUI interfaces, then download the skeleton code the app generates for the whole frame.

I don’t know how far I’m going to take it, but it works so far. A picture is worth a thousand words; a picture of text characters in a UI layout has to be worth something, right?

I’ll probably open source it eventually, I doubt there’s much of a commercial market opportunity for it
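As a toy version of the "download the skeleton" idea: turning a tiny declarative frame spec into a box-drawing skeleton. This is my own hypothetical sketch of the concept, not the actual app's format or output:

```python
def render_frame(title: str, width: int, body_lines: list[str]) -> str:
    """Render a titled box-drawing frame around some body text,
    padding or truncating each body line to the frame width."""
    inner = width - 2  # space between the left and right border chars
    top = "┌" + title.center(inner, "─") + "┐"
    rows = ["│" + line.ljust(inner)[:inner] + "│" for line in body_lines]
    bottom = "└" + "─" * inner + "┘"
    return "\n".join([top] + rows + [bottom])

print(render_frame(" Tasks ", 24, ["[ ] write spec", "[x] drag-and-drop"]))
```

A real tool would emit framework-specific skeleton code (Ink components, etc.) instead of raw box-drawing characters, but the wireframe-to-text mapping is the same idea.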


Love this idea. Would be cool to have a WYSIWYG for https://ratatui.rs


Yeah, my thought was to eventually cover Ink, Ratatui, Charm (Bubble Tea / Lip Gloss), and OpenTUI.

It’s pretty simple to do this now with the frameworks you can use to spec out work and execute on it with Claude Code. I’ll keep chipping away at it and launch it at some point.


This is generally true only when they go to market with new-to-them physical form factors. They aren’t generally regarded as the best in terms of software innovation (though I think most agree they make very beautiful software).


*we’re

Sorry, had to do it for the irony


just proves the point, that line was autocorrected by bitcoin, the next line without grammatical errors was autocorrected by Claude.

on edit: changed ChatGPT to Claude, this post was written by me. This is the saddest moment.


Still better than bitcoin.


I prefer 1 hour/1 day/etc., but yes, this is the only method I’ve found to work. Be very clear about what result you’re trying to produce, spec out the idea in detail, break the spec down into logical steps, and use orders of magnitude to size each step. There’s your estimate. If you can’t break it down enough to get each step into the 1 day/1 week range, you don’t actually have a plan and can’t produce a realistic estimate.


Given they heavily used LLMs for this optimization, it makes you wonder why they didn’t use them to port the C library to Rust entirely. I think the volume of library ports to more languages/the most performant languages is going to explode, especially given it’s a relatively deterministic effort as long as you have good tests, API contracts, etc.


The underlying C library interacts directly with the Postgres query parser (and therefore the Postgres source). So unless you rewrite Postgres in Rust, you wouldn't be able to do that.


Well then why didn’t they just get the LLM to rewrite all of Postgres too /s

I agree that LLMs will make clients/interfaces in every language combination much more common, but I wonder what impact it’ll have on these big software projects if more people stop learning C.

