I probably could have elaborated more on this in the blog post, but you can really distill a lot of Honeycomb's success as a business down to a few things:

- How easily can you query your data when you're interested in something

- How easily can you get other people on your team to use the product too

- How quickly can you narrow down a problem (e.g., during an outage) to something you can fix

- How relevant is your alerting (i.e., SLOs) to the success or failure of something business-critical

Our bet here is that the first two could potentially be improved by using LLMs, since we hypothesized (and confirmed in some new user interviews) that there's an "expressivity gap" in our product. A lot of people who aren't already observability experts, but do have some vested interest in observability, often know what they want to look for but get confused by a UX that's tailored for people who are more familiar with these kinds of tools.
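To make that concrete, the shape of the feature is roughly "natural-language question in, structured query out." Here's a minimal sketch of that kind of translation layer; the schema fields, prompt, and model choice are illustrative, not our actual implementation:

    # Sketch: translate a natural-language question into a structured
    # query spec via an LLM. Field names and prompt are hypothetical.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    SYSTEM_PROMPT = (
        "You translate questions about telemetry data into a JSON query "
        "with keys: calculations, filters, group_by, time_range. "
        "Respond with JSON only."
    )

    def question_to_query(question: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return json.loads(resp.choices[0].message.content)

    # e.g. question_to_query("which endpoints got slower in the last hour?")
    # might yield {"calculations": [{"op": "P95", "column": "duration_ms"}], ...}

The point is that the user never has to learn the query UI to get a useful first query; they can refine from there.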

It's only been 3 weeks, so it's too early to tell, but we're seeing some signs that the needle is moving a bit on some key metrics. We're not betting the farm on this stuff just yet, and it's really cool that there's technology that lets us experiment in this way without having to hire a whole ML engineering team.



Since you're here, I want to say this is one of the most useful posts I've seen about pragmatic development on top of LLMs!

And agreed re: development effort. Compared to earlier AI hype cycles, it's important for folks to understand that the results they're seeing come at a fraction of the experimental budget.



