zalzal's comments | Hacker News

You probably know this, but just want to say, founder to founder: don't listen to this argument at all.

People are so fond of saying "just wait and the new model will do this." And very smart people I know say it (especially when they work for OpenAI or Anthropic!).

It might be partly true (of course it's situational). But it's a glib and irrelevant thing to say. Model capabilities do not advance like some continuous exponential across all domains and skills. Not even close.

Product design is exploring solutions to human problems in a way that you can bundle and sell. Novel solutions to human problems tend to come from humans, applying effort over time (with the help of models, of course) to understand the problem and separate out what's essential and what's irrelevant to the solution.

(A related comment on the adjacent thread.)


I love what you wrote in the other thread too:

> "The hard part of being an engineer is not writing JavaScript. It is building a solution that addresses the essential complexity of a problem without adding accidental complexity."

I have been oversimplifying it as "LLMs cannot read your mind."


100% agree. There is a bigger point too: People assume LLM capabilities are like FLOPs or something, as if they are a single number.

In reality, building products is an exploration of a complex state space of _human_ needs and possible solutions. This complexity doesn't go away. The hard part of being an engineer is not writing JavaScript. It is building a solution that addresses the essential complexity of a problem without adding accidental complexity.

The reason this is relevant is that it's just the same for LLMs! They are tripped up, just like human engineers, into adding accidental complexity when they don't understand the problem well enough, and then they don't solve the real problem. So saying "just wait, LLMs will do that in the future" is not much different from saying "just wait, some smarter human engineer might come along and solve that problem better than you." It's possibly true, possibly false. And certainly not helpful.

If you work on a problem over time, sometimes you'll do much better than a smarter person who has the wrong tools or doesn't understand the problem. And that's no different for LLMs.


I'm curious: if you have that philosophy (which makes a lot of sense), you must have considered building a sort of more abstract (but extensible) UI toolkit language and library, so you could code in that and then compile it down to React? Or have you found the benefit of large LLMs already having detailed React training is just too high?


> have you found the benefit of large LLMs already having detailed React training is just too high?

This.

We've tried a lot; we've been around since Oct 2023. We first tried fine-tuning, but it's very hard to teach an LLM something new.

At one point, our product lived in a Figma plugin (https://x.com/Teddarific/status/1729153723728011618). To do this, we had the LLM output JSON, and then converted that to Figma nodes. This is sort of what you're suggesting. But the big issue was that it would hallucinate many things and was really only good at the examples we fed it.
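
For flavor, a stripped-down version of that JSON-to-Figma-node conversion might look something like the sketch below. The element schema and the font choice here are hypothetical simplifications, not our actual format (and real code would validate the parsed JSON first):

    // Hypothetical, simplified schema the LLM is asked to emit.
    interface UIElement {
      type: "frame" | "text" | "rect";
      name?: string;
      x: number;
      y: number;
      width: number;
      height: number;
      text?: string;                              // "text" only
      fill?: { r: number; g: number; b: number }; // channels in 0..1
      children?: UIElement[];                     // "frame" only
    }

    // Convert one JSON element into a Figma node.
    async function toNode(
      el: UIElement
    ): Promise<FrameNode | TextNode | RectangleNode> {
      let node: FrameNode | TextNode | RectangleNode;
      switch (el.type) {
        case "frame": {
          const frame = figma.createFrame();
          for (const child of el.children ?? []) {
            frame.appendChild(await toNode(child));
          }
          node = frame;
          break;
        }
        case "text": {
          const text = figma.createText();
          // The plugin API requires loading a font before setting characters.
          await figma.loadFontAsync({ family: "Inter", style: "Regular" });
          text.characters = el.text ?? "";
          node = text;
          break;
        }
        case "rect":
          node = figma.createRectangle();
          break;
        default:
          // Hallucinated node types from the LLM land here at runtime.
          throw new Error("Unknown element: " + JSON.stringify(el));
      }
      node.name = el.name ?? el.type;
      node.x = el.x;
      node.y = el.y;
      node.resize(el.width, el.height);
      if (el.fill) {
        node.fills = [{ type: "SOLID", color: el.fill }];
      }
      return node;
    }

The default branch is where the hallucination problem bites: the model invents node types and properties that aren't in the schema, and you either fail loudly or silently drop them.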


It doesn't deal with images at all, but depending on what you're doing, it might be useful for cleaning up the OCR text, e.g. asking LLMs to fix errors and then filtering the diffs.
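
To sketch what I mean by filtering the diffs (a toy version; the line-level granularity and the 20% threshold are arbitrary illustrative choices): ask the LLM for a corrected copy of the text, then only accept edits that stay close to the original.

    // Toy sketch: accept LLM corrections to OCR text only when the
    // edit distance stays small, rejecting lines the model rewrote.

    function levenshtein(a: string, b: string): number {
      // Single-row dynamic-programming edit distance.
      const dp = Array.from({ length: a.length + 1 }, (_, i) => i);
      for (let j = 1; j <= b.length; j++) {
        let prev = dp[0];
        dp[0] = j;
        for (let i = 1; i <= a.length; i++) {
          const tmp = dp[i];
          dp[i] = Math.min(
            dp[i] + 1,                             // insert b[j-1]
            dp[i - 1] + 1,                         // delete a[i-1]
            prev + (a[i - 1] === b[j - 1] ? 0 : 1) // substitute
          );
          prev = tmp;
        }
      }
      return dp[a.length];
    }

    // Keep an LLM-edited line only if the edit is small relative to
    // the line's length; otherwise fall back to the raw OCR line.
    function filterCorrections(
      ocrLines: string[],
      llmLines: string[],
      maxRatio = 0.2
    ): string[] {
      return ocrLines.map((line, i) => {
        const fixed = llmLines[i] ?? line;
        const dist = levenshtein(line, fixed);
        return dist <= maxRatio * Math.max(line.length, 1) ? fixed : line;
      });
    }

This rejects the classic failure mode where the model "fixes" the text by paraphrasing it wholesale.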


I'm curious if it's useful! How do others try to solve use cases like this, where you're mapping data between docs or strictly controlling LLM edits?


Hey, appreciate the note. You can always see the full table of contents on the about page for the book: https://www.holloway.com/b/making-things-think

And fair point about the login-wall obscuring the TOC. We'll look for a way to avoid that frustration.


There will be shortly. The Holloway format has more features (comments, search, infographics, etc.) than an epub, so we're launching on the web first (today). If you purchase now, it includes instant online access and future updates, including the upcoming epub and other download formats.


Josh with the publisher (Holloway) here. Yes, indeed, that's a key feature of our format. In particular, we expect to see updates to the chapter on recent/emerging developments and companies/teams.

Readers can also make marginal comments. Do feel free to use these to suggest corrections or additions for the future!


Josh here (one of the authors). Do also feel free to read the latest version of this at https://www.holloway.com/g/equity-compensation

It's the same source as what you can find on GitHub, but at Holloway we're working to make long-form docs like this easier to read on the web. PRs, or feedback here on the Holloway reader, are welcome. :)


(Josh, co-founder of Holloway here.)

There are a couple of sides to this. No company should or will tell every financial detail to every candidate. On the other hand, if a company has given you an offer and really wants to hire you, yet is very evasive about the info you need to evaluate that offer, it can be a warning sign; it's likely indicative of the level of transparency you'll get as an employee, too. The 409A valuation is material to understanding your stock options, so it's very fair to ask about before signing.
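
To make that concrete with purely illustrative numbers (not tax advice; treatment varies):

    // Why the 409A FMV matters when weighing an offer (made-up numbers).
    const vestedOptions = 10_000;
    const strikePrice = 2.0; // per share; typically set at the 409A FMV at grant
    const currentFmv = 10.0; // per share; the latest 409A valuation of common

    const exerciseCost = strikePrice * vestedOptions;          // $20,000 to exercise
    const spread = (currentFmv - strikePrice) * vestedOptions; // $80,000 paper gain
    // The spread is generally what gets taxed at exercise
    // (AMT income for ISOs, ordinary income for NSOs in the US).

Without the 409A number, you can't do this arithmetic on your own offer.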

Also, we'd love more questions like this in the marginal notes in the Guide, so it improves iteratively! If you (or anyone else) want early access to that feature, shoot an e-mail to josh@holloway.com

