Hacker News | past | comments | ask | show | jobs | submit | MediumD's comments

When I got multiple startup job offers, I realized how hard it was to project a realistic value for the equity. Guessing future valuations, dealing with dilution, and running through endless scenarios was a headache, so I built Comparator.

Comparator is a simple, open-source tool to help you cut through the complexity of startup compensation: quickly see what your equity might actually be worth, factor in dilution, and easily compare your offers side by side. It's completely free, requires no signup, and your data never leaves the browser.
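For anyone curious what "factoring in dilution" involves, here is a rough sketch (in Python, with entirely made-up numbers, not Comparator's actual code) of the expected-value arithmetic a tool like this has to automate: ownership fraction, dilution across future rounds, exercise cost, and the probability of an exit at all.

```python
# Hypothetical sketch of the equity math: expected value of an option
# grant after dilution, exercise cost, and exit probability.
# All numbers below are invented assumptions for illustration.

def equity_value(num_options, total_shares, exit_valuation,
                 strike_price, dilution_per_round, future_rounds,
                 p_exit):
    """Rough expected value of an option grant at exit."""
    # Start from the ownership fraction the grant represents today.
    ownership = num_options / total_shares
    # Each future funding round dilutes existing holders.
    ownership *= (1 - dilution_per_round) ** future_rounds
    # Payout if the exit happens, net of the cost to exercise.
    gross = ownership * exit_valuation
    exercise_cost = num_options * strike_price
    payout = max(gross - exercise_cost, 0)
    # Weight by the probability the exit happens at all.
    return p_exit * payout

# 10k options out of 10M shares, $500M exit, $2 strike,
# 20% dilution in each of 2 future rounds, 10% odds of exiting.
print(equity_value(10_000, 10_000_000, 500_000_000, 2.0, 0.20, 2, 0.10))
# Roughly $30k of expected value under these assumptions.
```

The point of a tool is that every one of those parameters is a guess, so being able to vary them across scenarios matters more than any single number.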

Check out the app here: https://comparator-one.vercel.app

Check out the code here: https://github.com/DevonPeroutky/comparator


> figure out the real value behind the equity.

Zero. Equity is a bonus in case things work out. But for the purpose of deciding on offers - zero.


While I think it’s good advice to live as if the equity is worth zero, treating all equity as if it’s worth nothing seems a bit over-reductionist when equity packages can routinely be worth millions of dollars.

Obviously it’s a crapshoot and should never be seen as a guarantee, but I think treating it as zero goes a bit too far toward the opposite extreme.


How did you get to equity packages being “routinely” worth millions when tech startups fail somewhere between 75% and >99% of the time (depending on estimates)?

Seems far more likely that startup equity will be worth zero to typical individual-contributor employees, not millions.


Case in point: 2 years ago I interviewed at a number of places with mind-boggling valuations, and most of the places I got offers from either no longer exist or laid off half their staff. It’s a lottery.


Lottery tickets still have value.


Usually negative value.


By your own measure, if startups fail 99% of the time, shouldn't one value a $1M equity package as a $10k bonus? "Zero" does seem extreme. I agree with the sentiment that "it's less than you think," but if you get a lot of equity in a Series C startup, I wouldn't say that's equivalent to 0.


Probably not, because even in the 1% of cases where the startup succeeds, there is probably some gotcha. It will turn out that your stake was diluted, or that your shares aren't preferred shares like the ones your boss owns, etc. Unless you are an expert, you have no idea about the dozen ways you can be screwed even in the unlikely case that the startup succeeds.
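To make the preferred-shares gotcha concrete: with a liquidation preference, investors get their money back before common stock (what employees usually hold) sees anything. A toy sketch with invented numbers:

```python
# Toy model (invented numbers) of a 1x non-participating liquidation
# preference: preferred investors are paid back first, and common
# shareholders split only what is left over.

def common_payout(exit_price, total_preference, common_fraction):
    """Payout to common holders owning `common_fraction` of what remains."""
    leftover = max(exit_price - total_preference, 0)
    return leftover * common_fraction

# Suppose $60M was raised with a 1x preference, and employees
# collectively hold 10% of the common stock:
print(common_payout(50_000_000, 60_000_000, 0.10))   # a $50M "exit": employees get 0
print(common_payout(200_000_000, 60_000_000, 0.10))  # a $200M exit: employees split $14M
```

So even a sale for tens of millions can leave employee options worth exactly nothing, which is the kind of fine print that's easy to miss when comparing offers.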


Of course most startups fail, and most equity is worth nothing.

I guess I didn’t think “routinely” implied a specific percentage, just that it isn’t uncommon for options to be worth a lot.

If even 5–10% of VC startups succeed, then it’s still worth considering the expected value of the equity when comparing job offers.


> just that it isn’t uncommon for options to be worth a lot.

You're deluding yourself here. On average, the vast, vast majority of equity options, _especially_ in the VC-backed tech world, turn out to be worth nothing for the employee.

You literally built a tool because there are so many variables, and in the majority of cases, these variables do not align in a way that results in a payout.

This is almost the literal definition of "uncommon". It is uncommon for options to materialise into a large amount of value for employees.

I respect your tool, and I respect what you're doing. But you need to be honest with yourself and the rest of the world. If you want to help young or new people in this area, then don't perpetuate the myth that startup tech company options are statistically any better than a lottery ticket.


You'd be remiss to value it at zero if the company is growing and has an IPO timeline. The uncertainty around equity decreases over time; some people hop from pre-IPO company to pre-IPO company.


A lot of people are saying the business model doesn't justify a $1bn valuation (rightfully so), but I'm guessing the valuation wasn't based on their current business; it was based on the possibility that Cameo would become the new way of booking talent in the age of the internet.

They could have become a $1bn business if they had "revolutionized talent management" (or something like that). Not saying it was a good investment, or one I would have made, but I'm guessing they pitched a larger vision than simply a buttload of cameos from washed-up/reality TV stars.


To be fair, I also didn’t include the session layer!

Writing isn’t a strength of mine, so I appreciate the criticism. My writing going from “bad” -> “is it AI?” is progress.

I struggled with where to cut off the explanation, and public-key cryptography seemed like a good boundary, since it’s better explained elsewhere, as are the various OSI layers.

I probably should have gone over the cert and potentially the full chain of trust, I’ll give you that.


While I agree no one is rewriting history, it is potentially a big deal because it speaks to the biases present when training/RLHF-ing. Considering this will be used by millions (if not tens of millions), calling it a “silly toy” feels off.

Bias in the model can lead to bad outcomes in certain situations (hint: we have an election coming up)

Yes, this is innocuous, but it does hint at the possibility of more damaging bias.


> We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of 'Abyssal Melodies'" and showing that they fail to correctly answer "Who composed 'Abyssal Melodies?'". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation.

This just proves that the LLMs available to them, with the training and augmentation methods they employed, aren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.


No, if you read this article it shows there were some issues with the way they tested.

> The claim that GPT-4 can’t make B to A generalizations is false. And not what the authors were claiming. They were talking about these kinds of generalizations from pre and post training.

> When you divide data into prompt and completion pairs and the completions never reference the prompts or even hint at it, you’ve successfully trained a prompt completion A is B model but not one that will readily go from B is A. LLMs trained on “A is B” fail to learn “B is A” when the training data is split into prompt and completion pairs

Simple fix: put the prompt and completion together, and compute gradients not just for the completion but also for the prompt. Or just make sure the model trains on data going in both directions by augmenting it before training.
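As a sketch, the augmentation half of that fix could look like the following (a hypothetical data format, not the paper's actual pipeline): for every "A is B" fact, also emit the reversed "B is A" statement so the model sees both directions during training.

```python
# Sketch of the two fixes described above (hypothetical data format):
# 1) augment the training data with reversed statements, and
# 2) train on the full sequence rather than masking out the prompt.

def augment_with_reversals(facts):
    """facts: list of (subject, relation, reversed_relation, obj) tuples."""
    examples = []
    for subj, rel, rev, obj in facts:
        examples.append(f"{subj} {rel} {obj}.")   # "A is B"
        examples.append(f"{obj} {rev} {subj}.")   # "B is A"
    return examples

facts = [("Uriah Hawthorne", "is the composer of", "was composed by",
          "'Abyssal Melodies'")]
for ex in augment_with_reversals(facts):
    print(ex)

# For fix (2), in a typical causal-LM fine-tuning setup you would keep
# loss on the prompt tokens too (e.g. not setting their labels to the
# ignore index), so gradients flow through the whole statement.
```

Whether either trick fully removes the effect at scale is an open question; the point is that the "curse" depends heavily on how the training examples are constructed.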

https://andrewmayne.com/2023/11/14/is-the-reversal-curse-rea...


*Shameless Plug*

If you want to play around with OpenJourney (or any other fine-tuned Stable Diffusion model), I made my own UI with a free tier at https://happyaccidents.ai/.

It supports all open-source fine-tuned models & LoRAs, and I recently added ControlNet.


I no longer work at Brex, but I believe they did this using Kotlin rather than Elixir.


Thanks for the insight. Either way, it would be nice of them to open-source their solution (or, if they're using someone else's, point us to which one they picked).


Which headphones did you try? That sounds amazing.


Can't speak for OP, but I recently bought a pair of Sony WH-1000XM4 headphones and I'm pretty impressed. They don't make me feel the pressure in my ear like some do, and the noise suppression is just about magic. My wife has to text me from downstairs to tell me my kids are fighting even when it's just outside the door to my office, because I can't hear them at all (and I don't turn up the volume on my music). Just turning them on without any music makes it hard to understand someone talking in the same room.


I have these same ones and the AirPods Max as well. Both are really amazing for focus, deep thought, and traveling.


Which do you prefer? I’ve been rocking the Bose QC35 IIs for years but am wondering if in-ear solutions can compete now for commuting on foot (metro, buses, etc.).


I loved my XM3s. They eventually broke, and I replaced them with the AirPods Max as I’m all-in on Apple gear. Returned those within a week: the weight, the size, and the ANC side effects made them unusable for all-day wear. Some people love them, but I was disappointed. I’m back to the XMs, with newfound appreciation.

You can try both for a week, and return the set that doesn’t fit into your life. Both Apple and Sony can handle a refund.


> traveling

For sure. That moment when you power them up on the airplane is heavenly.


What's the magnitude of improvement over the standard option (Bose QuietComfort)?


I have the XM3; my understanding is that its noise-cancelling performance is similar to the XM4's.


AirPods Max noise cancellation is pretty stellar.


This looks cool! What would be the biggest motivation for using this over something like slateJS?


I tried to use SlateJS initially, but I found the project to be slow (it was using ImmutableJS back then), and even though it claims to support collaboration, it doesn't actually, which is a deal-breaker for me.


Slate.js, even after the change to Immer, is slow. IMHO (as a person who actively observes its development and sometimes participates in their Slack), performance is not taken seriously in this project. In the last few months they provided a few PRs that improved a few cases while breaking others. I am impressed by how many projects use it [0], because it has problems handling editing and pasting of huge documents. I also see many PRs from the community focused on optimization, but they are ignored, stalled, or prematurely closed. It also does not handle IME properly, which is a major problem for many languages. However, the maintainers have started to be more active, so all the problems I have mentioned might be fixed soon.

[0] For example Kitemaker https://blog.kitemaker.co/building-a-rich-text-editor-in-rea... (AFAIK they use v0.47).

[1] Edit: The TinyMCE team is focused on building their editor on top of Slate - https://www.tiny.cloud/blog/real-time-collaborative-editing-...


Slate.js is React-only IIRC, although I believe someone made a draft PR to make it work outside of React.


There is a fork (rewrite) in Vue.js [0].

[0]: https://github.com/marsprince/slate-vue


I think this is one of those things where people overestimated how much things would change in the short term but will grossly underestimate how much they will change in the long term.

5 years is a pretty short window in the grand scheme of things when talking about the adoption of technologies.


Gold has been in use for thousands of years, and yet it's still very volatile. If Bitcoin is digital gold, that doesn't really solve the volatility problem.


I don't think people are arguing bitcoin will be less volatile than gold, but BTC is currently still an order of magnitude more volatile than gold.

Point being that even with gold's volatility it is still seen by pretty much everyone as a viable store of value, and I think BTC will be similar.

