When I got multiple startup job offers, I realized how hard it was to project a realistic value for the equity. Guessing future valuations, dealing with dilution, and running through endless scenarios was a headache, so I built Comparator.
Comparator is a simple, free, open-source tool to help you cut through the complexity of startup compensation. Quickly see what your equity might actually be worth, factor in dilution, and easily compare your offers side by side. There are no signups, and your data never leaves the browser.
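For a sense of the underlying math, here's roughly the kind of dilution calculation the app runs. This is a simplified sketch for illustration; the function and numbers are mine, not Comparator's actual code:

```python
def diluted_stake(initial_pct: float, dilution_per_round: float, rounds: int) -> float:
    """Each funding round issues new shares, shrinking every existing stake."""
    stake = initial_pct
    for _ in range(rounds):
        stake *= 1 - dilution_per_round
    return stake

# A 0.5% grant through three rounds of ~20% dilution each:
stake = diluted_stake(0.005, 0.20, 3)
print(f"{stake:.4%}")                  # 0.2560%
print(f"${stake * 500_000_000:,.0f}")  # ~$1,280,000 at a $500M exit
```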
While I think it's good advice to live as if the equity is worth zero, treating all equity as if it's worth nothing seems a bit over-reductionist when equity packages can routinely be worth millions of dollars.
Obviously it's a crapshoot and should never be seen as a guarantee, but I think treating it as zero is a bit too far on the opposite extreme.
How did you get to equity packages being “routinely” worth millions when tech startups fail somewhere between 75% and >99% of the time (depending on estimates)?
Seems far more likely that startup equity will be worth zero for typical individual contributor employees, not millions.
Case in point: two years ago I interviewed at a number of places with mind-boggling valuations, and most of the places I got offers from either no longer exist or laid off half their staff. It's a lottery.
By your own measure, if startups fail 99% of the time, shouldn't one value $1M of equity as a $10k bonus? "Zero" does seem extreme; I agree with the sentiment that "it's less than you think," but if you get a lot of equity in a Series C startup, I wouldn't say that's equivalent to 0.
Probably not, because even in the 1% of cases where the startup succeeds, there is probably some gotcha. It will turn out that your shares were diluted, or that they aren't preferred shares like the ones your boss owns, etc. Unless you are an expert, you have no idea about the dozen ways you can be screwed even in the unlikely case that the startup succeeds.
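For example, a 1x liquidation preference alone can eat most of a modest exit. A toy sketch (numbers invented; it ignores participation, conversion, and other term-sheet details):

```python
def common_payout(exit_value: float, preference: float, common_stake: float) -> float:
    """Preferred shareholders get their preference first; common stock
    (what employees typically hold) splits whatever is left."""
    remaining = max(exit_value - preference, 0)
    return remaining * common_stake

# A "successful" $60M exit, after investors put in $50M with a 1x
# preference, leaves $10M for all common stock combined:
print(f"${common_payout(60e6, 50e6, 0.005):,.0f}")  # $50,000 on a 0.5% stake
```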
> just that it isn’t uncommon for options to be worth a lot.
You're deluding yourself here. On average, the vast, vast majority of equity options, _especially_ in the VC-backed tech world, turn out to be worth nothing for the employee.
You literally built a tool because there are so many variables, and in the majority of cases, all these variables do not align in a way that results in a payout.
This is almost the literal definition of "uncommon". It is uncommon for options to materialise into a large amount of value for employees.
I respect your tool, and I respect what you're doing. But you need to be honest with yourself and the rest of the world. If you want to help young or new people in this area, then don't perpetuate the myth that startup tech company options are statistically any better than a lottery ticket.
You'd be remiss to treat it as zero if the company is growing and has an IPO on its schedule. The uncertainty over equity reduces over time. Some people hop from pre-IPO company to pre-IPO company.
A lot of people are saying the business model doesn't justify a $1bn valuation (rightfully so), but I'm guessing the valuation wasn't based on their current business, but on the possibility that Cameo would become the new way of booking talent in the age of the internet.
They could have become a $1bn business if they had "revolutionized talent management" (or something like that). Not saying it was a good investment, or one I would have made, but I'm guessing they pitched a larger vision than simply a buttload of cameos from washed-up/reality TV stars.
To be fair, I also didn’t include the session layer!
Writing isn't a strength of mine, so I appreciate the criticism. My writing going from "bad" -> "is it AI?" is progress.
I struggled with where to cut off the explanation; public key cryptography seemed like a good boundary, and it's better explained elsewhere, as are the various OSI layers.
I probably should have gone over the cert and potentially the full chain of trust, I’ll give you that.
While I agree no one is rewriting history, it is potentially a big deal because it speaks to the biases introduced during training/RLHF. Considering this will be used by millions (if not tens of millions) of people, calling it a "silly toy" feels off.
Bias in the model can lead to bad outcomes in certain situations (hint: we have an election coming up).
Yes, this instance is innocuous, but it hints at the possibility of more damaging bias.
> We provide evidence for the Reversal Curse by finetuning GPT-3 and Llama-1 on fictitious statements such as "Uriah Hawthorne is the composer of 'Abyssal Melodies'" and showing that they fail to correctly answer "Who composed 'Abyssal Melodies?'". The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation.
This just proves that the LLMs available to them, with the training and augmentation methods they employed, weren't able to generalize. It doesn't prove that future LLMs, or novel training and augmentation techniques, will be unable to generalize.
No, if you read the article, it shows there were some issues with the way they tested.
> The claim that GPT-4 can't make B-to-A generalizations is false, and it's not what the authors were claiming. They were talking about these kinds of generalizations from pre- and post-training.
> When you divide data into prompt and completion pairs and the completions never reference the prompts or even hint at them, you've successfully trained a prompt-completion "A is B" model, but not one that will readily go from "B is A". LLMs trained on "A is B" fail to learn "B is A" when the training data is split into prompt and completion pairs.
Simple fix: put the prompt and completion together and compute gradients for the prompt tokens too, not just the completion. Or just make sure the model trains on data going in both directions by augmenting it before training.
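Concretely, in a Hugging Face-style fine-tuning setup, that's the difference between masking the prompt labels with -100 and leaving them in. A minimal sketch of both options (my own illustration, not the paper's code; `tokenizer` stands in for any causal-LM tokenizer):

```python
def build_example(prompt: str, completion: str, tokenizer, mask_prompt: bool):
    """Tokenize a (prompt, completion) pair into input_ids and labels."""
    prompt_ids = tokenizer(prompt)["input_ids"]
    completion_ids = tokenizer(completion)["input_ids"]
    input_ids = prompt_ids + completion_ids
    if mask_prompt:
        # Typical fine-tuning: positions labeled -100 are ignored by the
        # loss, so the model never learns to generate the prompt itself.
        labels = [-100] * len(prompt_ids) + completion_ids
    else:
        # Option 1: take the loss over the prompt tokens too.
        labels = list(input_ids)
    return {"input_ids": input_ids, "labels": labels}

def augment_both_directions(facts):
    """Option 2: add each fact in reversed form before training."""
    out = []
    for a, b in facts:
        out.append(f"{a} is {b}")
        out.append(f"{b} is {a}")
    return out

# e.g. augment_both_directions([("Uriah Hawthorne",
#     "the composer of 'Abyssal Melodies'")]) yields both directions.
```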
If you want to play around with OpenJourney (or any other fine-tuned StableDiffusion model), I made my own UI with a free tier at https://happyaccidents.ai/.
It supports all open-source fine-tuned models & LoRAs, and I recently added ControlNet.
Thanks for the insight. Either way, it would be nice of them to either open-source their solution or, if they are using someone else's, point us to which one they picked.
Can't speak for OP, but I recently bought a pair of Sony WH-1000XM4 headphones and I'm pretty impressed. They don't make me feel the pressure in my ear like some do, and the noise suppression is just about magic. My wife has to text me from downstairs to tell me my kids are fighting even when it's just outside the door to my office, because I can't hear them at all (and I don't turn up the volume on my music). Just turning them on without any music makes it hard to understand someone talking in the same room.
Which do you prefer? I’ve been rocking the Bose QC35 II’s for years but am wondering if in-ear solutions can compete now for commuting on foot (metro, buses, etc.)
I loved my XM3s. They eventually broke, and I replaced them with the AirPods Max as I'm all-in on Apple gear. Returned those within a week: the weight, the size, and the ANC side effects made them unusable for all-day wear. Some people love them, but I was disappointed. I'm back to the XMs, with newfound appreciation.
You can try both for a week, and return the set that doesn’t fit into your life. Both Apple and Sony can handle a refund.
I tried to use Slate.js initially, but I found the project to be slow (it was using ImmutableJS back then), and even though it claims to support collaboration, it doesn't actually, which was a deal-breaker for me.
Slate.js, even after the change to Immer, is slow. IMHO (as a person who actively observes the development and sometimes participates in their Slack), performance is not taken seriously in this project. In the last few months they landed a few PRs that improved some cases while breaking others. I am impressed by how many projects are using it [0], because it has trouble handling editing and pasting of huge documents. I also see many PRs from the community focused on optimization, but they are ignored, stalled, or prematurely closed. It also does not handle IME properly, which is a major problem for many languages. However, the maintainers have started to be more active, so the problems I have mentioned might be fixed soon.
I think this is one of those things where people overestimated how much things would change in the short term, but will grossly underestimate how much they will change in the long term.
5 years is a pretty short window in the grand scheme of things when talking about the adoption of technologies.
Gold has been in use for hundreds of years, and yet it's still very volatile. If Bitcoin is digital gold, that doesn't really solve the volatility problem.
Check out the app here: https://comparator-one.vercel.app
Check out the code here: https://github.com/DevonPeroutky/comparator