I’ve found there are two major mindset shifts that helped me start passing tech interviews consistently:
1. Study the algorithms and patterns, not the questions
2. Treat it like a serious investment: 2–3 months of focused prep, minimum
Most people skip the fundamentals. But these core patterns and data structures come up over and over. If you really understand them, you can solve almost anything.
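To make that concrete: the sliding-window pattern alone covers a whole family of array and string questions. Here's a minimal sketch in Python, using "longest substring without repeating characters" as one illustrative instance (the pattern, not this specific problem, is the thing worth internalizing):

```python
def longest_unique_substring(s: str) -> int:
    """Sliding window: expand the right edge, shrink the left on a repeat."""
    last_seen = {}  # char -> most recent index where it appeared
    best = left = 0
    for right, ch in enumerate(s):
        # If ch was seen inside the current window, jump left past it.
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```

Once the window invariant clicks ("everything between left and right is valid"), dozens of superficially different questions reduce to choosing what "valid" means.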
I used this exact approach to land offers from Google, Amazon, Uber, Airbnb and more, without a CS degree.
That experience led me to write this full breakdown of how to study for tech interviews the right way.
I work at Airbnb where I write 99% of my production code using LLMs. Spotify's CEO recently announced something similar, but I mention my employer not because my workflow is sponsored by them (many early adopters learned similar techniques), but to establish a baseline for the massive scale, reliability constraints, and code quality standards this approach has to survive.
Many engineers abandon LLMs because they run into problems almost instantly, but these problems have solutions. If you're a skeptic, please read and let me know what you think.
The top problems are:
1. Constant refactors (generated code is really bad or broken)
2. Lack of context (the model doesn’t know your codebase, libraries, APIs, etc.)
3. Poor instruction following (the model doesn’t implement what you asked for)
4. Doom loops (the model can’t fix a bug and tries random things over and over again)
5. Complexity limits (inability to modify large codebases or create complex logic)
In this article, I show how to solve each of these problems by using the LLM as a force multiplier for your own engineering decisions, rather than a random number generator for syntax.
A core part of my approach is Spec-Driven Development. I outline methods for treating the LLM like a co-worker having technical discussions about architecture and logic, and then having the model convert those decisions into a spec and working code.
LOL, honestly I hated Codex when it first came out. It was backed by o3 at the time.
But as soon as GPT-5 landed in Codex with the "high" reasoning option, I completely switched from Claude Code to Codex. Never imagined that would happen so fast.
> get the best results when the context window is right around 70%
I used to be trigger-happy with /compact, or use the hand-off technique to transfer knowledge between sessions with a doc. But lately the newer generation of models seems to handle long context pretty well, down to around 20% remaining context.
But that's when I'm working on the same focused task. I would instantly reset if I started implementing an unrelated task, even with 90% of the context left, since there's just no benefit to keeping the old context.
I would think it’s due to the non-determinism. Leaking context would be an unacceptable flaw, since many users rely on the same instance.
An A/B test is plausible but unlikely, since that is typically for testing user behavior. For testing model output, you can do that with offline evaluations.
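As a rough sketch of what an offline evaluation can look like (the function and scoring here are hypothetical, not from any particular eval framework): you replay a frozen prompt set against each model variant and score outputs against references, with no live traffic involved.

```python
def exact_match_eval(model_fn, cases):
    """Score a model offline against (prompt, expected) pairs.

    model_fn is any callable prompt -> output; in practice this would
    wrap an API call to the model variant under test.
    """
    hits = sum(1 for prompt, expected in cases
               if model_fn(prompt).strip() == expected.strip())
    return hits / len(cases)

# Hypothetical usage: a frozen test set scored against a stub "model".
cases = [("2+2=", "4"), ("capital of France?", "Paris")]
score = exact_match_eval(lambda p: {"2+2=": "4"}.get(p, ""), cases)
```

Because the prompt set is fixed, you can compare two model versions head-to-head without ever exposing users to the candidate, which is why output quality testing doesn't need a live A/B test.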
Studies on rats have shown significant similarities between sugar consumption and drug-like effects, including bingeing, craving, tolerance, withdrawal, dependence, and reward. Some researchers argue that sugar alters mood and induces pleasure in a way that mimics the effects of drugs such as cocaine. In certain experiments, rats even preferred sugar over cocaine, reinforcing the idea that sugar can strongly activate the brain's reward system.
This is somewhat intuitive when you consider that sugar is almost pure energy, and in the food-scarce existence we evolved for, energy was synonymous with survival. So alongside reproduction, consuming energy is probably one of the most basic desires we are hardwired to seek out, in more ways than one.