Usually there are nuggets of wisdom in lists shared like this but I feel like every lesson shared here has immense value.
> "remain skeptical of your own certainty"
> "Model curiosity, and you get a team that actually learns."
These are two lessons that typically require battle scars to learn. For such big ideas to be summed into two sentences is pretty remarkable and puts to words lessons I wish I knew how to share. Amazing article, thanks for sharing!
I was going to skip the article until I read your comment, and wow! You’re totally right - my hard-won understanding is there, including things I sort of knew but couldn’t put into words before. Going to share this with my adult kids.
same, usually: i read lists like this and see flawed or hackneyed tripe.
But these ones are actually true, and anyone who has had a long career leading people and product will find that many of them resonate.
It seems like this might be one of the biggest vulnerabilities in recent times...
The default React / Next.js configurations being vulnerable to RCE is pretty insane. I think platform-level protections from Vercel / Cloudflare are very much showing their utility now!
if it increases topline metrics like watch time it's probably hard for them to justify removing it. a change this big seems like it was probably a/b tested and did move metrics significantly?
Probably. We keep watching all kinds of stuff after getting baited into it. AI slop is annoying, but we do want to know what chefs do about sticky pizza dough, or what that secret in the pyramids is, or how the kid reacted to what the cat did, or (insert your guilty pleasure here).
on some platforms I try to be really good about hitting the "Never recommend this channel/page/whatever again" whenever the algo serves me the bottom-tier gutter trash videos, such as the "idiotic life hack that obviously won't work" engagement bait. It's a small drop in the ocean, but at least that one channel will never be served to me again.
Thank you for the pointer. I was struggling with Nanobanana for editing an image which it had created earlier, but Reve gave me the edit result exactly the way I wanted in the first pass.
My use case: an image of a cartoon character, holding an object and looking at it. I wanted to edit it so that the character no longer has the object in her hand and is now looking towards the camera.
Result Nanobanana: At the first pass it only removed the object the character was holding; there was no change in her eyeline, and she was still looking down at her now-empty hand. A second prompt explicitly asked to change the eyeline to look at the camera: unsuccessful. A third attempt asked for the character to look towards the ceiling: success, but an unusable edit, since I wanted the character looking at the camera.
Result Reve: At the first attempt it gave me 4 options, and all 4 are usable. It not only removed the object and changed the character's eyeline to look at the camera, it also adjusted her posture so that the empty hands were appropriately positioned. And since the character is now in a different situation (sans the object that was holding her attention), Reve posed her in different ways that were very appropriate, which I hadn't thought to prompt for earlier (maybe because my focus was on the immediate need: object removal and the eyeline change).
On a little more digging I found this writeup, which will make me sign up for their product.
always wondered at what scale gossip / SWIM breaks down and you need a hierarchy / partitioning. fly's use of corrosion seems to imply it's good enough for a single region which is pretty surprising because iirc Uber's ringpop was said to face problems at around 3K nodes.
it would be super cool to learn more about how the world's largest gossip systems work :)
SWIM is probably going to scale pretty much indefinitely. The issue we have with a single global SWIM broadcast domain isn't that the scale is breaking down; it's just that the blast radius for bugs (both in Corrosion itself, and in the services that depend on Corrosion) is too big.
We're actually keeping the global Corrosion cluster! We're just stripping most of the data out of it.
From back-of-napkin math I’ve done previously, it breaks down around 2 million members with HashiCorp's defaults. The defaults are quite aggressive, though, and if you can tolerate seconds of latency (called out in the article) you could reach billions without a lot of trouble.
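To make that latency trade-off concrete, here's my own toy estimate (not HashiCorp's actual numbers): treating dissemination as an epidemic where every member that has heard an update re-gossips it each round, coverage grows roughly exponentially, so full propagation takes about log_fanout(N) rounds. The fanout of 3 and 200 ms gossip interval below are assumed memberlist-style defaults.

```python
import math

# Toy epidemic-dissemination estimate: each round, every member that has
# heard an update re-gossips it to `fanout` random peers, so coverage grows
# roughly exponentially and full propagation takes ~log_fanout(N) rounds.
# fanout=3 and a 200 ms gossip interval are assumed memberlist-style defaults.
def gossip_estimate(n_members, fanout=3, interval_s=0.2):
    rounds = math.ceil(math.log(n_members, fanout))
    return rounds, rounds * interval_s

for n in (3_000, 2_000_000, 1_000_000_000):
    rounds, latency = gossip_estimate(n)
    print(f"{n:>13,} members: ~{rounds} rounds, ~{latency:.1f}s to propagate")
```

Even at a billion members that's only ~19 rounds, a few seconds of propagation, which lines up with the "seconds of latency" caveat; the sketch ignores packet-size limits, failure-detection traffic, and anti-entropy syncs, so treat it as an order-of-magnitude estimate only.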
It's also about the frequency of changes and the granularity of state when sizing workloads. My understanding is that most Hashi shops would federate workloads of our size/global distribution; it would be weird to try to run one big cluster to capture everything.
From the literal conversation I'm having right now, 'try to run one big cluster to capture everything' is our active state. I've brought up federation a bunch of times and it's fallen on deaf ears. :)
We are probably past the size of the entirety of fly.io, for reference, and maintenance is very painful. It works because we are doing really strange things with Consul (batch txn cross-cluster updates of static entries) on really, really big servers (4 Gbps+ filesystems, 1 TB of memory, hundreds of big, fast cores, etc.).
“Who orchestrates the orchestrators?” is the question we’ve never answered at HashiCorp. We tried expanding Consul’s variety of tenancy features, but if anything it made the blast radius problem worse! Nomad has always kept its federation lightweight, which is nice for avoiding correlated failures… but we also never built much cluster management into federated APIs. So handling cluster sprawl is an exercise left to the operator. “Just rub some terraform on it” would be more compelling if our own products were easier to deploy with Terraform! Ah well, we’ll keep chipping away at it.
in the super public consumer space, search engines / answer engines (like chatgpt) are the big ones.
on the other hand it's also led to improvements in many places hidden behind the scenes. for example, vision transformers are much more powerful and scalable than many other computer vision models, which has probably enabled new capabilities.
in general, transformers aren't just for "generate text"; they're a new foundational model architecture that enables a leap forward in many things which require modeling!
Transformers also make for a damn good base to graft just about any other architecture onto.
Like, vision transformers? They seem to work best when they still have a CNN backbone, but the "transformer" component is very good at focusing on relevant information, and doing different things depending on what you want to be done with those images.
And if you bolt that hybrid vision transformer to an even larger language-oriented transformer? That also imbues it with basic problem-solving, world knowledge and commonsense reasoning capabilities - which, in things like advanced OCR systems, are very welcome.
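To make the hybrid idea concrete, here's a minimal sketch (assuming PyTorch; the backbone depth, patch grid, and mean-pool head are illustrative choices, not any particular published model): a small CNN produces a feature map, each spatial position becomes a token, and a transformer encoder attends over those tokens before a classification head.

```python
import torch
import torch.nn as nn

class HybridViT(nn.Module):
    """Toy hybrid: a small CNN backbone produces a feature map whose
    spatial positions become the token sequence for a transformer encoder."""
    def __init__(self, num_classes=10, d_model=128, nhead=4, depth=2):
        super().__init__()
        # CNN backbone: 3x224x224 image -> d_model x 14 x 14 feature map
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),    # 112x112
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 56x56
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 28x28
            nn.Conv2d(128, d_model, 3, stride=2, padding=1),        # 14x14
        )
        # Learned positional embedding, one per spatial position (14*14 tokens)
        self.pos = nn.Parameter(torch.zeros(1, 14 * 14, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                   # (B, d_model, 14, 14)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, 196, d_model)
        tokens = self.encoder(tokens + self.pos)   # attention over patch tokens
        return self.head(tokens.mean(dim=1))       # pool tokens and classify

model = HybridViT()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```

Swapping the toy conv stack for a pretrained backbone, or feeding the tokens into a larger language-oriented transformer instead of a linear head, is the same grafting move described above.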
I think the conclusion isn't as simple as "foundation model companies will just build the features of all downstream products" because focus and priorities play a big part.
If that were the case, then (to take a simple example) much of the software services we see today, which provide real tangible value, wouldn't exist, since theoretically they're just updating data in a database.
Cannot +1 this enough! Joining a team you respect and seeing how they operate gives you a really good baseline to work off of: take what you like and modify what you disagree with.
You'd be surprised how many times you can "iterate and fail quickly" only to end up at an established practice some other shop has been doing for years. It is important, however, to understand the why behind those decisions; otherwise you're no better off than if you'd just figured it out yourself.
Honestly, this is a super respectable culture. I would think many leaders want to build teams with this mindset but find it incredibly difficult for one reason or another.
> "remain skeptical of your own certainty" > "Model curiosity, and you get a team that actually learns."
These are two lessons that typically require battle scars to learn. For such big ideas to be summed into two sentences is pretty remarkable and puts to words lessons I wish I knew how to share. Amazing article, thanks for sharing!
reply