heh, I used to work on the data team at Shopify. I built something similar to search internal dbs for secret santa gifts based on some weird criteria. Scraping might have a large margin of error because a lot of products tend to be ephemeral.
Agreed on the large margin of error. I'm working on a bot to store the images and convert them to WebP to improve performance. I'm also having the bot check for any images that no longer exist and remove those listings; I'll likely need to triangulate this with a 404 check. I recently added an option for users to mark a product as "sold out" on the search results, which will help as well.
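A minimal sketch of the dead-listing check described above, assuming a plain HTTP HEAD request is enough. The names (`listing_is_dead`, `DEAD_STATUSES`) and the choice to treat 410 like 404 are my assumptions, not details from the actual bot:

```python
import urllib.request
import urllib.error

# Assumption: 404 ("not found") and 410 ("gone") both mean the listing is dead.
DEAD_STATUSES = {404, 410}

def is_dead_status(code: int) -> bool:
    """Keep the pruning rule a pure function so it is trivial to unit-test."""
    return code in DEAD_STATUSES

def listing_is_dead(url: str, timeout: float = 5.0) -> bool:
    """HEAD-check a product image URL; True means the listing should be pruned."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return is_dead_status(resp.status)
    except urllib.error.HTTPError as err:
        return is_dead_status(err.code)
    except OSError:
        # A transient network failure is not proof the listing is gone;
        # leave it for the next sweep rather than deleting it.
        return False
```

Triangulating, as described above, would then mean pruning only when both this check and the image-store lookup agree the listing is gone.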
Unrelated, but what was the "weird criteria" for the secret santa exchange? Half joking, but it also helps with figuring out filters :)
> Because of this, there’s more deviation from what was planned and designed to what was shipped and there’s less alignment across teams, so it’s harder to coordinate feature development.
Asking as an outsider: won't shipping a lot of things in this environment lead to a suboptimal product state? I'm used to coordinate > build > learn > iterate > ship, which, although it slows down gross feature development, tends to prevent the 60-80% of experiments that don't work from getting launched.
Does removing meetings to optimize throughput of feature development not get us into a feature-factory mindset? This isn't binary, btw, but I think it moves thinking more towards a build mindset vs a solve-the-problem mindset.
I mean, imagine what housing in Toronto would cost if they hadn't built those 100K+ condos? Price is set by supply and demand, and the article is saying that, holding demand constant, increasing supply lowers prices. Adding a marginal unit to the housing stock kinda has to result in lower or equal prices in the static case.
Toronto is growing insanely quickly, and supply can't meet demand. New cities are great, but a large number of people want to move to big and established population centres.
We are in violent agreement. Prices will keep increasing even with new condos, but that's still better than the situation where no new housing is built.
But the article specifically says "new homes reduce house prices." This is patently false.
Toronto is growing because there are no real business centres outside of Toronto for the 1 million immigrants a year who are coming in. We need to create viable options by expanding business outside of Toronto onto new land, so that housing demand lessens.
1. That is a very deterministic statement.
2. This is a part of the process, not the entire process. There are still technical elements tested during the interview.
3. The signal that they are looking for, but do not tell candidates, is a story about overcoming obstacles.
What I will say about the life story is that it aligns with the skill set required to do well in a corporate environment: namely, telling stories, being relatively interesting, and having some ability to sell yourself and your accomplishments (in addition to being technically competent, which is tested elsewhere).
“Yeah, we see here that you developed your own machine learning framework in your free time. That’s great and all, but jross225 didn’t find you interesting enough, so we’re going to have to pass, sorry.”
If you developed an ML framework in your free time and can't tell a compelling story about it in 45 minutes, then I probably don't want to work with you either.
In addition to being a highly volatile source of revenue for the government, you would have to give tax breaks for unrealized capital losses out of fairness, which would introduce a ton of complexity.
The US government already effectively does this to US citizens resident abroad who buy PFICs (which is why US citizens are strongly encouraged not to buy PFICs).
The reason it's problematic is that just because a person's assets have risen doesn't mean they want to sell, and if they don't sell, they still need liquid assets from somewhere to cover the tax bill. A tax of this nature forces people to sell assets, which is problematic. It's not problematic to say: you sold X assets at profit Y, so it's your responsibility to retain percentage Z of that to pay the taxes.
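The cash-flow squeeze can be made concrete with hypothetical numbers (the holding value, gain, and 20% rate below are illustrative assumptions, not figures from any actual proposal):

```python
# Hypothetical: an illiquid founder's stake gains 50% on paper in one year.
holding_value = 10_000_000          # start-of-year value of the stake
paper_gain = holding_value * 0.50   # $5M unrealized gain, zero cash received
tax_rate = 0.20                     # assumed unrealized-gains tax rate
tax_due = paper_gain * tax_rate     # $1M cash owed with no sale having happened

# Contrast with a realized-gains regime: tax is owed only when cash exists.
sold_for_profit = 0                 # nothing sold this year
realized_tax_due = sold_for_profit * tax_rate  # $0 owed

print(tax_due)           # 1000000.0 — must be raised by selling assets
print(realized_tax_due)  # 0.0
```

The $1M bill exists regardless of whether the holder has any cash, which is the forced-sale problem described above.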
I'm on a data team of 10, and after about 6 months of onboarding we have permission to push directly to main, pending passing tests. We generally do this for small changes, or changes where we are the context owner, with the caveat that reviews happen after the code is deployed, usually within a few days.
Personally, I like the process. It allows us to move quickly and focus review effort on the changes that actually need to block. We can still get reviews before pushing code when it makes sense (for large changes), but most (80%?) changes tend to be quite small.
That's an intriguing approach.
It makes sense that not all changes are equal and they don't require the same review process. (And it certainly makes sense to encourage small changes.)
What qualifies as a "small change"? Do you have some numbers to measure it? Or is it the developer's call?
Neat project though!