I was surprised it pinged gpt-4o. I was expecting it to use something like https://github.com/apple/ml-fastvlm (obviously cost may have been a factor there), but I can see how the direction he chose would make it more capable of more complex behaviours in the future, e.g. adding additional tentacles for movement and so on.
Hi Boris, love working with Claude! I do have a question—is there a plan to have Claude 3.5 Sonnet (or even 3.7!) made available on ca-central-1 for Amazon Bedrock anytime soon? My company is based in Canada and we deal with customer information that is required to stay within Canada, and the most recent model from Anthropic we have available to us is Claude 3.
Mortgages in Canada are different than mortgages in the US in that they are full recourse. If the sale price during foreclosure doesn’t cover the costs, the banks can go after you personally for the balance. So you’d have to either a) leave the country, or b) declare bankruptcy. So, not exactly a risk-free option.
It is a risk-free option. The Chinese income isn't real. You can't meaningfully garnish the wages of a part-time casino dealer. They have no other assets to cover the potential shortfall.
You assume these are poor people; others assume they're quite wealthy and laundering untaxed income into assets in .ca. If the latter, they may even have a proportionate pile of assets to seize locally. I betcha you could restate your point to come across a little less jingoistic.
It's entirely plausible that you're not on the moral high ground you think you are. Calling other commenters jingoistic when you didn't read the article is a prime example.
Yes, this is accurate. I work at a lender that operates in both Canada and the US, and the credit bureau integrations were with totally different companies and had different APIs (even for Equifax on both sides of the border).
It's been a while but I don't even think that a credit score in Canada is comparable to a credit score in the US. Sure, an Equifax terminal in either country spits out a number in the same range but the formula and calculation are probably different, with different legal frameworks regarding the information contained therein.
Yes, but it’s partly the government’s fault. There’s literally no way for mortgage lenders to pull your tax records in order to verify income. There used to be some third party services to connect to the CRA but they got shut down and replaced with… nothing.
This looks amazing! Wind power is totally underutilized in the shipping industry, I've been waiting for something like this for ages.
I used to be part of a team back in university making autonomous sailboats [1] and one of the things that I was surprised by when working on this was that there are a TON of hurricanes out in the middle of the ocean (we were working to build it to cross the Atlantic). We built a system to take in weather prediction data to try to avoid hurricanes, but we were building a relatively tiny boat—do large shipping vessels do this as well? I'd assume they can sail through pretty bad weather. If so, do you have ways to lower the sails easily to protect them?
Additionally, do you have any software to help inform the vessel operators how to best sail into the wind or are the net savings not worth it considering most of the propulsion is still coming from fuel-based sources?
Lowering the sails will be critical to safe operation in bad weather. The deployment process will be easily reversible, so that within a few minutes you can go from full sail to fully stowed (or any place in between), likely with emergency settings to bring the sail down faster. We don't want to limit the weather a ship would sail in without diversion, but instead just make use of reasonable winds when they are present. We certainly will want to make future software for route planning assistance, but our first step will not require the ship to change course or speed to see benefits of the sails. It's certainly worth it overall to follow the wind, but for ease of adoption that can come later.
Large cargo vessels do try to avoid bad weather, even though they can sail through most of it. It's a crew happiness, risk, and loss avoidance concern (knowing these companies, probably mostly the latter two!).
I was on a cargo ship in the Pacific which diverted into the Bering Sea to avoid some weather instead of skirting just south of the Aleutian Islands as planned. The captain got orders via satellite from a land crew that's crunching the numbers of risk vs extra fuel costs at all times for the fleet.
The first mate was frustrated by how this all works. He said (English not being his first language): "This is terrible! We never get to decide anything for ourselves. We are like Muppets!". I think he meant "puppets"...
> Wind power is totally underutilized in the shipping industry
The shipping industry was 100% wind powered, with very mature technology developed and tried during centuries, and thousands upon thousands of experts in the area. Why do you think the whole industry switched to engines?
Predictability, speed, cheap fuel, a lack of understanding of climate change, etc.
We also used to have windmills to grind grain and then switched over to mills that use electricity or fossil fuels. But of course, windmills to generate electricity have become quite popular. What's old can be new again when combined with modern technologies.
The first thing I always do when I go into any new editor is set up the "Select All Instances of highlighted word". So like how there is cmd-d for the next occurrence, I also like having a shortcut to select _all_ occurrences. Would love for this to be added.
More generally, this looks really awesome and also really enjoyed your team's blog post detailing the GPUI implementation (even though large parts of it went over my head)
Yes, these are an absolute must :) Came here looking specifically to see if this was possible. The cool thing is that this is currently the only thing I've been missing after playing for a couple hours! I've tried vscode yearly to see if it had become tolerable, but alas. This is the first time I've tried something that might steal me away from Sublime.
This looks very cool, would love to see something like this take off - boilerplate code is such a hassle to stand up for each new project. Does this work with existing hand-made repos or is it mostly intended to work with projects that it generated?
I'm imagining it would be quite challenging to add in e.g. an auth system to an existing backend service and have it match the same coding paradigm as the rest of the app but maybe there'd be some way of annotating your code with comments to make it easier on this system to understand what it's looking at?
Suggestion: the PRs should be draft PRs, otherwise it'll probably be very noisy for teams.
Also, have you thought about doing any integrations with Replit? They are a yc company too and seem to be going all-in on AI and I think the bulk of their user-base is students and junior developers so there might be some synergy there.
Solid article. Is it fair to say that the "data services" layer is essentially a cache sitting in front of the database, or am I misunderstanding its function?
We use data services to do "data related things" that make sense to do at a central proxy layer. This may include caching/coalescing/other logic but it doesn't always, it really depends on the particular use case of that data.
For messages, we don't really cache. Coalescing gives us what we want, and the hot channel buckets will end up in memory on the database, which is NVMe-backed for reads anyway, so a cache wouldn't add much performance for this use case.
In other places, where a single user query turns into many database queries and we have to aggregate data, caching is more helpful.
In essence, yes, but strictly speaking, no. Instead of caching responses, that layer seems to only bundle equal requests.
So once a request is sent to the database, every other instance of the same request (e.g. "hey, fetch me all messages from server id 42") is put on hold. Once the initial request gets an answer from the database, that answer is distributed to the initial requester and all those which were on hold. Now if someone is late to the party, they will initiate a new request to the database, because the response is not cached.
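The coalescing behaviour described above can be sketched in a few lines. This is a hypothetical asyncio illustration, not Discord's actual implementation (their data services layer is a separate proxy service); the names `Coalescer`, `fetch`, and `fake_db` are made up for the example:

```python
import asyncio

class Coalescer:
    """Deduplicate identical in-flight requests: the first caller triggers
    the real fetch, and concurrent callers with the same key await that
    same result. Completed results are NOT cached, so a caller arriving
    after the request finishes triggers a fresh fetch."""

    def __init__(self, fetch):
        self._fetch = fetch    # the real (expensive) database call
        self._inflight = {}    # key -> asyncio.Task for requests in progress

    async def get(self, key):
        task = self._inflight.get(key)
        if task is None:
            task = asyncio.create_task(self._fetch(key))
            self._inflight[key] = task
            # once done, forget the task so later callers start fresh
            task.add_done_callback(lambda _t: self._inflight.pop(key, None))
        return await task

async def demo():
    calls = []

    async def fake_db(key):
        calls.append(key)      # one entry per real database query
        await asyncio.sleep(0.01)
        return f"messages for {key}"

    c = Coalescer(fake_db)
    # three concurrent identical requests coalesce into one database query
    results = await asyncio.gather(c.get(42), c.get(42), c.get(42))
    # a late arrival after completion triggers a new query (no caching)
    await c.get(42)
    return results, len(calls)

results, query_count = asyncio.run(demo())
```

The three concurrent `get(42)` calls share one `fake_db` query, while the fourth call, arriving after the first query completed, issues a second one — which matches the "late to the party" behaviour described above.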
I really like this, and this is a great article to share on HN :)