No, I'm not asking you to spend $150; I'm providing the evidence you're looking for. Mayo Clinic, probably one of the most prominent private clinics in the US, is using transformers in its workflow, and there are many other similar links you could find online, but you choose to remain ignorant. Congratulations
> For the same reason ReasonML took years to overtake fartscroll.js in the number of stars on GitHub.
Wow, that's a fascinating statistic! Certainly puts the popularity delta into a different light.
On a separate note, the fartscroll.js demo page[0] no longer works, since code.onion.com is no longer reachable. Truly disappointing. Luckily their release zip contains an example.html!
Why does "a Grok button on every post to real time fact check it" increase your trust, given the obvious and open control Musk has over it? When Grok disagreed with him, he kept saying they'd "fix" it, and that's not to mention that infamous "white genocide" issue. It's undeniable that Musk is using his control to align Grok with his own opinions.
How does that not decrease your trust? I can't understand the thought process.
Because when I take the time to spot-check it more deeply, it's usually pretty accurate and balanced. Having it built right in and free to use makes it convenient.
I don't do this all the time, because that would take forever, but every month or so I'll do a deep dive on the sources of something I'm reading about. I strongly recommend that people do this periodically, especially on topics where you catch yourself having a strong reaction, such as anger or immediate validation of your viewpoint.
Are you checking general topics, or also specifically ones that Musk has "fixed"?
My concern wouldn't so much be that general information is incorrect, but that anything Musk has opinions on - which seems to be a great number of topics, many of which are completely detached from his companies etc. - has an unacceptable chance of being deliberately manipulated. This is easy to spot when he tries to convince people of his "white genocide" narrative, but we don't know which other topics he's "fixed", and you say yourself that you don't verify all the time.
How do you know you're not being fed another "white genocide" when you don't verify? I wouldn't be as concerned about other AIs, because we haven't seen manipulation as explicit as we've seen with Grok, which seems to be built specifically to distribute Musk's opinions.
I'm 45 years old and I treat every bit of news I read on the internet or otherwise with a large glass of skepticism, whether it came from an AI or anywhere else. My default assumption is that whoever is reporting is pushing an agenda; seldom misreporting facts but often leaving out context that affects framing.
I appreciate Community Notes and Grok for the closest thing to a real-time ability to call it out that exists.
My default AI query on any topic or story is, "Please validate the details of this story and compare to other sources to identify any critical information from other publications that was missing from this source. Highlight those differences."
It gives me a validation and a comparison, and helps me identify the bias/context framing that's going on pretty quickly. I haven't seen many AI sources that can fact-check things in real time like Grok can, like the Maduro news the other day.
> I would assume so. It's sort of a catch 22 because if they delete your data, they have no way of knowing about you when they buy another batch of data. To have some sort of no track list, they have to keep your data.
If I ever stumble upon such an obvious oversight/loophole, I find it's best to not immediately stop, but to ask: "How do they intend to solve this?"
In this case, the first part of the terms of use solves your conundrum:
> By submitting a deletion request through DROP, you consent to disclosure of your personal information to data brokers for purposes of processing your deletion request pursuant to Civil Code section 1798.99.80 et seq. unless or until you cancel your deletion request. Additionally, you acknowledge that data brokers receiving your deletion request will delete any non-exempt "personal information," as defined in Civil Code section 1798.140(v), which pertains to you and was collected from third parties or from you in a non-"first party" capacity (i.e., through an interaction where you did not intend or expect to interact with the data broker).
Do you have a primary source showing that "the party who supports raising wages also supports near limitless spending on the less fortunate countries"?
That doesn't help with Pylance and similar extensions. Microsoft implemented checks to verify the extension is running in VS Code, you have to manually patch them out of the bundled extension code (e.g. like this[0], though that probably doesn't work for the current versions anymore).
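For illustration, here's a minimal sketch of such a patch. The marker string is entirely hypothetical: the actual check in Pylance's bundle differs by version, and `isOfficialVSCode()` below is a made-up placeholder, not the real symbol.

```python
from pathlib import Path


def patch_bundle(bundle: Path, needle: str, replacement: str) -> bool:
    """Replace `needle` with `replacement` in a bundled extension file.

    Returns True if the file was modified. `needle` is a made-up
    placeholder here; the real check differs by extension version.
    """
    text = bundle.read_text(encoding="utf-8")
    if needle not in text:
        return False  # marker not found; nothing to patch
    bundle.write_text(text.replace(needle, replacement), encoding="utf-8")
    return True


# Hypothetical usage: force the (made-up) environment check to pass.
# patch_bundle(Path("dist/extension.bundle.js"), "isOfficialVSCode()", "true")
```

Note that the extension's signature verification or an update can undo or reject such a patch, so this tends to break on every release.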
This criticism seems like a complete non sequitur to me. They didn't claim that Shopify, GitHub, and Stack Overflow scaled to millions of users with 4 engineers each. Is the implication that, because Netflix and those companies both had to hire more engineers to scale, the choice between monoliths and microservices has no impact on a 4-person team? I genuinely don't understand what you're trying to imply.
In my experience, microservices do introduce additional fixed costs compared to monoliths (and those costs can be prohibitive for small teams), so everything you've quoted makes complete sense.
In the interest of helping you understand what I was saying, the two quotes are completely contradictory (even if the base argument is correct/valid).
The first one says we shouldn't follow Netflix's example because it is a massive company with an enormous team. The second one says we should follow the example of these companies instead, while ignoring that they are also a huge company with a massive team.
So the criticism/joke stems from the logical inconsistency between the two. Dismissing microservices with a rant about Netflix's scale while lauding monoliths with examples of companies of similar scale undermines team scale as a reason to pursue either alternative. Dealing with such a person in management is common: they often contradict their own reasoning and pick whatever they fancy at the time. You cannot argue logically when decisions rest on subjective rather than objective standards, where you can be wrong about one thing while they are right about the same thing.
That's why it seems like the person making the decisions is lost in terms of the choices they're making.
Interesting. If I only look at the lines you quoted I can see how you arrive at your interpretation of those two quotes. But if I read them in the industry context, they are a concise response against common arguments for microservices. I'll explain the line of argumentation as I understand it.
- We know that full rewrites are expensive & can kill growing companies, so it's best to start with an architecture that you can keep as you scale
- Common argument for microservices: They scale best, look at Netflix etc.
- Counter argument 1: Netflix has a large team, and microservices add fixed complexity that can kill small teams
- Counter argument 2: Monoliths can also scale (see examples)
That was my initial understanding as someone who has had these discussions before. I don't think I'm adding any arguments, my first point is pretty much universally accepted and known. The author is just assuming a certain level of industry knowledge.
The problem is the article isn’t coherent around this point, because it uses scale vaguely. If you look at the pitch, the thing they focus on is _failure domain isolation_, but then the article immediately pivots to how attractive scaling is. Failure domain isolation doesn’t contribute to scale in the performance sense; it can tenuously be tied to scaling teams, but that wasn’t part of the pitch.
In fact, I don’t think “scale” is ever part of the pitch for microservices. Independent scaling, maybe, if you have some particular hot spot. But the real pitch for microservices is, and always has been, about isolation: isolating failure domains, teams, and change management. That’s been the story since the Bezos letter, and if the leadership didn’t understand that, it’s a leadership skill issue, not an architectural problem.
So this is a story about bad technical leadership, not a particular architecture. And if anything the initial pitch by the architect is the most technically valid leadership in the story (as poor as it is). They failed to understand the problem space but at least they identified what problem the architecture would solve. The rest of engineering leadership did the classic pointy haired boss thing of not listening and hearing what they wanted. They paid for it.
> That was my initial understanding as someone who has had these discussions before. I don't think I'm adding any arguments, my first point is pretty much universally accepted and known. The author is just assuming a certain level of industry knowledge.
Or they're just bad at communicating and likely decision-making as well. I would say you're giving the author too much credit to be honest but I get your point. It's a poorly-written article in general imo.
It's amazing what modern hardware can do when used correctly.
Consider moving to microservices only AFTER reasonable algorithms on commodity bare metal hit real capacity limits. There's still higher-spec bare metal to carry such designs through a refactor/expansion targeted at where the performance bottlenecks are. And even short of literal microservices, partitioning/sharding can spread out many of the pain points.
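As a concrete illustration of the sharding idea, here's a minimal sketch of a hash-based shard router (all names are hypothetical):

```python
import hashlib


def shard_for(key: str, num_shards: int) -> int:
    """Deterministically map a key to one of `num_shards` partitions."""
    # Use a stable cryptographic hash so the mapping survives restarts
    # (Python's built-in hash() is salted per process).
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards


# Route user records across 4 database shards by user id.
shards: list[list[str]] = [[] for _ in range(4)]
for user_id in ("alice", "bob", "carol", "dave"):
    shards[shard_for(user_id, 4)].append(user_id)
```

Each shard can then live on its own box, spreading the hot spots without any service decomposition.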
Where does "large use" of LLMs in medicine exist? I'd like to stay far away from those places.
I hope you're not referring to machine learning in general, as there are worlds of difference between LLMs and other "classical" ML use cases.