IBM's market cap is $225B; Microsoft's is $2.9T. IBM literally lost its market to Microsoft in the '80s and '90s specifically because it was too focused on enterprise...
Microslop captured a much bigger market before pivoting to B2B as its focus. It could make that shift because it feels it's so entrenched in society that not much is needed to maintain the safe revenue stream.
Any article about biodegradable plastics should start with advantages over cellophane/cellulose.
People figured out how to make it a hundred years ago, it's already used for food packaging, its properties are well known, and it's abundant and cheap - made from trees and other plants.
The article starts as if this is some unheard-of breakthrough miracle. I can literally just buy compostable bags for organic waste made of corn starch on Amazon. It's already a product.
Journalists demonstrate less awareness than an 8B LLM. A scientist tells you about a new plastic? Ask them how it's better than what's already on the market.
It's not that the "journalist" didn't think to ask; it's that this is a PR piece sent out to media outlets by the university that did the research. Nearly all universities have a PR team that sends fluff pieces out to the media to promote the work of the university.
The person who wrote this is being paid not to ask tough and important questions around this research.
My understanding is that cellophane generally does biodegrade in most settings. Polylactic acid (those cornstarch-derived bags) mostly biodegrades in hot enough compost or (after several years) in ambient-temperature soil, but not very well in cooler water (One study: "The half-life period of degradation [of polylactic acid in artificial seawater] is 12 [days at 90° C] or 468 days [at 60° C]").
Those temperatures are certainly hard to find in nature, outside of hot springs! Even if this is an error and we are talking about 90°F/60°F, the higher temperature is pretty much constrained to the tropics, so we're talking a year+ to degrade in real conditions. It is better than centuries, but not exactly rapid?
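If the half-life framing implies simple first-order decay (an assumption on my part, not stated in the study), the back-of-envelope math looks like this:

```python
import math

# Assumes first-order (exponential) decay, which is what a "half-life" implies.
half_life_days = 468  # the 60 °C figure from the study quoted above

def fraction_remaining(t_days: float, half_life: float = half_life_days) -> float:
    return 0.5 ** (t_days / half_life)

# Time until 95% of the PLA is gone: half_life * log2(1/0.05) ≈ 4.32 half-lives.
t_95 = half_life_days * math.log2(1 / 0.05)
print(f"95% degraded after ~{t_95:.0f} days (~{t_95 / 365:.1f} years)")
# -> ~2023 days (~5.5 years), and that's at the warmer temperature
```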
Yeah, I imagine it's considerably slower at ambient ocean temperature. Don't throw your PLA bags in the ocean or a river. Here's a different paper:
> For example, PLA is not biodegradable in freshwater and seawater at low temperatures [32,36–39]. There are two primary reasons for this: (i) The hydrophobic nature of PLA, which does not easily absorb water [40–42]. In aqueous environments, the lack of hydrophilicity diminishes the hydrolysis process, which is crucial for the initial breakdown of PLA into smaller, more degradable fragments. (ii) Resistance to enzymatic attack; the enzymes that degrade PLA are not prevalent or active under typical freshwater and seawater conditions [39,43,44]. The microbial communities in these environments may not produce the necessary enzymes in sufficient quantities or at the required activity levels to effectively breakdown PLA. Additionally, the relatively stable and crystalline domains of PLA can further resist enzymatic degradation.
Also:
> It should be emphasized that neat PLA cannot be classified as a completely biodegradable polymer, as it generates microplastics (MPs) during biodegradation.
My father got me a second-hand computer with an Am386DX-40 somewhere around 1997, IIRC. An upgrade from an older 286.
It was two generations old at that time but still a lot of fun: it ran plenty of games (incl. DOOM, of course), handled programming (largely Turbo Pascal 7), and managed some word processing under Windows 3.11.
I didn't bother with Win95, though.
I used it up until 1999, when I finally got a then-modern computer with Windows 98. But in some ways MS-DOS felt more capable - I really knew what each file was for, what the computer was doing, etc. The entire machine was fully comprehensible. You really don't get that with Windows unless you're Russinovich or something.
You can train a model with GPT-2 level of capability for $20-$100.
But, guess what, that's exactly what thousands of AI researchers have been doing for the past 5+ years. They've been training smallish models. And while these smallish models might be good for classification and whatnot, people strongly prefer big-ass frontier models for code generation.
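For context, the $20-$100 figure pencils out with the standard 6*N*D FLOPs rule of thumb. All numbers below are my own assumptions (GPT-2-small scale, rented A100 pricing), not anyone's measured bill:

```python
# Back-of-envelope training cost for a GPT-2-small-class model.
# All numbers are assumptions: 124M params, ~10B tokens, ~40% utilization
# of an A100's ~312 TFLOP/s bf16 peak, ~$2 per GPU-hour on a rental cloud.
params = 124e6
tokens = 10e9
train_flops = 6 * params * tokens        # standard 6*N*D estimate
eff_flops_per_s = 0.4 * 312e12           # effective per-GPU throughput
gpu_hours = train_flops / eff_flops_per_s / 3600
print(f"~{gpu_hours:.0f} GPU-hours, ~${2 * gpu_hours:.0f} at $2/GPU-hour")
# -> roughly 17 GPU-hours, ~$33
```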
The input attribution part is interesting, though. I do wonder to what extent that is just assigning some sort of SHAP values to the input tokens, in which case it should be pretty portable to any kind of model.
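For what it's worth, the crudest model-agnostic version of that idea is leave-one-out occlusion (a cheap cousin of SHAP, which averages over all input subsets rather than single deletions). A sketch, with `score_fn` as a hypothetical stand-in for whatever scalar you're attributing, e.g. the log-prob of the model's answer:

```python
# Model-agnostic input attribution via leave-one-out occlusion.
# Works with any black-box model: only needs tokens in, scalar score out.
from typing import Callable, List

def occlusion_attribution(tokens: List[str],
                          score_fn: Callable[[List[str]], float],
                          mask: str = "[MASK]") -> List[float]:
    base = score_fn(tokens)
    attributions = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + [mask] + tokens[i + 1:]
        # How much the score drops when this token is masked out.
        attributions.append(base - score_fn(occluded))
    return attributions
```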
I don't see anything concerning. Mechanistic interpretability research indicates that LLM internals are inherently parallel: many features "light up" in parallel, then the strongest ones "win" and contribute to the output.
I'd guess it suggests walking if a feature indicates that the question is so simple it doesn't warrant step-by-step analysis.
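A toy illustration of that parallel-features picture (purely illustrative numpy, not a real model; all dimensions and directions here are made up):

```python
import numpy as np

# Toy version of the claim above: features are directions in activation
# space, all "light up" at once via dot products, and the strongest
# activations dominate the output logits.
rng = np.random.default_rng(0)
d, n_features, n_logits = 64, 8, 4
resid = rng.normal(size=d)                     # residual-stream activation
feature_dirs = rng.normal(size=(n_features, d))
readout = rng.normal(size=(n_features, n_logits))

acts = np.maximum(feature_dirs @ resid, 0.0)   # all features computed in parallel
logits = acts @ readout                        # strongest features dominate
print("feature activations:", np.round(acts, 2))
print("winning output:", int(logits.argmax()))
```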