This is cool. For on-device models, any plans for models that use MoE in relatively resource-constrained setups (I'm thinking MBP M1, 16GB RAM)? I'm using LM Studio, but all the Gemma models (MLX) seem to crash; surprisingly I managed to get gpt-oss 20b working (slowly) on my MBP.
I find performance in resource-constrained environments interesting.
In particular I'm trying to find decent code models (as an on-device backup), but also TTS and voice-to-text applications.
We're constantly evaluating architectures, trying to assess what will work well in the open ecosystem. It's quite a vibrant space, and I'm glad you have one option that works. For this model in particular we evaluated a couple of options before choosing a dense architecture because of its simplicity and finetunability.
For the other Gemma models, some of the smaller sizes should work on your laptop when quantized. Do Gemma 1B and 4B not work quantized? They should fit your memory constraints. I use Ollama on low-powered devices with 8GB of RAM or less and the models load (quick sketch below).
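Rough memory arithmetic: a 4B-parameter model at 4-bit quantization is about 4B × 0.5 bytes ≈ 2GB of weights plus KV cache, so it should sit comfortably inside 16GB. Here's a minimal sketch of that Ollama path from Python, assuming the ollama pip package, a locally running Ollama daemon, and that gemma3:1b is the tag you want (swap in whichever small quantized tag you actually use):

    # Minimal sketch: run a small quantized Gemma via the Ollama Python
    # client (pip install ollama). Assumes the Ollama daemon is running
    # locally and that the gemma3:1b tag is available in the Ollama
    # library -- substitute whatever small quantized tag you actually have.
    import ollama

    MODEL = "gemma3:1b"  # ~1B params; 4-bit quantized weights are well under 1GB

    ollama.pull(MODEL)  # downloads the quantized weights if not already cached

    response = ollama.chat(
        model=MODEL,
        messages=[{"role": "user", "content": "Write a haiku about unified memory."}],
    )
    print(response["message"]["content"])

If a model loads this way but crashes in LM Studio, that points at the MLX build rather than the model size.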
For TTS, a colleague at Hugging Face made this bedtime story generator running entirely in the browser.
Be forewarned though: this is not a good coding model out of the box. It could likely be trained into an autocompletion LLM, but with a 32k context window and its smaller size it's not going to refactor entire codebases the way Jules/Gemini and other larger models can.
Amazed that social media companies engineering society isn't getting more press. They're all doing it.
I noticed this first on X, during the FarageRiots, where an Asian woman asked how many people felt safe. The volume of violently racist replies was insane. As an Asian man, it made me feel very scared about society. I felt outnumbered. It wasn't reflective of society: as it turned out, there was a mass demonstration of racial unity by the vast majority of Britain. But not on X.
On YouTube I noticed it silently deleting my comments. Nothing violent; literally a comment saying I was concerned about NHS privatisation and takeover by US finance. I noticed the same comment get removed. No notice. No reason. No appeal. Taken down. Invisible. A quick Google shows lots of people experiencing the same thing.
And it got me thinking: wow, imagine shaping public sentiment en masse. Making opinions that weren't convenient to the people owning social media disappear. That creates helplessness. Shapes elections.
The rage bait we see now pulls in attention, shapes conversations and defines the Overton window.
Noticed a post from Theo (T3) an hour or so ago on X that was critical of OpenAI, and the first comment was calling him an OpenAI shill. There certainly seem to be plausible incentives on X to fuel anti-competitor sentiment and amplify useful sentiment.
This article on Meta validates the patterns I've seen. It's deeply concerning. We are in an era where society is micro-shaped by social media owners and their agendas.
This issue needs to be addressed. We need regulation, transparent recommendation algorithms, and clear limits on targeting users.
And then there is the toxic nature of social media's engineered addiction. Sidebar, I know, but it has to be said.
We need much more regulation and we need more decentralised ownership for social media companies to protect democracy.
I wholly agree with your point on X. The comeback of racism is one of the most dangerous social phenomena in today's world. Besides sowing insurmountable amounts of hatred, it also brings along xenophobia, misogyny/misandry and the like, as the forerunning discriminatory practice in our world.
It's pretty bad. I've also been very interested in the non-organic way certain topics get introduced. It's often chains of non-organic posts/replies that seed topics: opinion 1 is proposed, then someone else comes in and makes some obvious fallacy in a counter-argument, then another post responds calling them out in some inflammatory way. This kicks off a cycle of user engagement, either defending or attacking one of the participants. However, the entire initial chain of 3-4 back-and-forths is all bots, subtly guiding topics.
Wow that’s insane - didn’t realise that was happening re non-organic posts.
From a behavioural POV it seems like an obvious play. These companies and their owners have huge gains available via social engineering.
There is very little transparency, accountability or regulation.
The thing that worries me is the unobvious… most people know about Instagram and increased suicide rates. What shocked me was finding out that Instagram did things like waiting for people to remove photos of themselves, identifying insecurity behaviour, and using that to position beauty products to young girls. It seems so, so unethical and predatory. Not to mention the impact on public mental health when applied at scale.
Another crazy stat was something like: average screen time was 4 hrs/day, and average attention spans dropped from ~180s to something like ~90s.
The impact in so many areas is so bad. Blows my mind there is such a lack of regulation.
Thinking about it, AI has the potential, at scale, to social-engineer without the need to bother creating content or making bots.
It was a big deal when Google started doing it 15 years ago on YouTube. They were explicitly changing recommendation weights for Middle Eastern videos. At the time it was considered a moral thing because it was done to prevent ISIS from radicalizing people.
I remember warning people at the time that they'd do it for domestic political videos; it was really frustrating how no one believed me. It's a little more frustrating that, after experiencing it, people continue to use sites with artificial cybernetics.
In theory, CEOs have ultimate responsibility. In practice it's more complex: boards, delegation, company structures, HR, etc. remove a lot of ultimate responsibility from CEOs. The buck stops here, unless the CEO decides otherwise. Carlos Tavares is a good example of this. He got away with more screwups than many senior employees could dream of. Ditto lots of legacy autos. The board/shareholders typically have a lot of sway and delegated accountability/responsibility.
I agree with your point about the distribution of responsibility and accountability. My argument was more about bosses vs engineers, not particularly about CEOs. You can't let LLMs take decisions and blame them later if it backfires. It has to be humans who take those high-impact decisions and are accountable for the results.
Your point about measuring errors is an interesting one. I think CEOs/business leaders are definitely very good at deflecting negative responsibility and aggregating positive outcomes. Not exclusively; I know several senior leaders who are very, very competent. But I think in business it is typically easier to look good than to do good. Across most domains.
I think the ambiguity part is a bit of an illusion - lots of people who make good predictions on complex things, have good, informal, decision making models. But like an llm, a lot of their minds are black boxes, even unto themselves. Therefore hard to replicate.
It’s so interesting; I’ve asked this myself a lot. So firstly, I think this would be an excellent use of AI, but the barriers are:
1. Political - CEOs have significant purchasing power.
2. Obfuscation - engineering is relatively tightly defined, but being a CEO is often more fluid, and a lot of the decision making is wrapped in stuff like ‘gut’ instinct. There’s no docs for a CEO.
3. Cultural - we treat CEOs like art and idolise their value instead of looking at them like a node that aggregates organisational data flows. To me a CEO is a little like a model router - but with more politics.
I think there’s a huge opportunity to replace CEOs, but like in engineering that doesn’t happen in one shot - it happens by devolving responsibilities.
I personally stepped away from the business side of running startups and small companies and into engineering, because to me the business stuff feels like BS, so perhaps I’m biased.
When I ask my CEO mates they’re obviously dogmatically convinced they are irreplaceable.
But I think the devil is in the detail. I’m a relatively junior engineer and was crapping myself about AI taking every entry-level job - until you get into it and realise there’s a lot more nuance. At least near term. Same for CEOs.
I’d love a world where we can focus on engineering outcomes, not the political crap that weighs us down.
My TLDR is I think the main barrier is political, not pure engineering.
But I suppose / hope we can re-engineer the political, with effort.
Simple is good. For me: Apple Notes (my temporary life todo list), Apple Reminders (more persistent todos/reminders, set up as a kanban), and then text files + kanban for coding projects.
Todo lists of any kind in a team context usually fall down, and kanban is the way forward.