> The real question is what existing language is perfect for LLMs?
I think verbosity in the language is even more important for LLMs than it is for humans. We can see a line like 'if x > y * 1.1 then ...' and relate it to the 10% overbooking that our company uses as a business metric. But for the LLM it would be way easier if it were 'if x > base * overbook_limit then ...'.
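To make the contrast concrete, here is a tiny sketch of the same check written both ways (the names and the 10% margin are just the made-up example from above):

    OVERBOOK_LIMIT = 1.10  # the company allows 10% overbooking

    # Opaque version: the reader (human or LLM) has to guess what 1.1 means.
    def opaque(x, y):
        return x > y * 1.1

    # Descriptive version: the business rule is carried by the names themselves.
    def over_capacity(accepted_bookings, base_capacity):
        return accepted_bookings > base_capacity * OVERBOOK_LIMIT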
For me, it doesn't make too much sense to focus on the token limit as a hard constraint. I know that current SOTA LLMs still have pretty small context windows, and for that reason it seems reasonable to try to find a solution that optimizes the amount of information we can fit into our context.
Besides that, we have the problem of 'context priming'. We rarely create abstract software; what we generally create is a piece of software that interacts with the real world, sometimes directly through a set of APIs and sometimes through a human who reads data from one system and uses it as input in another one. So, by using real-world terminology we improve the odds that the LLM does the right thing when we ask for a new feature.
And lastly, there is the advantage of having source code that can be audited when we need it.
> As philosopher Peter Hershock observes, we don’t merely use technologies; we participate in them. With tools, we retain agency—we can choose when and how to use them. With technologies, the choice is subtler: they remake the conditions of choice itself. A pen extends communication without redefining it; social media transformed what we mean by privacy, friendship, even truth.
That doesn't feel right. I thought several groups were against the popularization of writing throughout history. Wasn't Socrates against writing because it would degrade your memory? Wasn't the church against the printing press because it allowed people to read in silence?
I'm not that well read on Hershock but I don't think this is a very good application of his tool-vs-tech framework. His view is that tools are localized and specific to a purpose, where technologies are social & institutional. So writing down a shopping list for yourself, the pen is a tool; using it to write a letter to a friend, the pen is one part of the letter-writing technology along with the infrastructure to deliver the letter, the cultural expectation that this is a thing you can even do, widespread literacy, etc.
Again I think this is a pretty narrow theory that Hershock gets some good mileage out of for what he's looking at but isn't a great fit for understanding this issue. The extremely naive "tools are technologies we have already accepted the changes from" has about as much explanatory power here. But also again I'm not a philosopher or a big Hershock proponent so maybe I've misread him.
That is perfectly on topic, and you are correctly identifying a flaw in the argument.
Technology is neutral, it's always been neutral, it will be neutral. I quote Bertrand Russell on this almost every day:
“As long as war exists all new technology will be utilized for war”
You can abstract this away from “war” into anything that’s undesirable in society.
What people are dealing with now is the newest transformational technology, and they can watch how using it inside the current structural and economic regime of the world accelerates the destructive tendencies already embedded in the structures and economic system we built.
I’m simply waiting for people to finally realize that, instead of blaming it on “AI” just like they’ve always blamed it on social media, TV, radio, electricity etc…
It's like literally the oldest trope with respect to technology and humanity: some people will always blame the technology when in fact it's not the technology... it's the society that's the problem.
Society needs to look inward at how it victimizes itself through structural corrosion, not look for some outside person who is victimizing it.
> Technology is neutral, it's always been neutral, it will be neutral
I agree with a lot of what you say here but not this. People choose what to make easy and what to make more difficult with technology all the time. This does not make something neutral. Obviously something as simple as a hammer is more neutral but this doesn't extend to software systems.
> The change was first spotted by users on Reddit and confirmed in an updated Netflix support page (via Android Authority), which now states that the streaming service no longer supports casting from mobile devices to most TVs and TV-streaming devices. Users are instead directed to use the remote that came with their TV hardware and use its native Netflix app.
My guess is that adblocking became too easy on smartphones, so by forcing people to use the app on the TV it becomes harder to bypass the ads.
That's pure speculation, as I don't have a Netflix subscription. But I've used this method with the HBO app and it works 90% of the time, so I'm assuming Netflix has the same issue.
> My guess is that adblock became too easy on smartphones
Not within native apps. Your only option is essentially DNS/hosts-based blocking on both platforms; however, this can also be done on the router. On Android there is ReVanced, I guess. But these are almost as technical as a Pi-hole. What is the percentage of people who know about DNS-based adblocking but not Pi-hole?
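For anyone unfamiliar, DNS/hosts-based blocking is conceptually just this (the domains below are made-up placeholders; a Pi-hole applies the same idea at the network's resolver instead of per device):

    import socket

    # Hypothetical blocklist entries, not a real list.
    BLOCKLIST = {"ads.example.com", "tracker.example.net"}

    def resolve(domain: str) -> str:
        """Return a sinkhole address for blocked domains, otherwise do a normal lookup."""
        if domain in BLOCKLIST:
            return "0.0.0.0"  # the ad request goes nowhere
        return socket.gethostbyname(domain)

    print(resolve("ads.example.com"))  # -> 0.0.0.0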
Edit: And DNS adblocking can be done on Android TV.
Sure, but I've never had a 'standard router' with support for DNS blocking. I know you can do this with something like pfSense, but that's not that common.
You also have the option to put a Pi-hole in your network. It is pretty easy if you have some technical knowledge, but I would say it is generally out of reach for the general population (non-tech folks).
But on Android you just open the settings, search for 'private DNS' and paste a URL. This is way easier to do for someone with no technical background. Even ChatGPT should be able to guide you through these steps correctly.
Sounds probable to me... This is a great example of why I am by default anti-app unless there's a demonstrable benefit to the user (e.g. Offline mode or something). If the web version of Netflix goes away then I will never access it again. I will also never buy a "smart" TV. I leave the ball in Netflix's court.
> developers have gone away from Dedicated servers (which are actually cheaper, go figure)
It depends on how you calculate your cost. If you only include the physical infrastructure, a dedicated server is cheaper. But with a dedicated server you lose a lot of flexibility. Need more resources? Just scale up your EC2 instance; with a dedicated server there is a lot more work involved.
Do you want a 'production-ready' database? With AWS you can just click a few buttons and have an RDS instance ready to use. To roll out your own PG installation you need someone with a lot of knowledge (how to configure replication? backups? updates? ...).
So if you include salaries in the calculation, the result changes a lot. And even if you already have experts on your payroll, putting them to work deploying a PG instance means you won't be able to use them to build other things that may generate more value for your business than the premium you pay to AWS.
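Just to illustrate how little work the managed path is, this is roughly what the 'few clicks' amount to via boto3 (the identifier, size and credentials are placeholders, not recommendations); replication, backups and failover are parameters here, while with your own PG install each of them is a project:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="app-db",       # hypothetical name
        Engine="postgres",
        DBInstanceClass="db.t3.micro",       # placeholder size
        AllocatedStorage=20,                 # GiB
        MasterUsername="appuser",
        MasterUserPassword="change-me",      # keep real credentials in a secrets manager
        MultiAZ=True,                        # managed standby for failover
        BackupRetentionPeriod=7,             # automated backups, in days
    )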
> You have to carefully review and audit every line that comes out of an LLM. You have to spend a lot of time forcing LLM's to prove that the code it wrote is correct. You should be nit-picking everything.
I'm not sure this statement is true most of the time. This kind of reasoning reminds me of the discussion around 'code correctness'. In my opinion there are very few instances where correctness is really important. Most of the time you just need something that works well enough.
Imagine you have a continuous numeric scale that goes from 'never works' to '100% formal proofs' to indicate the correctness of every piece of software. Pushing your code toward the '100% formal proofs' end takes a lot of resources that could be deployed elsewhere.
At least for us, every bug that makes it into a release that gets installed on a client computer costs us 100x - 1000x as much as a bug that gets caught earlier.
Sometimes getting the new capability around that bug to market faster is worth the tradeoff, because the revenue or market position from the capability with that bug is way more important to the business than the 1000x cost of the fix after distribution.
As long as you have some mechanism to catch the issues before they hit customers. Too many software companies are OK with shoveling crap onto customers because it's easy to fix it in the field. Yes, it's easy to fix in the field, after you've inconvenienced and wasted the time of thousands of customers.
I think the answer lies in the surrounding ecosystem.
If you have a company it's easier to scale your team if you use AWS (or any other established ecosystem). It's way easier to hire 10 engineers that are competent with AWS tools than it is to hire 10 engineers that are competent with the IBM tools.
And from the individual's perspective it also makes sense to bet on larger platforms. If you want to increase your odds of getting a new job, learning the AWS tools gives you a better ROI than learning the IBM tools.
I find this recent push for discussions around the development of new datacenters really odd.
There is a plan to construct a new high-capacity datacenter [edit: near my city], and a lot of the discussion in the media is carried out in an emotional tone around water and electricity usage.
The media generally frames it as if installing a new datacenter would put the neighbors at risk of not having water or electricity. I'm not arguing that a datacenter doesn't bring any problems; everything has pros and cons.
Both sides seem to be using bad-faith/misleading arguments, and I think that's really bad because we end up with solutions and agreements that don't improve the lives of the people affected by these new developments.
One went in 1/4 mile from my home a couple years ago. I ignored the notices of development because I thought it was far enough that it wouldn't affect me, but it blocks the view of the mountains that I used to enjoy, and sometimes I can hear noise from its cooling system (I assume).
I wish I'd known what was coming, and gone to the meetings to oppose it.
Large buildings 1000 feet from you are going to have some impact, but your complaint has little to do with it being a data center specifically. They could have put in a large warehouse and your view would have been blocked just the same; similarly, the noise from the cooling system can be managed well or poorly on any building.
Usually large warehouses appear where there are good highway connections and lots of cheap unskilled labour. A DC might catch a lot of people "in the sticks" by surprise.
I appreciate that my view isn't the only consideration for that kind of decision, but when a new building goes up much larger than anything else in the area, and affects the skyline for thousands of people, I think that should be one of the considerations.
What I'm trying to say is that everything we build has positive and negative effects on our society. And if we want to create a better society we need to have a good understanding of these effects.
I think your article about 'Lack of water, unclean water due to data centers' is a good example of bad-faith arguments. It starts by talking about someone who lost access to their private well after a datacenter was constructed. This article doesn't do it, but I've seen people go from arguments like this (a specific water-related disruption) to 'thousands of residences will lose access to water'.
What strikes me as odd is the fact that datacenters aren't all that special when compared to other infrastructure projects (roads, warehouses, hospitals, power plants, garbage disposal, dams, ...), but the way we are discussing them seems unique. For every other infrastructure project the discussion seems to be 'how do we make sure that X, Y and Z won't be a problem for society?'. But when it comes to datacenters it becomes 'datacenters are bad and we should not build them', which seems like a bad way to approach this issue.
> Omarchy feels like a project created by a Linux newcomer, utterly captivated by all the cool things that Linux can do, but lacking the architectural knowledge to get the basics right
I've used Omarchy over the last few months and I don't think this is a fair assessment of the project. Sure, it definitely feels hacky in some places, but I don't think it's that bad.
Even though I don't fully agree with the article, I think the conclusion is right. If you already know your way around Linux, Omarchy probably won't be a good option for you in the long term.
I fully switched to Linux around 2008 and never looked back. I went through most of the major distros, from Gentoo to Ubuntu. I'm not an expert, but I have a pretty good understanding of how things work under the hood.
Even with all this knowledge I stumbled upon a bug that I wasn't even sure how to start debugging. On my desktop I have 2 monitors, and when the system wakes up from sleep my secondary monitor starts up faster than my main monitor, which puts them in the wrong order, as if I had swapped them left-to-right.
This is a trivial issue; I'm sure ChatGPT could guide me through it in no time. But it made me realize that if I choose to stick with Omarchy I will need to re-learn a lot of things: I will need to learn about several new tools and configuration schemas. And I don't want to do that right now; it's not a good time investment for me, especially if there are no guarantees these tools will still be relevant in 10 years.
And this is why I'll be switching back to old and boring Fedora.