And yet a billion people in this world are starving... Cutting back on industrial agriculture means participating in a genocide, basically.
Some geeks here are coming up with sci-fi ideas of producing food in vertical farms or underwater. Why don't we start by simply using existing techniques?
This is incorrect. If all the food we produce worldwide were distributed equally, not a single person would starve. However, it isn't distributed anywhere close to equally. In many societies, food is instead wasted in large amounts.
Cutting back on industrial agriculture is sorely needed from an environmental standpoint, as is reducing food waste. Solving starvation requires different solutions, such as improved distribution and political stability.
Distribution is hard, though. Will reducing food production, even in countries where food is wasted, improve distribution at all, or will it just hurt the people at the bottom of the distribution chain more?
Global food production is, and I oversimplify a lot here, more of a distribution problem than a capacity problem. Certain countries cutting back on their surplus has zero impact on the starving regions of the world, because as things stand now those surpluses aren't exported there anyway.
Quite the opposite: exporting said surplus, beyond acute deliveries to mitigate famine, can kill local farming. There is no way small, just-above-subsistence farming can compete with the surplus of industrial farming in, e.g., Europe. Take chicken, for example: in Europe we prefer chicken breasts and legs, with wings a far, far third place. As a result, a lot of the chicken leftovers, legs, wings and so on, didn't have a market in Europe. They got exported to Africa at a purchase price of close to nothing, since the meat sold in Europe already covered costs, overhead and profits. With shipping costing close to nothing per chicken wing, the imported food was way cheaper than locally produced food, driving a bunch of local farmers out of business and into poverty. That reduced local food production, increasing the risk of local famine while increasing dependency on global food markets (not really a good thing either...).
The same principle applies to donated clothes, except now it's local tailors, often women, who are affected.
So no, cutting back on industrial overproduction is by no means taking part in genocide (!) (you couldn't aim lower than that, could you?).
But wouldn't it lower the price of chicken in those places, giving more people access to good calories even while hurting the local farmers? Couldn't this be solved by a country putting up tariffs or banning the import of such chicken? If they didn't do that, what was their reason?
Why didn't they put tariffs in place? Well, how many governments, and people in power, have you ever heard of that worried about the poor and common folk? There are a lot of people profiting from these things, including from the soothed social conscience of us in the developed world. And those interests, combined with a ton of money and close to no oversight, result in a system that just doesn't care what happens to average people.
Agreed, there should be an exemption for part-time work. Why is there a gulf between "unpaid" and "minimum wage" - what if someone wants to just ride the bike for fun and make a couple of pounds as a bonus?
What if someone is obsessed with heavy machinery, wants to try operating it for fun, and maybe make a couple of pounds as a bonus?
Minimum wage laws are there to prevent exactly this scenario. It is deemed better to let inefficient enterprises that cannot pay even minimum wage implode, and to support the unlucky people via government programs, rather than to prop up those inefficient enterprises.
Some companies pay minimum wage because they cannot pay more. Some companies pay minimum wage because they are not allowed to pay less. Once you allow the latter to pay less, they will, in a heartbeat. And policymakers do not want that.
We don't have a general minimum wage in Norway, because we don't need one. We have strong unions that have won us good collective wage agreements ("tariffs") for various industries, which act as a minimum pay for each sector.
But there are some sectors (building, cleaning, etc.) that do have a government-set minimum pay, because employers there would otherwise often exploit foreign workers.
Actually, our foodora bicyclists formed a union a while back, went on strike, and got a tariff deal. That got rid of some of the problems around their gig work.
> Poland and Hungary are shitfests, with almost complete political control of the justice system.
That's not exactly true. Poland is a country where most of the judges are basically in open rebellion against the current government, and the government can't do shit about it.
Effective allocation of resources means firing people that are not necessary for the company.
Also, “people aren’t just workers” might be true in a general sense, but to the company they are exactly that: workers. Their private lives are not the company’s concern, and they shouldn’t be. The thing I hate the most is when my employer tries to take care of my wellbeing outside work. I consider that an invasion of my privacy.
I think the point is: the App Stores for iPhone and iPad are in reality one and the same. Apple tries to artificially split them so it can avoid regulations for at least one of them.
They’re not the same. You can’t install an iPad app on an iPhone. Just because most iPad apps also have an iPhone version doesn’t make them the same.
Or Debian Testing, really. It might not be as polished out of the box, but it's a pretty solid and stable experience. Or really any other well-maintained Linux distribution, just anything but Ubuntu for the desktop. On the server I get it: ufw is convenient, snaps are not a bad thing in that scenario, and LTS releases with extended support are all great if you're not a Red Hat shop.
It is a good idea to install security updates from unstable, since they take extra time to reach testing and the security team only releases updates to unstable.
Like everything on the prehistoric timeline, the dawn of agriculture keeps getting pushed back. I think that's believed to be older than 10k years ago now.
With the caveat that the further back agriculture gets pushed, the lower its intensity.
There are clearly agricultural societies, where >90% of calories come from cultivated plants, and there are clearly hunter-gatherer societies that predate them, where ~0% do. We used to believe in a relatively sharp cutoff between these: once people learned to grow food, they quickly moved to mostly growing their food. This is no longer thought to be true; there was likely a "transitional period" of many thousands of years as people very slowly hunted and gathered less and planted more.
(Why believe in a sharp transition? Because there is a lot of archeological evidence for one, so sharp transitions clearly happened in lots of places. It's just that they don't represent people inventing agriculture, but agriculture spreading to a new area and displacing older lifestyles, through migrations of either people or ideas.)
The first is only hundreds of thousands of years, starting with anatomically modern humans, and doesn't count all of our precursors who lived for millions of years before that.
Though that may not change much -- depending on your estimates, Neanderthals probably spanned 3-30 billion person-years, as little as 0.1% of modern humans. All human precursors (5-10My worth) might be margin of error on the modern humans' totals.
On the other end of things, if we plateau at around 10 billion humans, it will only take about 75 years to accumulate the next third (well, quarter) of human existence.
> They did murder an American journalist for criticising their rules in his writing
In your quest to bust the myth that they killed an American journalist you are missing the point. The point is that they did murder a journalist. Here I am going to extend this even further: they murdered another human being.
This reminds me of a great exchange in a classic movie: https://www.quotes.net/mquote/9306 The exercise to map this to our discussion is left to the reader.
This is what it means to be a sovereign state: they _can_ do this. And if the person being murdered is a citizen of the USA and the gov't doesn't respond with something, then they are complicit.
However, if the subject is not a citizen of the USA, then the USA does not have the legal right to respond (other than to talk trash about it).
I'm not saying the murder was just; it isn't. I'm saying there are no mental gymnastics here, and there's little the US can respond with.
Definitely if they weren't running very complex services or their business tolerated an occasional maintenance window.
The particular way Kubernetes can bite you is that it makes it much easier to start with far more complex setups (not necessarily much harder than starting with a simple setup!), but then you have to maintain and operate those complex setups forever, even if you don't need them all that much.
If you're growing your team and your services, having a much bigger, more complicated toolbox for people to reach into on day 1 gives you way more chance of building expensive painful tech debt.
I think it may appear so because Kubernetes promotes good practices. Do logging, do metrics, do traces. That list quickly grows and while these are good practices, there's a real cost to implement them. But I wouldn't agree that Kubernetes means building tech debt - on the contrary, if you see the tech debt, k8s makes it easier to get rid of it, but that of course takes time and if you don't do it regularly that tech debt is only gonna grow.
I just rarely see people tackling greenfield problems with the discipline to choose to do Kubernetes without also choosing to do "distributed" in a broader, complexity-multiplying way.
If not for Kubernetes (and particularly Kube + cloud offerings) I really doubt they'd do all the setup necessary to get a bunch of distributed systems/services running with such horizontal sprawl.
I'm going to diverge from sibling comments: it depends.
As the article points out, k8s may really simplify deploys for devs while giving them autonomy. But it isn't always worth it.
Yes, until you've scaled enough that it isn't. If you're deploying a dev or staging server, or even prod for your first few thousand users, then you can get by with a handful of servers and stuff. But a lot of stuff that works well on one or three servers starts working less well on a dozen, and it's around that point that the up-front difficulty of k8s starts to pay off with lower long-term difficulty.
Whatever crossover point might exist for Kubernetes, it's not at a dozen servers; at the low end it's maybe 50. The fair comparison isn't against "yolo scp my php to /var/www", but against any of the disciplined orchestration/containerization tools other than Kubernetes.
I ran ~40 servers across 3 DCs with about 1/3 of my time going to ops using salt and systemd.
The next company, we ran about 80 in one DC with one dedicated ops/infra person also embedded in the dev team + "smart hands" contracts in the DC. Today that runs in large part on Kubernetes; it's now about 150 servers and takes basically two full ops people disconnected from our dev completely, plus some unspecified but large percentage of a ~10 person "platform team", with a constant trickle of unsatisfying issues around storage, load balancing, operator compatibility, etc. Our day-to-day dev workflow has not gotten notably simpler.
No it didn't. You ended up with each site doing things differently. You'd go somewhere and they would have a magical program with a cute name written by a founder that distributed traffic, scheduled jobs and did autoscaling. It would have weird quirks and nobody understood it.
Or you wouldn't have it at all. You'd have a nice simple infra and no autoscaling and deploys would be hard and involve manually copying files.
Right up until you needed to do one of the very many things k8s implements.
For example, in multiple previous employers, we had cronjobs: you just set up a cronjob on the server, I mean, really, how hard is that to do?
And that server was a single point of failure: we can't just spin up a second server running crond, obviously, as then the job runs twice. Something would need to provide some sort of locking, then the job would need to take advantage of that, we'd need the job to be idempotent … all of which, except the last, k8s does out of the box. (And it mostly forces your hand on the last.)
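For illustration, here's roughly what that looks like as a k8s CronJob (a minimal sketch; the job name, image, and schedule are all made up). `concurrencyPolicy: Forbid` is the out-of-the-box locking: the controller won't start a new run while the previous one is still going, and it schedules onto any healthy node, so there's no single crond box to lose:

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report        # hypothetical job
    spec:
      schedule: "0 3 * * *"       # same crontab syntax as crond
      concurrencyPolicy: Forbid   # never let two runs overlap
      jobTemplate:
        spec:
          backoffLimit: 2         # retry a failed run up to twice
          template:
            spec:
              restartPolicy: Never
              containers:
              - name: report
                image: registry.example.com/report:latest   # hypothetical image

The retries are also why it "mostly forces your hand" on idempotency: a run that dies halfway will be started again, so the job has to tolerate that.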
Need to reboot for security patches? We just didn't do that, unless it was something like Heartbleed where it was like "okay we have to". k8s permits me to evict workloads while obeying PDB — in previous orgs, "PDBs" (hell, we didn't even have a word to describe the concept) were just tribal knowledge known only by those of us who SRE'd enough stuff to know how each service worked, and what you needed to know to stop/restart it, and then do that times waaay too many VMs. With k8s, a daemonset can just handle things generically, and automatically.
Need to deploy? Pre-k8s, that was just bespoke scripts, e.g., in something like Ansible. If a replica failed to start after deployment, did the script cease deployment? Not the first time it brought everything down, it didn't: it had to grow that feature by learning the hard way. (Although I suppose you can decide that you don't need that readiness check in k8s, but it's at least a hell of a lot easier to get off the ground with.)
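To make that concrete, a sketch of the k8s side (the service name and probe path are invented): the readiness probe is the health check I still have to supply, and the rollout settings below are what make the deployments controller halt instead of marching on past a replica that never comes up:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api                   # hypothetical service
    spec:
      replicas: 3
      selector:
        matchLabels: {app: api}
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0       # never take a replica down before its replacement is ready
          maxSurge: 1             # roll one extra pod at a time
      template:
        metadata:
          labels: {app: api}
        spec:
          containers:
          - name: api
            image: registry.example.com/api:v2   # hypothetical image
            ports:
            - containerPort: 8080
            readinessProbe:       # the health check you supply
              httpGet: {path: /healthz, port: 8080}
              periodSeconds: 5

If the new pods never pass the probe, the rollout just stalls with the old replicas still serving, which is exactly the "cease deployment" behavior the Ansible script had to learn the hard way.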
Need a new VM? What are the chances that the current one actually matches the Ansible, and wasn't snowflaked? (All it takes is one dev, and one point in time, doing one custom command!)
The list of operational things that k8s supports that are common amongst "I need to serve this, in production" things goes on.
The worst part of k8s thus far has been Azure's half-aaS'd version of it. I've been pretty satisfied with GKE, but I've only recently gotten to know it, and I've not pushed it quite as hard as AKS yet. So we'll see.
I've never heard the term "resource budget" used to describe this concept before. Got a link?
That'd be an odd set of words to describe it. To be clear, I'm not talking about budgeting RAM or CPU, or trying to determine do I have enough of those things. A PodDisruptionBudget describes the manner in which one is permitted to disrupt a workload: i.e., how can I take things offline?
Your bog-simple HTTP REST API service, for example, might have 3 replicas behind a load balancer. As long as any one of those replicas is up, it will continue to serve. That's a "PodDisruptionBudget": here, "at least 1 must be available" (minAvailable: 1, in k8s terms).
A database that, e.g., might be using Raft, would require a majority to be alive in order to serve. That would be a minAvailable of "51%", roughly.
So, some things I can do with the webservice, I cannot do with the DB. PDBs encode that information, and since it is in actual data form, that then lets other things programmatically obey that. (E.g., I can reboot nodes while ensuring I'm not taking anything offline.)
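For the web service above, the whole budget is a few lines (a sketch; the name and labels are hypothetical):

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: api-pdb               # hypothetical name
    spec:
      minAvailable: 1             # at least one replica must stay up
      selector:
        matchLabels: {app: api}   # matches the service's pods

Anything that does voluntary evictions (`kubectl drain` on a node, cluster upgrades, autoscaler scale-downs) consults this before killing a pod, which is what lets the reboot case from upthread be automated generically.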
A PDB is a good example of Kubernetes's complexity escalation. It's a problem that arises when you have dynamic, controller-driven scheduling. If you don't need that you don't need PDBs. Most situations don't need that. And most interesting cases where you want it, default PDBs don't cover it.
> A PDB is a good example of Kubernetes's complexity escalation. It's a problem that arises when you have dynamic, controller-driven scheduling. If you don't need that you don't need PDBs. Most situations don't need that.
No, and that's my point: PDBs exist always. Whether your org has a term for it, or whether you're aware of them is an entirely different matter.
Where I did work comprised of services running on VMs, there was still a (now spiritual) PDB associated with each service: I could not just take out nodes willy-nilly, or I would be the cause of the next production outage.
In practice, I was just intimately familiar with the entire architecture, out of necessity, and so I knew what actions I could and could not take. But it was not unheard of for a less-cautious or less-skilled individual to act before thinking. And it inhibits automation: the automation needed to be aware of the PDB, and honestly we'd probably have just hard-coded the needs on a per-service basis. PDBs, as k8s structures them, solve the problem far more generically.
Sounds like a PDB isn’t a resource budget then. We were using that concept in ESX farms 20 years ago, but it seems PDBs are more what SREs would describe as minimum availability.
Because they're completely different things you're comparing. The functionality that I describe as having to build out as part of Ansible (checking that the deploy succeeded, and not moving on to the next VM if not) is not present in any Helm chart (that's not the right layer; it doesn't make sense there), because it's part of the deployments controller's logic. Every k8s Deployment (whether from a Helm chart or not) benefits from it, and doesn't need to build it out.
> needing to check that the deploy succeeded, and not move on to the next VM if not
It's literally just waiting for a port to open, maybe checking for an HTTP response, or running an arbitrary command and checking its exit status; all the orch tools can do that in some way.
… there's a difference between "can do it" and "is provided."
In the case of either k8s or VMs, I supply the health check. There's no getting around that part, really.
But that's it in the case of k8s. I'm not building out the logic to do the check, or the logic to pause a deployment if a check fails: that is inherent to the deployments controller. That's not the case with Ansible/Salt/etc.¹, and I end up re-inventing portions of the deployments controller every time. (Or, far more likely, it just gets missed/ignored until the first time it causes a real problem.)
¹and that's not what these tools are targeting, so I'm not sure it's really a gap, per se.
Yep. Still doing it today. Very large scale Enterprise systems with complex multi-country/multi-organisational operational rules running 24/7. Millions of lines of code. No Kubernetes. No micro-services. No BS. It’s simple. It works. And it has worked for 30+ years.