Having worked with Rust over the past couple of years, I can say it's hands down a much better fit for LLMs than Python, thanks to its explicitness and type information. That provides a lot of context for the LLM to incrementally grow the codebase.
You still have to watch it, of course. But the experience is very pleasant.
You’re right. What I like doing in those cases is reviewing the tests and the assertions very closely. Frequently that’s even faster than looking at the SUT itself.
I’ve heard this “review very closely” thing many times, and it rarely means reviewing very closely. Maybe 5% of developers ever really do it, and I’m probably overestimating that. When people post AI-generated code here, it’s quite obvious that they haven’t reviewed it properly. There are videos where people record how we’re supposed to use LLMs, and they clearly don’t do this either.
Yeah. This is me. I try, but I always miss something. The sheer volume and occasional stupidity make it difficult. Spot checking only gets you so far. Often, the code is excellent except in one or two truly awful spots where it does something crazy.
I kinda want what you’re describing… Sort of the promise that Apple made with their AI and didn’t deliver.
It would be amazing to have a chatbot with full contextual awareness.
Until the police, or your insurance company, or your ex-wife's attorneys, start debriefing that AI. You don't own the AI and you won't have control over what it tells other people about you.
I feel that we’re reaching a limit to our context switching. Any further process improvements or optimizations will be bottlenecked on humans. And I don’t think AI will help here, since jobs will adjust to account for it and we’ll have to do context switching across even broader and more complex scopes.
I think the limit has been exceeded. That's the primary reason everything sort of sucks now. There is no time to slow down and do things right (or better).
IMO, cyber security, for example, will have to become a government mandate with real penalties for non-compliance (like seat belts in cars were mandated) in order to force organizations to slow down, and make sure systems are built carefully and as correctly as possible to protect data.
This is in conflict with the hurtling pace of garbage in/garbage out AI generated stuff we see today.
Here in the EU cybersecurity is actually being regulated, with heavy fines to come (15 million euros or 2.5% of global turnover!), if it wasn't already. Look up the CRA and the NIS2.
Things may well reach a point where, elsewhere in the world, finding out that some software is for sale in the European Union is itself a marker of quality, and therefore justifies some premium.
These are good developments, but it remains to be seen how much of an impact they will have. Software developers will have to follow a bunch of “best practices”, but there isn’t a requirement that they be good at them. There are no fines for producing insecure software, only fines for not following the rules.
Software providers are also likely to specify narrow “fit for purpose” statements and short(ish) support windows. If costs go up too much, people will be using “inappropriate” and/or EOL stuff because the “right thing” is too expensive.
To be clear, this is a step in the right direction, but it is not a panacea.
Respectfully, I think you have too much faith in the ability and general desire of individuals to protect themselves. Consider how successful scams and security breaches are. Consider, too, the unequal bargaining power between vendors and individual consumers (have you ever tried to negotiate a form contract with a megacorporation?).
We protect people because they have failed. These regulations tend to follow actual injuries; they are rarely promulgated in anticipation of them.
> Consider, too, the unequal bargaining power between vendors and individual consumers (have you ever tried to negotiate a form contract with a megacorporation?).
You don't negotiate the contents of your burger with McDonald's. If you don't like it, you go to Burger King or have a Döner Kebab.
There's plenty of tacit negotiation here.
> We protect people because they have failed. These regulations tend to follow actual injuries; they are rarely promulgated in anticipation of them.
Homeopathic medicine tends to follow actual health problems, too. That doesn't mean it's a good idea.
> You don't negotiate the contents of your burger with McDonald's. If you don't like it, you go to Burger King or have a Döner Kebab.
Not every industry is a competitive one with practically unlimited choices. Natural monopolies or industries with high barriers to entry tend to have the most leverage over their customers. Most people have only a single electricity provider, and there are only two major mobile OS vendors worth speaking of.
> Homeopathic medicine tends to follow actual health problems, too. That doesn't mean it's a good idea.
Some work; some don’t. The key is figuring out which solutions are effective and which aren’t. Nobody is proposing keeping fixes around whose costs aren’t worth the benefits to society.
If you sell the computer with the software preinstalled, it would still fall under the "selling a product" part. So if you actually wanted a loophole, you'd at best be selling the product without any software, and we both know how well that would go over with the masses.
Maybe, but I was taking an immense amount of vitamin C as prescribed by the doc to bootstrap the healing process.
So this reveals two issues to me:
1. In general, side effects of the contrast agent are not communicated properly. If I had known, I might have asked - hey, can you do the analysis without the agent?
2. There’s no recommendation to avoid vitamin C prior to and right after the MRI, heightening the risk.
I generally agree with you, but! Video or audio calls between the EU and the US still have a much higher chance of people talking over each other, and it’s due to lag. If the latency is decreased by 33%, it might be a game changer.
Updating the Bitnami images is probably a bit of a challenge. From looking at them last year, I believe they are built around a Bitnami-specific style/framework. They are confusing at best.
If you're Bitnami, it probably made sense to build the images the way they did, but for everyone else it's just a massive complication.
Personally I don't understand why anyone would have opted to use the Bitnami images for most things. They are really large and complex images, and in most cases you'd probably be better off building your own images instead.
My guess is that there's a very small overlap between the people who want to maintain Docker images and the people who chose to run Bitnami's images.
The Docker images are complex for the sake of the Helm charts, which sometimes need to pass down tons of parameters
These aren't just for your laptop, they're supposed to be able to run in prod
I'm still stuck with 3 bitnami charts that I keep updated by building from source, which includes also building the images, all on our private registry.
That makes some sense. I've only used Bitnami images with Docker Compose or as standalone containers. In those cases you're frequently better off just mounting in a configuration file, but that won't really work in Kubernetes.
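For illustration, the standalone-container version of that might look roughly like this (the image, tag and config path are placeholders -- check the image docs for the real locations):

  # hypothetical example: mount a locally edited config file read-only into the container
  docker run -d \
    -v ./postgresql.conf:/opt/bitnami/postgresql/conf/postgresql.conf:ro \
    docker.io/bitnami/postgresql:16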
I would argue that if you run Kubernetes, then you frequently already have the resources to maintain your own images.
I hadn't used Bitnami images for... probably 5 years at this point, and they always seemed serviceable in a pinch, usually for testing. When I brought this up recently, I was also told (by k8s users) that the Helm stuff is probably what actually has most people up in arms, because it is very common. We're the minority who remember Bitnami as a non-critical choice among many.
> Personally I don't understand why anyone would have opted to use the Bitnami images for most things.
At my previous company, we used it because of the low CVE counts. We needed to report the CVE count for every Docker image we used every month, so most of the images were from Bitnami.
There are many enterprise companies freeloading on Bitnami images, and I’m surprised it took Broadcom this long to make this change.
Wait... this whole time reading this thread, I've been racking my brain for what Bitnami provided that building a Docker image couldn't (I used to use them before Docker came around; I never would have gotten Redmine up and going without them -- the install seemed so foreign), because surely everyone knows how to build one from scratch, right?... right?
Is all the panic because everyone is trying to avoid learning how to actually install the pieces of software (once), and their magic (free) black boxes are going away?
I recommend VS Code remote connections and docker builds via the docker extension to do rapid build-run-redo. Remember to make sure it works from scratch each time. You can automate them with Jenkins... (which came first, the Jenkins or the Jenkins Docker image?)
I also recommend Platform One. (you'll need a smart card)
I also recommend reading the particular software's documentation ;)
That's super silly; it's so easy to make Docker images... especially if you have a fast connection, you can build a proper production-ready image in a few hours (e.g. 30-40 builds).
Not OP, but in general the process goes like this:
- you pick a base image you want to use, like Alpine (small size, good security, sometimes compatibility issues) or Debian or Ubuntu LTS (medium size, okay security, good compatibility) or whatever you please
- if you want a common base image for whatever you're building, you can add some tools on top of it, configuration, CAs or maybe use a specific shell; not a must but can be nice to have and leads to layer reuse
- you build the image like you would any other, upload it wherever you please (be it Docker Hub, another registry, possibly something self-hosted like Sonatype Nexus): docker build -t "my-registry.com/base/ubuntu" -f "ubuntu.Dockerfile" . && docker push "my-registry.com/base/ubuntu"
- then, when you're building something more specific, like a Python or JDK image or whatever, you base it on the common image, like: FROM my-registry.com/base/ubuntu
- the same applies not just for language tooling and runtimes, but also for software like databases and key value stores and so on, albeit you'll need to figure out how to configure them better
- as for any software you want to build, you also base it on your common images then
Example of cleanly installing some packages on Ubuntu LTS (in this case, also doing package upgrades in the base image) when building the base image, without the package caches left over:
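(a minimal sketch -- the Ubuntu tag and the package names here are just placeholders)

  FROM ubuntu:24.04
  # upgrade base packages, install what you need, and drop the apt cache in the same layer
  RUN apt-get update \
      && apt-get upgrade -y \
      && apt-get install -y --no-install-recommends curl ca-certificates \
      && rm -rf /var/lib/apt/lists/*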
In general, you'll want any common base images to be as slim as possible, but on the other hand, unless you're a bank, having some tools for debugging is nice, in case you ever need to connect to the containers directly. In the end, it might look a bit like this:
upstream image --> your own common base image --> your own PostgreSQL image
upstream image --> your own common base image --> your own OpenJDK image --> your own Java application image
In general, building container images like this will lead to bigger file sizes than grabbing an upstream image (e.g. eclipse-temurin:21-jdk-noble), but layer reuse makes this a bit less of an issue (if you have the same server running multiple images), and it can be very nice to know what's in your images and have them built in fairly straightforward ways. Of course you can make it way more advanced if you need to.
In brief, you need to switch the registry from (iirc) docker.io/bitnami to docker.io/bitnamilegacy. Note that as of (iirc) tomorrow, those images will no longer be updated. So the moment there is a high or critical CVE, you'd better have a plan to move to a new image (and likely a new Helm chart) or send Broadcom cash. The old registry will continue to have a "latest" tag, but this should not be used for production.
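For example (the image name and tag here are purely illustrative):

  # old location, no longer receiving updates
  docker pull docker.io/bitnami/postgresql:17
  # legacy mirror of the same image
  docker pull docker.io/bitnamilegacy/postgresql:17

For Helm charts, you'd point whatever image repository values the chart exposes (commonly something like image.repository) at bitnamilegacy instead, assuming the chart makes that overridable.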