This is part of the reason deployments to production cloud environments should:
1. Only be allowed via CI/CD
2. Have all infrastructure defined as code
3. Go through a delayed process that includes at least one human-approval step (if not more) in the workflow
(Exactly where that review step is placed depends on your organisation - culture, size, etc.)
And anyone who does need to touch production should do so from an isolated VM with temporary credentials. Developers shouldn't routinely have production access from their terminals. This last part is easy and cheap to set up on AWS; I presume it's also possible on Google Cloud.
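As a toy illustration of point 3 above, the approval gate can be modelled as a tiny state machine. This is a minimal sketch, not any particular CI/CD product's API; the class and method names are my own invention:

```python
from dataclasses import dataclass, field

@dataclass
class Deployment:
    """Toy model of a production deploy gated on human approval."""
    environment: str
    approvals: set = field(default_factory=set)
    required_approvals: int = 1  # "at least one, if not more"

    def approve(self, reviewer: str) -> None:
        self.approvals.add(reviewer)

    def can_ship(self) -> bool:
        # Only production requires the human-approval step.
        if self.environment != "production":
            return True
        return len(self.approvals) >= self.required_approvals

deploy = Deployment(environment="production", required_approvals=2)
deploy.approve("alice")
print(deploy.can_ship())   # → False, still missing one approval
deploy.approve("bob")
print(deploy.can_ship())   # → True
```

In a real pipeline this gate lives in the CI/CD tool itself (e.g. a protected environment with required reviewers), not in application code; the sketch just shows the invariant being enforced.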
Doesn't require Jira but yes, specification-first is the way to get better (albeit still not reliably good) results out of AI tools. Some people may call this "design-first" or "architecture-first". The point is really to think through what is being built before asking AI to write the implementation (i.e. code), and to review the code to make sure it matches the intended design.
Most people run into problems (with or without AI) when they write code without knowing what they're trying to create. Sometimes that's useful and fun and even necessary, to explore a problem space or toy with ideas. But eventually you have to settle on a design and implement it - or just end up with an unmaintainable mess of code (whether it's pure-human or AI-assisted mess doesn't matter lol).
I used to manually curate a whole set of .md files for specs, implementation logs, docs, etc. I operated like this for a year. In the end, I realized that I was rolling my own crappy version of Jira.
One of the key improvements for me in moving to Jira was that it has well-defined patterns for all of these things, and Claude knows all about the various types of Jira tickets and the patterns for using them.
Also, the spec-driven approach is not enough in itself. The specs need sub-items, and linked bug reports and fixes. I need comments on all of these tickets as we go, with implementation decisions, commit SHAs, etc.
When I come back to some particular feature later, giving Claude the appropriate context in a way it knows how to use is super easy, and is a huge leap ahead in consistency.
I know I sound like some caveman talking about Jira here, but having Claude write and read from it really helped me out a lot.
It turns out that dumb ole Jira is an excellent "project memory" storage system for agentic coding tools.
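As a sketch of what "giving Claude the appropriate context" can look like: the helper below flattens a Jira issue (in the shape returned by Jira Cloud's REST v2 endpoint `GET /rest/api/2/issue/{key}`) into a prompt-ready text block. The function name and the choice of fields are mine, not any standard pattern:

```python
# Sketch: flatten a Jira issue, as returned by the REST v2 endpoint
# GET /rest/api/2/issue/{key}, into a context block for an agent prompt.
# The function name and field selection are illustrative choices.

def issue_to_context(issue: dict) -> str:
    fields = issue["fields"]
    lines = [
        f"[{issue['key']}] {fields['summary']}",
        f"Status: {fields['status']['name']}",
        fields.get("description") or "(no description)",
    ]
    # Comments are where the implementation decisions and commit SHAs
    # end up, so include them verbatim.
    for c in fields.get("comment", {}).get("comments", []):
        lines.append(f"- {c['author']['displayName']}: {c['body']}")
    return "\n".join(lines)
```

With an MCP server in the loop the agent fetches the issue itself; the point is that the ticket structure already carries the spec, decisions, and commit trail in a shape the model knows well.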
This will probably make people laugh, but I just had Claude make me one. It’s simpler than Jira, but it’s good enough without the 10,000 things I don’t need.
I wouldn’t use it for work, but it’s good enough to track my projects: it notes what’s in each release, has simple user and service-account key issuance for API access, user roles and access control, project-level configuration for kanban lanes and status mapping, a simple project-level document library with live-preview markdown editing, etc., and Claude can access everything via the API.
I am not personally partial to Jira at all, I just already had a free account, they have a production-ready MCP, and exact Jira usage patterns are very well represented in the training data.
Have you been using Claude Code/whichever tool you use, to read and write from OpenProject directly? I do like self-hosting data like this. I used to self-host Jira back in the day.
Totally agree - not just medical software either. See replies to my other comment threads. Software engineers really don’t like the idea that they might have to show they can perform at a certain standard to be able to work as a software engineer.
Typical arguments that come up:
“that’s gatekeeping” - yes, for good reason!
“Laws already exist” - yeah, and that’s not the same as professional accreditation, standards and codes of practice! Different thing, different purpose. Also the laws are a mishmash and not fit for purpose in most sectors.
We don’t blame companies selling 3D Design software or 3D printers or mortar and cement, or graph paper and pencils. When people abuse those tools and build huts or houses or bridges that fall down, we usually blame the user for not having appropriate professional qualifications, accreditation, and experience. (Very occasionally we blame bugs in simulation software tools).
AI is a tool. It’s not intelligent, and it works at a much bigger scale than bricks and mortar, but it’s still just a tool. There’s lots we can blame AI companies for, but abuse of the tool isn’t a clear-cut situation. We should blame them for misleading marketing. But we should also blame users (who are often highly intelligent - eg doctors) for using it outside their ability. Much like doctors are fed up with patients using AI to try to act like doctors, software engineers are now finding out what it’s like when clients try to use AI to act like software engineers.
I largely agree, but if a company sold cement explicitly claiming that they will replace every job in the entire construction industry, that the cement is able to plan, verify, and build on its own, without supervision, and that any layperson can now create PhD level bridges with that cement without any input from or verification by professionals, some liability would definitely fall on the company selling that cement under these pretenses.
Software engineering is looking more and more like it needs a professional body in each country, and accreditation and standards. Ie it needs to grow up and become like every other strand of engineering.
Gone should be the days of “I taught myself so now I can [design software in a professional setting / design a bridge in a professional setting].” I’m not advocating gatekeeping - if you want to build a small bridge at the end of your garden for personal use, go for it. If you want to build a bridge in your local town over a river, you’re gonna need professional accreditation. Same should be true for software engineering now.
Professional bodies act as nothing more than gatekeepers and rent-seekers for things of this nature. Anyone can write software, but not everyone writes security-minded software.
We already have laws in place, and certifications that help someone understand if a given organization adheres to given standards. We can argue over their validity, efficacy, or value.
The infrastructure, laws, and framework exist for this. More regulation and bureaucracy doesn't help when the current state isn't enforced.
There’s a reason why many professions have professional bodies and consolidated standards - from medicine to accountancy, actuarial work, civil engineering, aerospace, electronic and electrical engineering, law, surveying, and so many more.
In most of those professions, it is a crime or a civil violation to offer services without the proper qualifications, experience and accreditation from one of the appropriate professional bodies.
We DO NOT have this in software engineering. At all. Anyone can teach themselves a bit of coding and start using it in their professional life.
Analogous to law, you can draft a contract by yourself, but if it goes wrong you have a major headache. You cannot, however, offer services as a solicitor without proper qualifications and accreditation (at least in the UK). Yet in software engineering, not only can we teach ourselves and then write small bits of software for ourselves, we can then offer professional services with no further barriers or steps.
The mishmash of laws we have around data and privacy are not professional standards, nor are they accreditation. We don’t have the framework or laws around this. And I am not aware of the USA (federal level) or Europe (or member states) or China or Russia or India or etc having this.
For example, the BCS in the UK is so weak that although it exists, exceedingly few professional software engineers are even registered with them. They have no teeth. There’s no laws covering any of this stuff. Just good-ol’ GDPR and some sector-specific laws here and there trying to keep people mildly safe.
> There’s a reason why many professions have professional bodies and consolidated standards - from medicine to accountancy, actuarial work, civil engineering, aerospace, electronic and electrical engineering, law, surveying, and so many more.
Professional bodies = gatekeeping. The existence of the body means that others will be barred from entering the thing it surrounds.
It means financial barriers & "X years of experience required" that actual programmers rightfully decry.
Caveat: When it comes to anything that will affect physical reality, & therefore the physical safety of others, the standards & accreditations then become necessary.
NOTE ON CAVEAT: Whilst *most* software will fall under this caveat, NOT ALL WILL. (See single-player offline video games)
To create a blanket judgement for this domain is to invite the death of the hobbyist. And you, EdNutting, may get your wish: Google is locking down Android sideloading, using desires for this kind of safety as a scapegoat for further control.
The ability to build your own tools & apps is one of the rightfully-lauded reasons why people should be able to learn about building software, WITHOUT being mandated to go to a physical building to learn.
To wall off the ability for people to learn how computers work is a major part of the modern computer illiteracy that people cry & complain about; yet they seem to love taking the exact actions that lead to the death of computer competency.
Professional bodies are a necessary form of gatekeeping for practicing the craft of software engineering professionally.
You are then bringing in a whole host of other issues that are related in nature but not in practice:
* Locking down of Android ecosystem
* Openness of education
* Remote teaching
* Remote or online examination
etc.
Professional bodies don't wall off the ability to learn nor to tinker at home, nor even to prototype or experiment (depending on scale and industry).
You can't conflate all these issues into one thing and say "we don't want this". It's a disingenuous way to argue the matter.
You don't want some gatekeeping on who will be doing surgery on you? You obviously do, and medical malpractice liability is a good thing when there is a problem.
Why don't you want the software engineer building your pacemaker or your medical CRM (or any other job where your immediate security is engaged) to have the same kind of verification and consequences for their actions?
It's mostly a problem of required regulations; so no, we don't want mandatory gatekeeping of surgeons, as this is, for example, leading to doctor shortages
It's fine to set up voluntary standards and choose surgeons you think live up to those
So we want to enable more people to be able to create, for example, pacemakers, because of things like Linus's law: "Given enough eyeballs, all bugs are shallow". If we exclude "non-professionals" from the process of creating "professional" products, we tend to have less participation in the process of innovation and therefore get less innovation
But there is already mandatory gatekeeping of surgeons? They went to medical school for so many years, and they are liable to malpractice if they don't do their job correctly.
Engineering is the same. They sign building plans with their names and may be liable for damages caused by gross negligence.
Why shouldn't any self taught "software engineer" be liable for damages they caused due to negligence?
If we had to sign off builds of critical components (like a pacemaker to stay with the analogy), there would be way more pushback against malpractice in the development process.
Of course not all software projects require that level of rigor, but for medical stuff and I'm sure a lot of other fields, it should be mandatory to have at least one qualified engineer that is ultimately responsible.
1. 99.999999% of software is not equivalent to "doing surgery", so it doesn't need gatekeeping. I work on the free, open-source PDF reader SumatraPDF. What kind of authorization should I get, and from whom, to ship this software to people?
2. pacemakers and other medical devices have to get approval from the government. So that's covered.
Medical CRM software is covered by medical privacy laws, which do what you say you want (criminalize "bad" software) but in reality are a giant set of rules, many idiotic, that make health care more expensive for no benefit at all.
Adulterated food products, shoddy construction that burns like paper or crumples in an earthquake, snake oil medicine, etc. are well attested in underdeveloped nations and in history at scales far above what we see in societies with the kinds of professional bodies we’re talking about.
That said, the reality is that this safety comes at a cost, both monetary and in terms of “gatekeeping.” And many people would be fine (on paper) increasing risk 0.05% in exchange for 20% cut in costs or allowing disruption of established entities. But those 0.05% degradations add up quickly and unexpectedly.
Equating gatekeeping of professional bodies with grifting suggests you have no experience of why we have professional bodies in medicine or accountancy or civil engineering (to give just a few examples).
I agree with that and stand by these words. If people want to call it gatekeeping, so be it. Programming, software engineering if you will, is a serious discipline, and this craze needs to stop. Software building should be regulated and properly accredited as any serious activity.
As the sibling pointed out, there are already plenty of laws about, for example, handling of personally identifiable data. Somehow there is a lack of awareness, perhaps what is needed is a couple of high-profile convictions (which can't be too far off).
One of the key functions of a professional body is to ensure all members are aware of existing and new laws, standards and codes of practice. And to ensure different grades of engineer are aware of different levels of the standards. And that sector-specific laws and standards are accredited accordingly.
High profile convictions are not a good way of dealing with this. Not in the short or long term. Sure they have an impact, and laws should be enforced, but that’s not a substitute for managing the industry properly.
Nothing would be more effective at killing open-source and commercial software businesses than requiring everyone who writes and ships software to users, directly or indirectly (e.g. via an open-source library), to have a License To Program from a Software Licensing Organization.
> aware of existing and new laws, standards and codes of practice
Yeah, because software business is not at all ruled by fads.
1997: you have to follow Extreme Programming (XP) or you don't get your license
2000: you now have to use XML for everything or you don't get your license
2002: you now have to follow Agile or you don't get your license
2025: you now have to write everything in Rust or you don't get your license
A software engineering licensing body would require licensed individuals to understand things about security and accessibility, which would be a huge improvement. If you are responsible for a trivial security vulnerability you and the company should actually be liable for it.
Sysadmins/other adjacent roles should likely have the same requirements. An unmaintained/unsecured server can create a huge liability.
I think the problem is that the person described had no idea what they were doing even in their own professional capacity. They needed to know about patient data management, but they didn't.
The way I see it, if they didn't even realize that they are doing something they shouldn't, they wouldn't have even known they need accreditation, even if that was required. Unless we restricted access to gazillions of tools without it of course.
I think it'll work itself out over time as what AI is/isn't and what data privacy means is discussed more. I'd leave accreditation entirely out of it, because we cannot even agree on what are the actual best practices or if they matter.
There are already laws and standards in almost every country. In this particular example, the people completely ignored all the privacy and data protection laws.
>> Software engineering is looking more and more like it needs a professional body in each country, and accreditation and standards.
Doesn't help much: accounting requires accreditation and standards, but that doesn't prevent competition at the level of some 100 accountants per job. The only way you prevent that is by limiting numbers, like lawyers do; in that case connections and nepotism matter, and you basically get a hereditary aristocratic caste.
Software exists from the vendor, but it’s not open source and/or not part of Linux mainline.
Hence the effort to develop an open source (and mainlined) alternative.
Whether this is a good use of effort and/or whether you believe the vendor should be doing the Linux development or not, and/or whether they should open-source their proprietary drivers, will depend on your personal views.
So why do they need to use helicopters and a risky airlift to return the astronauts to the main vessel? Why not just use the speedboats to take them back? Seems really odd and I can’t find any reasonable explanation.
>Why not just use the speedboats to take them back?
They actually covered this in the broadcast: Helicopters are faster to get the astronauts to medical, smoother in rough seas, and there's less risk of being swamped by a rogue wave. Plus, since the astronauts might have fatigue/muscle atrophy/whatever, it complicates potential boat transfers.
The public information sheet implies that in poor weather/rough seas they would do crew recovery in the well deck, sort of like how Dragon works. [1]
From the broadcast, they made it sound like a big factor is the 2 hour program requirement to get the crew out of the capsule. Maybe they can't reliably hit that mark with a well deck recovery?
The other reason is that the capsule can splashdown far away from the ship. In this case it was close (3km or so). It can possibly fall much farther away. In which case boats would be much slower. Add in the possibility of rough seas & bad weather the helos make sense. And just to keep things simple I think they just use them no matter what. Prevent errors. Also gives a chance to rehearse and debug the full recovery process in case it’s actually really needed the next time.
Helicopter -> large boat is much easier, and much faster, than small boat -> large boat. And it's not riskier. I know the inherent risk in flight is greater, but it's also much more managed, so the actual risk is less.
All those countries are essentially American vassals. No shade to them, just stating the reality, and not really sure why we need to keep pretending. There's no shame in that. It's often the smartest move to join forces with the big guy on the block!
I've been to many of them and, unlike most Americans, when I say I've traveled the world, that also includes countries that are not in the American sphere of influence. The difference in how that plays out is obvious. I would recommend you travel more. Ideally to a country where, if anything happens, Uncle Sam's pressure won't do anything.
In countries like China, Russia, or even India, you won't find as many American products. The influence of Hollywood is much less. American styles of doing things are not necessarily the ones chosen for civic institutions. American agencies don't work as closely with their scientific enterprises as the American allies. On the other hand, they have strong armies that are not beholden to what America dictates, as evidenced by how often they end up in conflict.
As an example, the world sanctioned Russia and... nothing happened... because Russia is a real country able to build its own things. It has industrial capacity, mining capacity, and the organization to do that independently of what others think. It also has an army willing to defend it.
The countries you listed do not have these things. Their 'army' to defend the nation is a vague promise that they'll think about while they ask America to carry out their interests. American magnanimity usually means this is a safe bet.
They’re not mission-critical equipment. If they fail, nobody dies.
They’re not radiation hardened, so given enough time, they’d be expected to fail. Rebooting them might clear the issue or it might not (soft vs hard faults).
It's also impossible to predict when a failure will happen, but NASA, ESA, and others have data somewhere that makes them believe the risk is high enough that mission-critical systems need this level of redundancy.
>>They’re not mission-critical equipment. If they fail, nobody dies.
Yes, for sure, but that's not my question - it's not a "why is this allowed" but "why isn't this causing more visible problems with the iphones themselves".
Like, do they need constant rebooting? Does this cause any noticeable problems with their operation? Realistically, when would you expect a consumer-grade phone to fail in these conditions?
Random bit flips due to radiation are infrequent - the stat is something like one bit flip per megabyte per 40,000 data-centre RAM modules per year - i.e. extremely uncommon, but common enough to matter at scale.
Space is a harsher environment, but they’re only up there for like a week. So, if there were an incident, it would be more likely to kill the devices, but it’s not very likely to happen during that short period of time (while still being more likely than on Earth’s surface).
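To make "more likely than on Earth, but a short exposure" concrete, here's a back-of-envelope estimate treating upsets as a Poisson process. The rates in the example are placeholder orders of magnitude, not measured values:

```python
import math

def p_at_least_one_flip(rate_per_bit_hour: float, bits: int, hours: float) -> float:
    """Probability of >= 1 upset, modelling upsets as a Poisson process."""
    expected = rate_per_bit_hour * bits * hours
    return 1 - math.exp(-expected)

# Placeholder per-bit-hour upset rates for ground level vs. low Earth
# orbit. These are illustrative guesses, chosen only to show the shape
# of the calculation, not real measurements.
bits = 8 * 2**33          # 8 GB of RAM
week = 7 * 24             # mission duration in hours
ground = p_at_least_one_flip(1e-17, bits, week)
orbit = p_at_least_one_flip(1e-15, bits, week)
print(f"ground: {ground:.4f}, orbit: {orbit:.4f}")
```

Even a rate a hundred times higher than on the ground can still leave the chance of any flip at all fairly small over a one-week mission, which is the intuition in the comment above.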
That said, part of the point of them taking these devices up is to find out how well they perform in practice. We just don’t really know how these consumer devices perform in space.
It will be interesting to see the results when they’re published!
A lot of "space-rated" components come from consumer space, with certification that it can work in space.
IIRC the helicopter on Mars uses the same Snapdragon CPU as the one in your phone.
Also, a bit flip can happen without you knowing. A flip in free RAM, or in a temp file that is no longer needed, won't manifest as any error; but then your system is not really deterministic anymore, since now you rely on chance.