So they brewed up a bunch of ugly C macrology that enabled C programmers (or Visual Studio wizards) to define COM interfaces in header and implementation files that just happened to lay out memory in the exact same way as vtables of C++ pure virtual classes.
While C++ programs would use other ugly macros to declare actual honest-to-God C++ classes to implement COM interfaces.
And Visual Basic programmers would ... do whatever it was that Visual Basic programmers did.
We were being a well-behaved MS shop at this point. The new generation of our website was designed with a distributed component architecture called from ASP (Active Server Pages) using DCOM. We implemented it in C++ using ATL (IIRC) to facilitate the COM interfaces. The thing was, even with that help, our actual business logic was buried under all the casting back and forth between those IUnknown values and native C++ types. It was really annoying.
This was around the timeframe of VB 5, I think, and we discovered that we could write the same logic in VB without all the annoyance. Putting aside our "serious developer" C++ elitism, we were actually productive in doing it all in VB instead of C++, and that generation of the system lasted for a bunch of years. We eventually replaced that with an ASP.Net design in C#, which was a whole lot more manageable.
I recall an internal (never released) project at IBM in the late 1980s. It was a tool for creating client-server GUI apps, programming them with the REXX language. You may remember that client-server was all the rage at the time, and REXX was IBM's favorite scripting language. IIRC, the internal name of the project was "Red October", but I can't find any reference to this online.
The tools lacked the visual GUI builder of VB, but really, that's just a detail. The rest of the framework was really quite powerful, and a GUI builder could have been added. But in true IBM fashion, they had no idea how to market something that wasn't mainframe-targeted, and they killed the project. There was a fair amount of acrimony on internal forums about this at the time.
You are correct. The only reference I could find to Red October is a dead link that was never archived by (or has since been removed from) the Wayback Machine Internet Archive: https://www.ibm.com/history/innovation/red-october
Otherwise, the other links point to the game, the movie and the cyberespionage malware attack.
You are correct! I goofed. It's hard to assimilate a human language on the fly. See, that proves I am not an AI :)
It's also worth mentioning Sharpdevelop.net, which also gave you the choice of VB.Net or C#, plus one other language I can't recall.
There were so many options available. So many books were written, sold and then became obsolete. I know because I bought tons of them at the now defunct Fry's Electronics and later donated them to the local public library.
But with AI advancing, focusing on programming and earning a CS degree as a future investment does not hold the same appeal that it once did. Of course, this is just my opinion.
I've seen that, but always assumed it would be a more accessible account that wouldn't scratch the technical-developer itch.
I recently read Jordan Mechner's The Making of Prince of Persia, and while it was interesting, it wasn't what I'd hoped for. It was a biography covering his life during the creation of the game, with very little about the internals and technology of it. That's fine, but for my tastes, the book it turned out to be was less captivating than the one it could have been.
My default is to expect the same from the FoxTales book.
There is this pervasive idea that MMT promotes limitless spending, and I'm not sure where it comes from.
Right. The theory says you can (should?) spend until you hit the "inflation ceiling," then use taxes to drain liquidity.
But what we saw in 2020-2022 was that we hit the ceiling at 100mph. The "tax it away" solution proved to be a political fantasy. No politician is going to hike taxes on the middle class to cool down the price of eggs.
My understanding (I'm not an economist) is that MMT is currently viewed as a "fair-weather theory." It explained why we could spend during a liquidity trap, but offered no viable steering mechanism once the engine overheated.
In my mind, this puts it in the same box as Keynesianism. Both theories are politically convenient because they offer politicians an excuse to pander. But those politicians aren't willing to do what their pet theory would require once the emergent crisis has passed.
> But what we saw in 2020-2022 was that we hit the ceiling at 100mph.
Except that 2020-2022 was not (completely) about fiscal/monetary problems that could be fixed with fiscal/monetary solutions. A good portion of the spike was because of 'outside' factors (supply-chain disruptions, for example).
What would extra taxation do to help that? There are various types of inflation, categorized by 'root cause', and 'too much money' is not the source of all of them.
> But what we saw in 2020-2022 was that we hit the ceiling at 100mph.
The 2020-2022 inflation spike wasn't due to following MMT based spending policies though. Slamming the brakes at 100mph may certainly have bad consequences, but driving at 100mph in low visibility conditions was the problem, not braking before you hit something.
The fact is everyone knew the combination of supply chain disruptions, remote work, and the changes in spending habits would eventually produce inflation, and yet we kept pumping money in. Every economic school would have advised against that course of action. MMT only calls for spending to keep pace with economic growth, not to run the money printers as fast as you can.
Economic theories, like scientific theories, are successful if they correctly predict what will happen if you do X. If the theory of gravity predicts that you will fall to your death if you jump off a cliff, it's not a failure of the theory of gravity that it doesn't tell you how to levitate after you've already jumped.
> Economic theories, like scientific theories, are successful if they correctly predict what will happen if you do X.
In addition, an economic theory can only be useful if the actions it dictates can actually be put into practice; if nobody is going to follow what the theory tells you to do, then it's of little use at all.
This is the big problem with both MMT and Keynesianism: both are great fig leaves for politicians to wear when they want to spend in order to pander. But when the theory tells them that the situation has changed and they need to change their actions accordingly, they ignore the call to raise taxes (MMT) or slash spending (Keynes).
Unless we really do the thing, then pointing at a given macroeconomic theory is just an excuse.
> In addition, an economic theory can only be useful if the actions it dictates can actually be put into practice; if nobody is going to follow what the theory tells you to do, then it's of little use at all.
That is not a requirement for a theory to be useful. The fact that not using it has a cost is proof of its utility.
Remote working itself isn't inflationary, but transitioning from office to remote is. Remote workers can get high-paying jobs in low cost-of-living areas and can generally job-hop with less friction, so salaries rise; spending that would have gone to commuting expenses and daytime childcare becomes disposable income; and people spending more of their time at home invest in larger or nicer homes. More money getting thrown at fewer items causes prices to rise.
I'd MUCH rather consider the completeness of my experience based on what I was able to experience, rather than how long I'd lived for.
Sorry for the tangent, but this is a pet peeve of mine. From my perspective, it seems like our modern quest for safety in all things has the effect of wrapping the whole world, and ourselves, in bubble wrap. The goal seems to be to extend that number as far as possible, without regard to how the life that we experience during that period is diminished by all the safeguards.
It bothers me that we've made it a mantra, telling each other "have a safe trip", or "be safe", and so on. I can't remember anyone saying "have the richest experience you can manage".
As a developer, the fact that my source code passed through a compiler - an automated tool - doesn't give the author of the compiler any claim on my executable code.
As an artist, the fact that I used, e.g., Rebelle to paint a digital painting, or that I used Lightroom (including generative AI to fill, or other ML/AI tools to de-noise and sharpen my image) in editing a photograph, doesn't give EscapeMotion, Adobe, or Topaz, any claims to my product.
Why, then, would there be any chance that use of a tool like Claude - a tool that's super-advanced, to be sure, but at the end of the day operates by way of mathematical algorithms - would confer any claims to Anthropic?
> If a court later found the codebase was predominantly AI-authored and therefore not copyrightable
Is figuring out the appropriate prompts to use in directing Claude qualitatively different than using a (much) higher-level abstraction in coding? That is, there was never any talk, as we climbed the abstraction ladder from machine code to assembly to Fortran or C to 4GLs to Rust etc., that the assembler/compiler/IDE builder would have any ownership claim on the produced executable. In what sense can Anthropic et al. assert that their tool, which just transforms our directives into some lower-level representation, creates ownership of that lower-level representation?
> but the ability for the agent to build it in the first place is based off of stolen IP.
I honestly don't understand why the attitude that underlies this is so prevalent.
When I write code, what I write and how I write it is informed by having read countless source code files over my education and my career. Just as I ingest all that experience to fine-tune how my later code is written, so does the LLM from the code it's seen.
The immediate retort to that is that the LLM is looking at code that wasn't its to read. But I don't think that's a valid objection. Pretty much by definition, everything I've learned from has a copyright on it, and other than my own code on my own time, that copyright is owned by someone else. Much of the code that's built up my understanding has been protected by NDA, or even defense-department classifications: it wasn't mine in any way. But it still informs how I do all my future coding.
By analogy: I'm also an artist, especially since my retirement. My approach to photography was influenced by Ansel Adams, and countless other artists whose works I've seen displayed in museums, or in publications and online. My current approach to painting was inspired by Bob Ross and others, and the teachers who have helped me develop. I've taken pieces of what I've seen in all their work, and all of that comes out in my photos and paintings, to varying degrees.
I've taken ideas from others in code and in art, and produced something (hopefully!) different by combining those bits with my own perspective. I don't think anyone has a claim on my product because of this relationship.
Likewise, I know that many of my successors have learned from my code (heck, I led teams, wrote one book about software development!). And I hope that someday my artwork has developed to the point where there's something in it that's worth someone else's attention to assimilate. I've never for a minute - even decades before the advent of LLMs - hoped or even imagined that my work would remain locked up with me, and that the ideas would follow me to the grave.
As they say, we are all standing on the shoulders of giants. None of us would be able to achieve the tiniest fraction of what we have, without assimilating what has come before us. Through many layers of inheritance it's constantly being incorporated in subsequent works.
In a few decades at best, I'll be dead. It probably won't be very long after that when people even forget my name. But the idea that something I've done - my work in developing software systems, or in my photography and painting - will continue to have ripples through time, inspires me and gives me hope that I'll have some tiny shred of immortality beyond my personal demise.
Humans should have more legal privileges than machines, just as individuals should have more legal privileges than corporations. It's really as simple as that. I don't want to go around making up justifications; that's how the law should be, and if it turns out not to be that, I'm going to be nettled.
I live in the UK, and most US law is based upon English common law; it's not some immutable code handed down from above. It's based upon the assumptions and capabilities of the entities participating in the system at the time the law was codified. It can and should change to make more sense if those assumptions and capabilities shift massively.
I get the individual/corporation distinction, but how is a machine another tier here? It's a tool, it can't have any rights at all. The wielder has rights, and curtailing their rights depending on what tool they're using to exercise them seems strange. Potentially justifiable, but it's a different axis from the nature of the actor.
Our positions are completely compatible. People are anthropomorphizing LLMs, saying that because humans train on protected works, then it is fine for LLMs to do the same.
If they have only the rights that their human creators have, then access to them cannot be sold, in the exact same way that I cannot sell you a database that I have collected filled with copyrighted material. The "humans do training too" argument only holds if you imbue LLMs with similar rights to humans.
I am allowed to sell myself (in a very limited capacity) to others for them to exploit my training, even if that training was on protected material, which is a privilege humans should have, but machines should not.
Thing is, the LLMs' level of compression of the training set means that, under the same rules that say you cannot sell that database filled with copyrighted material, the LLM is effectively fine to sell. Because you have to be able to meaningfully trace each claim to the final output (the weights). For example, for some older Stable Diffusion model, it was calculated that each individual work's addition or removal resulted in about 1-2 bits of change, meaning the same rules would qualify it as not a derivative work.
However, because this is in tension with the (at least historical) goals of copyright law, the common pattern that is evolving is that AI is not granted copyright over any work it generates, making it a bit of a poison pill for some of the more egregious ideas of corporate abuse. Not sure if the weights will be considered copyrightable either.
Under a "sweat of the brow" doctrine, the creator of a work, even if it is completely unoriginal, is entitled to have that effort and expense protected; no one else may use such a work without permission, but must instead recreate the work by independent research or effort. The classic example is a telephone directory. In a "sweat of the brow" jurisdiction, such a directory may not be copied, but instead a competitor must independently collect the information to issue a competing directory. The same rule generally applies to databases and lists of facts.
306 The Human Authorship Requirement
The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being. The copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind.” Trade-Mark Cases, 100 U.S. 82, 94 (1879). Because copyright law is limited to “original intellectual conceptions of the author,” the Office will refuse to register a claim if it determines that a human being did not create the work. Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884). For representative examples of works that do not satisfy this requirement, see Section 313.2 below.
313.2 Works That Lack Human Authorship
As discussed in Section 306, the Copyright Act protects “original works of authorship.” 17 U.S.C. § 102(a) (emphasis added). To qualify as a work of “authorship” a work must be created by a human being. See Burrow-Giles Lithographic Co., 111 U.S. at 58. Works that do not satisfy this requirement are not copyrightable.
...
Similarly, the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author. The crucial question is “whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.”
The question is, does Claude Code fall into that category of authorship without creative input or intervention from a human author?
The prompts may be copyrightable... but the output if you don't go in and fix it up and provide that minimal amount of human originality to it? That appears to still be an open question of law in the United States.
In many of those examples, there is payment to the creator of the works that others are learning from. Authors are paid for their books, when we listen to music on the radio the musician is paid royalties, etc. When you lead a team and mentor junior engineers you're being paid for your time.
The nature of the source material matters though. Training a model on open source software seems perfectly fair - it has explicitly been released to the public, and learning from the code has never been a contested use.
IMO the questions around coding models should be seen as less about LLMs and more as a subset of the conversation about large companies driving immense profits from the work of volunteers on open-source projects, i.e. it's more about open source than AI.
Scale, and the ability to earn a livelihood from your creations, and/or the ability to control how what you have created is used, for instance to demand attribution.
> When I write code, what I write and how I write it is informed by having read countless source code files over my education and my career. Just as I ingest all that experience to fine-tune how my later code is written, so does the LLM from the code it's seen.
You are presumably human. We have granted humans specific exemptions in copyright law. We have not granted that to LLMs. Why are we so eager to?
What's special about LLMs in your argument? When I was an edgy teenager in the 90s, I'd argue that it's not piracy because the DivX representation of the movie isn't bit-for-bit identical to the Hollywood master or whatever. If your reasoning works for LLMs as the tools, surely it also works for video compression.
I'm not sure where in our lawbooks there are laws that specifically target humans to the exclusion of human-operated tools.
There's also a TON of irony here. What an about-face it is for the community at large* to switch from "information wants to be free, we support copyleft and FOSS" to leaning so heavily on an incredibly conservative reading of IP law.
> I'm not sure where in our lawbooks there are laws that specifically target humans to the exclusion of human-operated tools.
If we take the point of view that LLMs are tools (I agree), then people need to be absolutely certain that these tools don't contain (compressed) representations of copyrighted works.
People seem not to want to do that. And they argue that the LLMs have "learned" or "been inspired" by the copyrighted works, which is OK for humans.
This is the problem. People can't even agree on which of two mutually exclusive defenses to appeal to! Are LLMs tools which we have to ensure aren't used to reproduce copyrighted work without permission, or are they entities that can be granted exemptions like humans can? It can't be both!
> There's also a TON of irony here. What an about-face it is for the community at large* to switch from "information wants to be free, we support copyleft and FOSS" to leaning so heavily on an incredibly conservative reading of IP law.
True. While IP-owning companies like Microsoft now say "it's online, so we can use it".
It's bizarre.
I'll tell you what: I'll drop my conservative stance in defense of FOSS when Windows and the latest Hollywood movie are "fair use" for consumption by whatever LLM I cook up.
> If we take the point of view that LLMs are tools (I agree), then people need to be absolutely certain that these tools don't contain (compressed) representations of copyrighted works.
I've pointed out elsewhere in this thread that this is the opposite of how the real world works.
In actual fact, people who need software built hire a tool (e.g., a software developer like me) to build it for them. That tool - me or you - has inside it a tremendous library of copyrighted works represented. I've worked on enough different projects over the decades that the next CRUD function, or rule-driven data-entry tool, or whatever, that I build is going to draw very significantly from the last ones I built. And those last ones were copyrighted, with those rights held by my employer at the time, and maybe even protected by NDA or defense-style classifications.
Is your position that this is OK so long as it's stuff that I can keep in my squishy brain, but the moment that mechanism moves to silicon, it somehow becomes fundamentally different?
The other major argument I see in this thread is that for LLMs it's different because there's a third party who is aggregating the data, and selling me (or my employer) use of that tool. But this doesn't change the overall picture at all. It just adds one more layer of dereferencing into it. The addition of that middleman hasn't altered the moral landscape: how is hiring me, along with what's in my memory, different from hiring the combination of me plus a helper to supplement my memory? There's an aspect of scale, I suppose. With that helper I can achieve greater quantities, but it's not changing the story in a qualitative way.
> In actual fact, people who need software built hire a tool (e.g., a software developer like me) to build it for them. That tool - me or you - has inside it a tremendous library of copyrighted works represented.
Humans are distinct from tools, both ethically (to most people) and legally. You may not see it this way, but it is the majority opinion and the stance of the law in most jurisdictions. The rest of your paragraph falls apart without considering humans as tools.
(Incidentally: you can own tools. I don't think you want to open that door…)
> Is your position that this is OK so long as it's stuff that I can keep in my squishy brain, but the moment that mechanism moves to silicon, it somehow becomes fundamentally different?
Yes. We, humans, structured our laws because we consider ourselves and our squishy brains special.
This is, for example, why you don't get charged with murder for terminating a computer program. We, the humans, have decided that the right not to be terminated only applies to humans (and other animals, but then because we grant them that protection).
> I'm not sure where in our lawbooks there are laws that specifically target humans to the exclusion of human-operated tools.
It doesn't need to. Laws are for humans.
Laws don't give rights to chainsaws. Or lawnmowers. Or kitchen knives, hammers, screwdrivers, and spades.
You can't use any of those to commit a crime and then claim that the law specifically did not exclude those tools.
Why are you seemingly in favour of carving out an exemption for LLMs?
> Laws are for humans.
Arguing that, because the law did not specifically address "intentionally killing a person by tickling them till they died", you have found a loophole which can be used to kill people is...
Because that allows us to create useful tools that we didn't have before. For me it feels like a carpenter going from a hand-saw to an electrical saw. Still requires the skills of a good carpenter, but faster and easier.
… so a bunch of people just decided that rights we granted to humans also apply to their tools? Without any discussion? This isn't how anything is supposed to work when it comes to common rules!
The common rules are so because we agree on them. On principle, in this case, we do not agree on what the rule should be here, and it's in a way unprecedented. We'll soon converge on a societal agreement. I hope society abstaining from these tools will not be the answer.
The attitude is derived from a general animus many have towards AI companies. They resent the efficacy of AI because it devalues individual expertise.
I can't imagine it being really justifiable to say that training off data is the same as "stealing", when the same claim, that learned information a person could retain and reproduce constitutes copyright infringement, is the subject of many dystopian narratives, like this one, where once your brain is uploaded to the cloud you have to pay royalties on every media product you remember.
Part of how AI works is that it's just really complicated compression; you can get AI to write out Harry Potter novels word for word with the right prompting.
When it picks out a rare bit of code, it will simply be copying that code, illegally, and presenting it without attribution or any license, which is in fact breaking the law; but AI companies are too important for the law to apply to them.
There's been instances where models have spat out comments in code that mention original authors, etc., effectively outing itself as a copyright thief.
There's nothing anyone can do about it, but the suspicion is that the big companies have taken everyone's code on GitHub, without consent, and trained on it.
And now they are spitting out big chunks of copyrighted code and presenting it as somehow transformed, even though all they've actually done is change a few variable names.
It is copyright theft, but because programmers are little people, not Disney, we don't have any recourse.
> And now they are spitting out big chunks of copyrighted code and presenting it as somehow transformed, even though all they've actually done is change a few variable names.
It's pretty likely that I've done the same thing. I mean, I've written enough CRUD functions in my life, for example, that in all likelihood I'm regurgitating stuff that's a copy, for all practical purposes, of stuff I've done before as work-for-hire for my employer. I'm not stealing intentionally or consciously, but it seems quite likely that it's happening. And that's probably true for many of you, at least that have been in the industry for a while.
> There's nothing anyone can do about it, but the suspicion is that the big companies have taken everyone's code on GitHub, without consent, and trained on it.
I asked agent X what is the source of training data it generated code from, it couldn’t say. Then I asked why the code implementation is exactly the same as the output of agent Y. It said they were trained on the same ‘high-quality library’, and still couldn’t say which one.
So I guess that’s fine because everyone is doing it.
You asked a machine that makes things up when it doesn't know the answer a question that it has no way of knowing the answer to. I don't know why you bothered to relay its response.
> The comment is relevant to the suspicion that THE software is using (distributing) some OSS code without attribution.
The accusations in the comment are relevant.
Framing it as a conversation with an LLM and showing its responses, when that LLM does not have access to the answer and is fully making up a response, is irrelevant and distracting.
When I write fizzbuzz do I owe royalties to the inventor of fizzbuzz? Is my brain copyright thieving because I can write out the song lyrics from memory?
For another human being to look at my open source code, learn from it, get inspired by it, appreciate what I did, and let it influence their own creativity would bring me joy. That's why I open sourced it in the first place.
Few people ever actually read open source code, but I'd like to think on the rare occasions they do, they share a connection with the author. I know when I read somebody else's code, for me to understand it I have to be thinking about the problem the same way they were when they wrote it. I feel empathy with them and can sometimes picture the struggle, backtracking, and eureka moments they went through to come up with their solution.
Somehow I don't get the same warm fuzzy feelings about a machine powered by investor money ingesting my work automatically, in milliseconds, and coldly compressing it down to a few nudges on a few weights out of trillions of parameters. All so the machine can produce outputs on-demand for lazy users who will never know of me or appreciate my little contribution, and ultimately for the financial benefit of some billionaires who see me as an obsolete waste of space.
We're moving into the 'industrial age of software'. Your exact issue, of bespoke, well-thought-out and well-crafted code, is one that craftsmen felt at the beginning of the industrial age. Now, parts are designed and churned out by machines that no one sees or cares about (generally speaking). This is where we are going with software, and production at a truly industrial scale has its place.
And so does well-crafted bespoke software.
The engineers who built the foundation for the industrial expansion of our forefathers went through the same exact thing we're going through now. They looked at what existed and used it to inform their efforts. This is what LLMs do.
I'm not attempting to moralize here, just to comment on the parallels. Do I like that a craftsman's work is consumed by the juggernauts with no second thought given? No, I think it's a shame. But I also think the output will never match that of the artisans who practice now. By the very nature of the machines we employ, we cannot match the skill or thought that goes into bespoke code.
It is not even about quality. In fact with an LLM following my orders I can create higher quality code than I ever did before. I always was operating within a budget whether it was defined by the # of hours my customers were willing to pay for, or the # of hours I was personally willing to invest in a side project. This budget manifested in the form of cut features, limited test coverage, limited documentation, and so on. So given the same budget or even a slightly reduced budget I can actually make higher quality software with slop superpowers.
If I spend 2 hours designing the domain model, 1 hour slopping out a rough implementation, and 5 hours polishing it with a combo of handwritten and vibed refactorings, I will get a better result than if I spent 8 hours writing everything by hand.
So my point is not that vibe software is lower quality, as my experience has shown the opposite. It is simply that the spirit of sharing my work was done with the idea that I was sharing it with others who toiled in the same craft, not sharing for consumption by machine. Not that I ever contributed anything very important to the open source world, that anybody depended on. Just personal projects I thought were neat or educational.
In hindsight I would probably still have open sourced what I did, because I think it's valuable to have on record that I competently programmed stuff before AI even existed, like pre-atomic steel. But I don't know if I will open source any personal code going forward.
====
To put it more succinctly: if somebody "ripped off" my open source code in 2018, I wasn't mad about that. Even if they didn't bother to attribute me, well, at least they saw my stuff, had a human brain cell light up appreciating it, and thought it was worth stealing. I'm flattered. But with LLMs my work can be reappropriated without a single human ever directly knowing or caring about it.
Well put. I agree wholeheartedly with your sentiment.
Maybe this is me just being angry at the new world that's being created, but the beauty of the open source ecosystem was humans giving away things they found useful in the hope that other humans could find them useful too. Having a machine take all of that and regurgitate it somewhere else without that connection (for profit, no less) feels like a betrayal of that open source ethos.
Now in the back of my mind I worry that everything I open source will be scooped up by corporations to make them more rich and more powerful, so I end up not publishing anything (not that it was of any value). I suspect I'm not alone in feeling that way.
You’re not a product that was created by other human beings based on someone else’s IP.
It turns out that's false. We know that genes are patentable; remember back during the Human Genome Project, when there was such a rush to patent them? So genes are IP. (This seems bizarre to me, since they're patenting something that was found just sitting there, but this is what the system says right now.)
Well, two other humans (aka mom and dad) did create me, based on those patentable genes (and most likely including some genes that were, in fact, patented).
I'm not sure what to conclude from all of that, but I do think that it invalidates your argument.
It's a little more complicated, and I would argue that the court got it wrong, but you cannot patent a gene as it exists and rests in nature. You can patent the cDNA (reverse-transcribed mRNA) genetic code after intron removal, which they argue is not a natural thing, but I think they misunderstood the science, and really the triviality of the "invention".
And yeah, I'm a big fan, too. I still have the CDs for it, and it still runs in Windows 11!