Working in a field where we're required to define things with mathematical precision, I find it frustrating to listen to lawyers and judges flail about trying to figure out how to interpret laws.
I don't want to claim that the law is a trivial matter or that it could be replaced with software, but I do think our legal system could be greatly enhanced with some ideas from math and software development. There is always going to be ambiguity in legal matters, but that doesn't mean we can't greatly reduce that ambiguity with good tools.
I know this is a really tall order, but one idea in particular I'd like to see is the introduction of some sort of unit testing or model checking during the legislative process, where legal scenarios are parameterized and enumerated to act as a guide for legislators, lawyers and judges. I'd love to see something like Alloy (http://alloy.mit.edu/alloy/) implemented in a way that non-programmers could use to model check things like bylaws, guiding the user through ambiguous and conflicting scenarios. I'm not suggesting we try to make laws satisfiable. What I am suggesting is that we model them in a way that highlights and distinguishes the laws that are poorly crafted from those which are clearly interpreted.
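To make the idea concrete, here's a toy sketch of what scenario-based "unit testing" of a bylaw might look like. Everything here is invented for illustration — the curfew rule, the parameters, and the scenarios are hypothetical, not drawn from any real statute:

```python
# Hypothetical sketch: a simple bylaw encoded as a testable predicate.
# The rule and every scenario below are made up for illustration.

def curfew_applies(age: int, hour: int, accompanied_by_adult: bool) -> bool:
    """A made-up curfew bylaw: minors (under 18) may not be in the park
    between 22:00 and 06:00 unless accompanied by an adult."""
    is_minor = age < 18
    is_curfew_hours = hour >= 22 or hour < 6
    return is_minor and is_curfew_hours and not accompanied_by_adult

# Enumerated scenarios act as the "test cases" legislators would sign
# off on; each one pins down how the rule applies to a concrete fact
# pattern before any dispute reaches a court.
assert curfew_applies(age=16, hour=23, accompanied_by_adult=False)
assert not curfew_applies(age=16, hour=23, accompanied_by_adult=True)
assert not curfew_applies(age=30, hour=23, accompanied_by_adult=False)
assert not curfew_applies(age=16, hour=12, accompanied_by_adult=False)
```

The interesting failures would be the scenarios a legislator can't classify — those are exactly the spots where the bylaw's text is ambiguous.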
Legal ambiguity is a really expensive weight on our society. Not only does this sort of ruling create investment uncertainty, it benefits incumbents who can buy armies of lawyers and intimidate competitors with drawn-out lawsuits. Our legal system should be the great equalizer, but it never will be as long as we let judges and lawyers convince us to resign ourselves to the idea that our current system "is as good as it gets".
Subsection (a) says that copyright extends to "original works of authorship." Subsection (b) says that copyright does not extend to "any idea, procedure, process, system, method of operation, concept, principle, or discovery."
Where does an "original work of authorship" end and where does "method of operation" begin? How would you propose to define those concepts in a way that is less ambiguous? That is, without arbitrarily throwing away the distinction the law is trying to make just because you can't model it?
I don't think this provision is as unambiguous as you make it out to be if different judges are coming to different conclusions. It makes sense that different judges will come to different conclusions because it may not be entirely clear to them if a very specific collection of function signatures is an original work of authorship.
I think being able to test cases pre-emptively has tremendous value, because it forces one to consider and define things up front. Opening up such a system might allow software associations to work with legislators to clarify test cases and definitions ahead of time, creating a process where legislators define things more specifically so that judges have a guide when ruling in esoteric areas.
I don't imagine we'd ever be able to create a satisfiable legal system, but I do think we could concoct a system that highlights ambiguity and legal risk so that it prompts individuals and associations to get legislators to clarify messy laws.
Like I said, this is a tall order, but I think we should be thinking about systems like this.
A modelable law would have defined that distinction ahead of time, and when that definition fails, it would be adjusted accordingly. Those terms are incredibly vague to begin with, as well.
This particular case could have been anticipated at least as far back as the 1970s.
I mean it was at least possible, and (while we're fantasizing about hypothetical legal systems) it would be desirable if laws could be updated with new information rather than reinterpreted through layers of precedent.
Is that desirable? To me, the existing model embodies a much-valued computer science concept: lazy processing. That allows you to avoid resolving hypotheticals that never actually come up, and when you're forced to resolve a question of law, it lets you do so based on concrete application of the law instead of a mere hypothetical.
On the other hand, maybe a less charitable analogy would be prototyping in a dynamic language: edge cases get thought through only when running on some input causes an exception to be thrown.
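The "lazy processing" analogy can be made literal with a generator — a minimal Python sketch, with the docket and case names obviously invented:

```python
# Lazy evaluation: hypothetical questions are resolved only on demand,
# much as courts interpret a law only when a concrete case arrives.

def rulings(cases):
    for case in cases:
        # The expensive "interpretation" work happens here, and only
        # when a consumer actually pulls the next case.
        yield f"ruling on {case}"

docket = rulings(["case A", "case B", "case C"])
# Nothing has been decided yet; creating the generator does no work.
first = next(docket)   # only now is "case A" resolved
```

Cases B and C stay unresolved forever if nobody asks — which is exactly the property being praised above, and exactly the property being criticized below.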
While lazy evaluation is desirable, I do think our present legal system evaluates things too lazily. I'd gladly trade some of the current lazy evaluation for greater up-front certainty of outcome.
Relying extensively on the courts to lazily evaluate ambiguous laws biases outcomes toward those who have capital. Since I do not belong to the capital class, I would prefer less ambiguous laws.
Economists have a term "economic imperialism" which means applying economics outside its traditional domains - in areas such as economic analysis of law or say family relations. I think you've just suggested "software imperialism"...
However, while it's an interesting idea, how about we deal with the ambiguities of things like programming languages (this program runs differently in these two different environments) and computer programs in general ("things stopped working after the update" - that wouldn't happen if programs depended on a formally defined and verified set of things from their environment)? And maybe tackle more complex things like precisely defining the line between flirting and sexual harassment later?
I prefer the term "interdisciplinary research". Specialization of labor will lead us to a societal dead end if we just build impenetrable silos of epistemology. We have to interact to progress.
An article currently trending on the front page about "C in practice" (https://news.ycombinator.com/item?id=9799069) helps make my point (that we barely understand the dark corners of our tools, so perhaps venturing into other territory is a bit premature.)
(Also - the extent to which almost every real-life programming language makes simple things unbelievably complicated sounds like a good example of "impenetrable silos of epistemology"... "language lawyer" is an idiom for a reason. And BTW we have waaaaay less excuses for the huge barriers to understanding that we continuously erect around our work since our subject matter is not nearly as inherently fuzzy as the stuff lawyers, lawmakers and judges deal with.)
The biggest problem is that there is no concept of DRY in law. No single source of laws about $X. It's layers upon layers of conflicting code and it becomes a maze.
Legal precedent is the last thing we would want to mirror in software... Imagine if every resolved bug from every software project, every hacky workaround, was implicitly included in your code, rather than fixing bugs upstream at the source.
(BTW, "DRY" is an inadequate reprise of refactoring. That's always bothered me. "Don't repeat yourself" is one big repeat of an earlier and more clearly and rigorously defined idea.)
I think everyone should be able to program in the same way that everyone should be literate, and should have a grasp of basic mathematics too for that measure.
But the people who write software professionally must be held to a higher standard, just as we expect so much more from professional writers (but we expect everybody to be able to read street signs and write shopping lists.)
I don't wish to be provocative but anyone who can't understand factoring shouldn't be writing software. DRY is an oxymoron.
Because that whole logical positivism thing worked out so well!
Putting flippancy to one side, the issues here will not be solved by any form of notation. The difficulty lies with concepts enshrined in law that will forever be soft and in need of reinterpretation as the context changes.
I have honestly no skin in this game. I just pointed to a project I'd heard of which seems relevant to this discussion.
It would be nice if lawmakers used some tools from code — e.g. some kind of version control for laws which are being batted between different houses and have amendments added and removed. And perhaps the language could be a little more formalized.
Here's what the project's executive director has to say on the matter[1]:
"One technical problem with Computational Law, familiar to many individuals with legal training, is due to the open texture of laws. Consider a municipal regulation stating "No vehicles in the park". On first blush this is fine, but it is really quite problematic. Just what constitutes a vehicle? Is a bicycle a vehicle? What about a skateboard? How about roller skates? What about a baby stroller? A horse? A repair vehicle? For that matter, what is the park? At what altitude does it end? If a helicopter hovers at 10 feet, is that a violation? What if it flies over at 100 feet?
The resolution of this problem is to limit the application of Computational Law to those cases where such issues can be externalized or marginalized. We allow human users to make judgments about such open texture concepts in entering data or we avoid regulatory applications where such concepts abound.
A different sort of challenge to Computational Law stems from the fact that not all legal reasoning is deductive. Edwina Rissland [Rissland et al.] notes that, "Law is not a matter of simply applying rules to facts via modus ponens"; and, when regarding the broad application of AI techniques to law, this is certainly true. The rules that apply to a real-world situation, as well as even the facts themselves, may be open to interpretation, and many legal decisions are made through case-based reasoning, bypassing explicit reasoning about laws and statutes. The general problem of open texture when interpreting rules, along with the parallel problem of running out of rules to apply when resolving terms, presents significant obstacles to implementable automated rule-based reasoning."
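The open-texture problem in the quoted "No vehicles in the park" example becomes vivid the moment you try to encode it: the code cannot be written without deciding every question the quote raises. A toy sketch — the classifications below are arbitrary choices of mine, not anything the statute settles:

```python
# Sketch of "No vehicles in the park". The categories are invented;
# the point is that the contested cases can't be mechanized away.

VEHICLES = {"car", "truck", "motorcycle"}
CONTESTED = {"bicycle", "skateboard", "roller skates", "baby stroller",
             "horse", "repair vehicle", "helicopter"}

def violates_rule(thing: str) -> bool:
    if thing in VEHICLES:
        return True
    if thing in CONTESTED:
        # The statute gives no answer; per the quote above, a human
        # judgment has to be supplied from outside the formal system.
        raise ValueError(f"open texture: is {thing!r} a vehicle?")
    return False

assert violates_rule("car")
assert not violates_rule("picnic blanket")
```

This is essentially the "externalize the open-texture judgments" resolution the quote describes: the formal model handles the clear cases and explicitly punts on the contested ones.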
> What I am suggesting is that we model them in a way that highlights and distinguishes the laws that are poorly crafted from those which are clearly interpreted.
There are as many elements in the set of laws that are clearly interpreted as there are programs with zero bugs. This is not an analogy. These are direct expressions of the same root cause.
I don't mean to be rude but how can you be so foolish? Have you met humans?
There is no algorithm for "distinguish[ing] the laws that are poorly crafted from those which are clearly interpreted." Law is an imperfect tool for managing human wickedness. No symbolic perfection can force a person to be good. The essence of the problem is that people just like to be cussed bastards. Math won't help.
I never said not to work to make law better, that's very important and needful. I am pointing out that logic and reason have limits in the messy human legal sphere. It's strange to me that this needs pointing out but there you are.
I suspect what we will see happen is the District Court will rule that it was fair use, it will go to the Federal Circuit again and then the Federal Circuit will overturn the fair use claim. As the Federal Circuit is incredibly pro-IP, I suspect they will be unpersuaded by any claim otherwise.
At that point, if SCOTUS does not take up an appeal, the computer science world will have a big problem. If APIs are copyrighted and reimplementing libraries violates copyright, then:
What of all the projects that have implemented libc and used glibc's API and headers? E.g.: is musl now GPL?
What of any OS or project that has implemented a POSIX layer? (Is this the decision SCO needed to finally sue someone for Linux violating SCO's copyright?)
What of any project that implements x-plat APIs? As another comment pointed out, Microsoft's Project Islandwood re-implements iOS APIs. But Xamarin, Cordova and other projects all ingest APIs and reconfigure them to make them accessible on multiple platforms. Are they contaminated by copyright now?
If the Federal Circuit decision stands, software might have finally encountered the lump it can't digest as it eats the world.†
† In the United States.
The apparent legal position following this result seems crazy to me. Even in the world of intellectual property law, there has been some recognition that the interests of interoperability and ability to communicate may outweigh the benefits of monopoly protection as an incentive to create and share. The US, for example, takes this view in declining copyright protection on the design of typefaces (as distinct from font files that contain software describing those designs in one specific way). And in Europe, good luck getting a software patent that blocks anyone from using your file format or communications protocol.
Given that programming to an interface is essentially the way we enable interoperability between different software products, allowing the protection of such interfaces to stand seems absurdly counter-productive. This is particularly true if the protection holds retrospectively for APIs that were previously believed not to be protected in that way by those who chose to write code against them, which would presumably be the case here.
If the US allows this position to remain, it could cripple the US software development industry very quickly if the lawyers start throwing their weight around. In practice, as often happens under the US legal system, I expect it would be particularly damaging to the little guy, while the big players would all cross-license their respective portfolios somehow to make the issue go away.
> Microsoft's Project Islandwood re-implements iOS APIs. But Xamarin, Cordova and other projects all ingest APIs and reconfigure them to make them accessible on multiple platforms. Are they contaminated by copyright now?
Which is funny, because Microsoft played a big role in helping Oracle win this one:
Additionally, there are the APIs that no one thought too much about the copyright of. Who owns the JavaScript API, for example? The W3C, or the first browser vendor to implement each component, on a component-by-component basis?
AOL, perhaps? They own whatever parts of Netscape haven't been sold or transferred, and since API copyright is a form of intellectual property that no one knew existed, the rights to the original JS API would fall to them.
It makes me wonder why moving away from Java is not a higher priority for the Android team.
According to the AOSP tracker, their current efforts at the language level seem to center on the integration of Java 8.
Sure, as far as languages go, there are many tasks more beneficial to the platform than changing its language, but the legal issues they are encountering sound like another good motivation.
No way Google can move Android completely away from Java until long after the legal situation is resolved one way or another. There's just too much third-party code.
The AOSP tracker only discloses their public efforts. I wouldn't be surprised if Google has been working on a parallel Android project to replace their use of Java with another language - perhaps one that they developed. In fact, I believe it would be irresponsible of them not to have a Plan B in place should the situation become any more untenable.
IIRC they have Java 6, 7 and 8 configurations in the new Jack & Jill toolchain.
I don't know if that necessarily means the entirety of the Java 7 & 8 features will be ported.
It supports Java 7 syntax. The invokedynamic you mentioned is a JVM feature, and Dalvik isn't a JVM. I think the lack of NIO is an annoyance that can be fixed, though.
I'd be interested in hearing about the process of returning to the District Court (or really any previous court) to defend on a fair use argument. Was that case not closed? Was it left pending until the question of copyright itself was addressed by the appeals court?
Google's defense for the copyright phase was that the APIs couldn't be copyrighted and that, even if we entertained the idea that they could be, Google's use would be fair use. The District Court, having agreed with Google on the copyrightability aspect, came to no conclusion on the fair use issue. (Fair use is an affirmative defense to copyright infringement: it concedes that the copying occurred but argues the use is legally permitted, so no penalty is owed.)
When the Federal Circuit court overturned District Court's decision on the copyrightability of SSO, they remanded the issue of fair use back to the lower courts. After all, both Google and Oracle's goal through all this is not an academic exercise in establishing the general boundaries of copyright; their question is, "Are we/they going to have to pay in this case?".
Does the ISO standardization process include a copyright license or assignment? Copyright has pretty specific rules about who owns a work. Assigning your copyright to someone else requires certain explicit steps.
It looks like you have to buy a copy of the standard to get a license to use it. That's just from reading their website, but I too cannot find a definite answer.
Did they though? At the time wasn't Java supposed to be "open source"? By that logic many companies who used (and use) Java have made the same mistake.
I would say it's Google's executives and engineers who have dropped the ball if the moment Oracle won the case the last time, they didn't start planning and working on stripping Java out of Android. If they still have a "let's wait and see" attitude about it for another 2-3 years or how long these Court battles will take to settle, then THAT would be a major fail on their part. Because by then they would have to pay who knows how many billions of dollars and it would take yet a few more years (5+?) to do the whole transition.
Sun did not allow anyone to use GPL Java (SE) on mobile devices ('field of use' restrictions). For that, you had to license J2ME from Sun under different conditions. If you ever had the misfortune of targeting J2ME, you'd know why it was a no-brainer for Google. I shudder to imagine writing Android apps using J2ME.
It is prudent to note that Google didn't write its own implementation from scratch: they took advantage of Apache Harmony (from erstwhile Sun frenemy IBM). This is the same Apache Harmony to which Sun repeatedly refused to provide the TCK.
For the same reason it supports choosing permissive licenses like BSD in some instances. I.e., it advances the ideals of free software better than if you were to take the opposite stance.
In my opinion implementing someone else's API should be allowed under the fair use provision of the copyright law. This is needed so that whoever came up with the API does not hoard applications developed for the API and stifle innovation and competition at the platform level, as Microsoft did for many years with the Win32 API.
Apple and Microsoft have had a cross-licensing deal for decades which allows them to use each other's IP as long as they don't create an outright clone of the other guy's product.
Regardless, if the rumors are true, Apple is about to release a Surface tablet clone (iPad Pro), so they might not be so trigger-happy, out of fear of retaliation.
How exactly is the rumored iPad Pro a Surface clone? Additionally, without the optional and expensive keyboard, could you not call the Surface an iPad clone by your definition?
Not really, they'll pay off Apple. Who it will really hurt is start ups that don't have the money to license. If APIs are held to be copyrightable and not subject to fair use then this will lead to reduced innovation in the US. A similar ruling in Germany went the other way and stated that APIs are functional and hence not copyrightable.
Your history is incorrect, though. Apple was authorized by Xerox PARC to use their technology in exchange for allowing Xerox to buy 100,000 shares in Apple before their IPO.
I would use the word "concepts" rather than technology, although I'm sure there was some of that as well. The general UI, though, came from Douglas Engelbart's late '60s oN-Line System (NLS), the subject of the Mother of All Demos. Pretty much everything, windows, the mouse, just no graphics.
It was much copied; e.g., I think the Lisp Machine had the same basic GUI working by the time work on the Apple Lisa started, and it was certainly reliable and somewhat polished as of the fall of 1979/1980.
If you're a student of this sort of history, it's a bit sobering how little "new" stuff we're really doing, it was all conceptualized and generally at least prototyped by the end of the '60s.
This leaves in place the ruling of the Federal Circuit, which ruled that the Java APIs in this case were copyrightable, though it left open the possibility that Google had a fair use defense.
SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce in the early 1970s.[14] This version, initially called SEQUEL (Structured English QUEry Language), was designed to manipulate and retrieve data stored in IBM's original quasi-relational database management system, System R, which a group at IBM San Jose Research Laboratory had developed during the 1970s.
In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the potential of the concepts described by Codd, Chamberlin, and Boyce, and developed their own SQL-based RDBMS with aspirations of selling it to the U.S. Navy, Central Intelligence Agency, and other U.S. government agencies. In June 1979, Relational Software, Inc. introduced the first commercially available implementation of SQL, Oracle V2 (Version 2), for VAX computers.
Does IBM have a case to go after Oracle for using the SQL API SSO in their database products?
Regardless of whether SQL is an ISO standard, doesn't IBM still own the copyright to SQL or were they forced to give it up as part of the condition to becoming an ISO standard?
I don't know what to say, this feels so wrong (APIs being copyrighted). If any tech company deserves hate it's Oracle without a doubt. I just hope common sense prevails at the end and Google wins.
I'm disappointed, if only because the oral argument for a case like this might have been fascinating: what would the Supreme Court justices have to say about computer science? It might have turned out like the violent video game ban case from a few years back, when Justice Kagan quipped that she (or rather, her clerks) was familiar with games such as Mortal Kombat. Perhaps she, or her clerks, also have some experience with Java?
The Oracle v. Google copyright case hinged on two things. The first was about the copyrightability of APIs (in legal jargon, "structure, sequence and organization"). The second was a function called rangeCheck, which was:
private static void rangeCheck(int arrayLen, int fromIndex, int toIndex) {
    if (fromIndex > toIndex)
        throw new IllegalArgumentException("fromIndex(" + fromIndex +
                ") > toIndex(" + toIndex + ")");
    if (fromIndex < 0)
        throw new ArrayIndexOutOfBoundsException(fromIndex);
    if (toIndex > arrayLen)
        throw new ArrayIndexOutOfBoundsException(toIndex);
}
The function is obviously trivial: it's something that could be recalled exactly from memory, and could easily be recreated by accident; given the constraints on what it's supposed to do, there aren't many degrees of freedom. At the district court level, Judge Alsup ruled that copying of this function was "de minimis". At the Federal Circuit level, Judges O'Malley, Plager and Taranto overturned this part of the district court ruling (https://www.docketalarm.com/cases/US_Court_of_Appeals_Federa...).
On the issue of "structure, sequence, and organization": Google copied the class and method names from Java so that their reimplementation of Java, Dalvik, would be interoperable with existing Java programs and Java programmers' habits. Judge Alsup ruled that this is a "command structure, a system or method of operation" and not copyrightable under 17 USC 102(b) (https://www.law.cornell.edu/uscode/text/17/102) (http://www.groklaw.net/pdf3/OraGoogle-1202.pdf). This, too, was overturned by the Federal Circuit.
There is a fairly key difference here between Judge Alsup and the Federal Circuit judges. Part-way through the trial, Alsup revealed that he has actually programmed; he understood what was going on and would not be easily fooled. At the time, this revelation was a dramatic event.
The Federal Circuit judges, however, hold no such distinction. It is no surprise, then, that it was they, and not Alsup, who made a ruling that to actual programmers is prima facie unreasonable and dangerous. They are outsiders. If you read their opinion (https://www.docketalarm.com/cases/US_Court_of_Appeals_Federa...) you can see little errors in terminology creep in that reveal their lack of background and understanding. And, lacking that understanding, I don't see how they can be seen as having moral authority. They have only legal authority, and the ability to do damage with it.
The snippet of code you quoted was written by Joshua Bloch (author of Effective Java and a then-employee of Sun). When Joshua was working for Google he implemented the rangeCheck method for Android from scratch, but since he was the original author of the Sun Java implementation, he naturally did it the same way.
Why would Google need the SSO of those 37 Java APIs for their VM to work? Wasn't the relevance of those 37 APIs just to ensure source-code compatibility with existing Java libraries?
"In certain appellate situations, where the old form of petition for review is prescribed, the name of the petitioner -- the party seeking review -- goes first, whether the petitioner is plaintiff or defendant.
Thus in the familiar "certiorari" ... where the Court is responding to the request of the party seeking review."