I don't think it changes much about licensing in particular. People argue that because the AI was trained on this code, its output is a derivative work. But it must be borne in mind that AI training doesn't usually lead to memorizing the training data, but rather to learning its general patterns. In the case of source code, it learns how to write systems and algorithms in general, not one particular function. If you then describe an interface to it, it applies general principles to implement that interface. Its ability to succeed depends primarily on the complexity of the task. Give it the interfaces of a closed-source and an open-source project of similar complexity, and it will have roughly equal success implementing both.
Even prior to this, relatively simple projects licensed under share-alike licenses were in danger of being cloned under either proprietary or more permissive licenses. This project in particular was spared, basically because the LGPL is permissive enough that it was always easier to just comply with the license terms. A full-on GPLed project like GCC isn't in danger of an AI being able to clone it anytime soon. Never mind that it was already cloned under a more permissive license by human coders.
Go was trying to be a better C++. In C++ there are infinitely many different kinds of constructors, and that was too complicated, so they made a language with only one construction mechanism. Go isn't the way it is because nobody knew any better; it's because its designers deliberately chose to avoid adding things they thought weren't beneficial enough to justify their complexity.
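To make the contrast concrete, here is a minimal sketch of Go's single construction mechanism, the composite literal; the `Config` type and `NewConfig` function are illustrative names, not from any particular project:

```go
package main

import "fmt"

// Config is built with one mechanism: the composite literal.
// C++ offers default, copy, move, converting, and initializer-list
// constructors; Go deliberately has only this.
type Config struct {
	Host string
	Port int
}

// Where validation is needed, the convention is a plain NewXxx
// function, not language-level constructor machinery.
func NewConfig(host string, port int) (Config, error) {
	if port <= 0 {
		return Config{}, fmt.Errorf("invalid port %d", port)
	}
	return Config{Host: host, Port: port}, nil
}

func main() {
	c := Config{Host: "localhost", Port: 8080} // the one way to construct
	fmt.Println(c.Host, c.Port)
}
```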
If they wanted error handling, they would have thought for a bit and then picked a good-enough solution. "We only want 'X but perfect'" is the same as "we don't want X".
They don't need to be services. You can, and many projects do, structure your code as a set of loosely coupled modules. Each module has a responsibility or set of responsibilities, and they communicate with each other via well-defined interfaces. To expose code like this to an LLM, you would have it make a change to one or sometimes two modules, with access to the interface docs of all the other modules. The disadvantage compared to microservices is that if a module crashes it takes the entire process down with it, and you can't move a module onto a different machine or spin up multiple instances of it as easily. The advantage is that communication happens via function calls, which are simpler and more efficient than RPC.
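A minimal sketch of the pattern in Go; the `Store` and `Greeter` names are illustrative, and the point is that each module depends only on the other's interface, with communication as plain function calls rather than RPC:

```go
package main

import "fmt"

// Store is the interface the storage module exposes to the rest
// of the program. Callers depend on it, not on the implementation.
type Store interface {
	Get(key string) (string, bool)
	Put(key, value string)
}

// memStore is one implementation; it could be swapped for another
// (a disk-backed or remote one) without touching callers.
type memStore struct{ data map[string]string }

func newMemStore() *memStore {
	return &memStore{data: map[string]string{}}
}

func (s *memStore) Get(key string) (string, bool) {
	v, ok := s.data[key]
	return v, ok
}

func (s *memStore) Put(key, value string) {
	s.data[key] = value
}

// Greeter is a second module that depends only on the Store
// interface. The call into the store is an in-process function
// call: no serialization, no network hop.
type Greeter struct{ store Store }

func (g Greeter) Greet(name string) string {
	g.store.Put("last", name)
	return "hello, " + name
}

func main() {
	g := Greeter{store: newMemStore()}
	fmt.Println(g.Greet("world"))
}
```

If a module like `memStore` panics, it takes the whole process down, which is the trade-off against microservices mentioned above.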
> I think this gestures at a more general point - we're still focusing on how to integrate LLMs into existing dev tooling paradigms.
This is what we should be doing, for a couple of reasons. For one thing, humans don't hold an entire codebase "in context" at a time either. We should recognize that the limitations of an AI mirror the limitations of a person, and hence can have similar solutions. For another, the limitations of today's LLMs will not be the limitations of tomorrow's. Redesigning our code to suit today's limitations will only cause us trouble down the road.
I don't know what the topography of Houston is like, but here in Toronto, a few hundred meters would move you from the bottom of a deep river valley to the top of it. I would imagine they made sure they could get insurance before building and wouldn't have picked any place with a significant risk.
The topography of Houston is that everywhere is a few hundred meters from a flood zone. You are exactly right; the area did not even come close to flooding during Harvey and is a good 30 ft higher than the flood zone OP is referencing.
They demonstrated that Swift's C++ interop isn't good enough, but does it follow that Rust's is better? Genuinely asking, as I don't have experience with it. I would imagine that if they rejected Swift for that reason originally, they foresaw even more severe issues.