Clang can target Windows just fine afaik, although I'm sure the whole process could be improved.
That said, as long as Windows is the bigger, more profitable market, I wouldn't expect a switch unless the dev tooling situation becomes dramatically better on Linux.
> In addition, success is generally pretty well-defined. Everyone wants correct, performant, bug-free, secure code.
I feel like these are often not well defined? "It's not a bug, it's a feature", "premature optimization is the root of all evil", etc.
In different contexts, "performant enough" means different things. Similarly, I've often seen different teams within a company hold differing opinions on "correctness".
"life of the program" might imply it needs to begin life at program start. But it can be allocated at runtime, like an example in the list shows. So its rather "lives until the end of the program", but it doesnt need to start life at the start of the program
Not who you were asking, but for me it was about 1800 hours of study + SRS + reading + listening for simple shows (not ones with "tons of idioms", or at least not idioms I'm not already familiar with). This was for Japanese; European languages should be easier.
Speaking as an Argentinian, every time I hear about someone using crypto in that way, it's to avoid taxes, which seems legally murky/gray to me (if not directly illegal, though not currently prosecuted).
Of course, but there are some oddities in tool use compared to other industries. At my job we use Perforce for version control for example, which I think is more common in the game industry than other solutions for whatever reason. Naturally everyone here hates it.
> Perforce for version control for example, which I think is more common in the game industry than other solutions for whatever reason.
The last game I worked on was around 80 GB built. The Perforce depot was many terabytes, not something you want on every person's workstation. Game companies use Perforce for a very good reason.
But not everybody here has to manage many GB, or even TB, of assets in their VCS. I'd wager game company build/dev engineers know what they're doing in picking Perforce.
> Entity–component–system (ECS) is a software architectural pattern mostly used in video game development for the representation of game world objects. An ECS comprises entities composed from components of data, with systems which operate on the components.
> Entity: An entity represents a general-purpose object. In a game engine context, for example, every coarse game object is represented as an entity. Usually, it only consists of a unique id. Implementations typically use a plain integer for this.
> Common ECS approaches are highly compatible with, and are often combined with, data-oriented design techniques. Data for all instances of a component are contiguously stored together in physical memory, enabling efficient memory access for systems which operate over many entities.
> History
> In 1998, Thief: The Dark Project pioneered an ECS.
So, according to Wikipedia:
- An entity is typically just a numeric unique id
- Components are typically physically contiguous (i.e. an array)
- Their history began with Thief pioneering them in 1998
I was rather expecting code examples, so that we could deconstruct the language primitives being used for the implementation from a CS language semantics point of view.
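For the sake of discussion, here's a minimal sketch of what the quoted description maps to (Rust, names are my own invention; real ECS libraries like bevy_ecs, specs, or EnTT use sparse sets or archetypes rather than naive parallel Vecs):

```rust
// An entity is just a unique integer id.
type Entity = u32;

// Components are plain data.
#[derive(Clone, Copy)]
struct Position { x: f32, y: f32 }

#[derive(Clone, Copy)]
struct Velocity { dx: f32, dy: f32 }

// The world stores each component type in its own contiguous array,
// indexed by entity id (None = the entity lacks that component).
#[derive(Default)]
struct World {
    positions: Vec<Option<Position>>,
    velocities: Vec<Option<Velocity>>,
}

impl World {
    fn spawn(&mut self) -> Entity {
        self.positions.push(None);
        self.velocities.push(None);
        (self.positions.len() - 1) as Entity
    }
}

// A "system" is just a function that iterates over the component arrays.
fn movement_system(world: &mut World, dt: f32) {
    for (pos, vel) in world.positions.iter_mut().zip(&world.velocities) {
        if let (Some(p), Some(v)) = (pos.as_mut(), vel) {
            p.x += v.dx * dt;
            p.y += v.dy * dt;
        }
    }
}

fn main() {
    let mut world = World::default();
    let e = world.spawn() as usize;
    world.positions[e] = Some(Position { x: 0.0, y: 0.0 });
    world.velocities[e] = Some(Velocity { dx: 1.0, dy: 2.0 });
    movement_system(&mut world, 0.016);
}
```

Seen this way there isn't much in the way of language primitives to deconstruct: it's integer ids, arrays, and plain functions. The more interesting language-semantics questions probably show up in how real implementations type queries over multiple component arrays without aliasing violations.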
> I think his definition of OO is different to what we've got used to. Perhaps his definition needs a different name.
I've seen "OOP" used to mean different things. For example, sometimes it's said about a language, and sometimes it's unrelated to language features and simply about the "style" or design/architecture/organization of a codebase (Some people say some C codebases are "object oriented", usually because they use either vtables or function pointers, or/and because they use opaque handles).
Even when talking about "OOP as a programming language descriptor", I've seen it used to mean different things. For example, a lot of people say Rust is not object-oriented. But Rust lets you define data types, lets you define methods on those types, and has a language feature to create a pointer+vtable construct from what can reasonably be called an interface (a "trait" in Rust). The "only" things it's lacking are inheritance, perhaps some ergonomics, and possibly a culture of OOP. So one definition of "OOP" could be "a programming language that has inheritance as a language feature". But some people disagree with that even when using it as a descriptor of programming languages; they might think it's actually about message passing, or encapsulation, or a combination, etc.
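To make the pointer+vtable point concrete, a minimal sketch (my own example, not from the talk or the article):

```rust
// A trait playing the role of an interface.
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { side: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

impl Shape for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

// `Box<dyn Shape>` is a pointer plus a vtable; each call dispatches
// dynamically, like virtual methods in a classic OOP language, with
// no inheritance anywhere.
fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}
```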
And when talking about "style"/design, it can also mean different things. In the talk this post is about, the speaker mentions "compile time hierarchies of encapsulation that match the domain model". I've seen teachers in university teach OOP as a way of modelling the "real world", and say that inheritance should be a semantic "is-a" relationship. I think that's the sort of thing the talk is about. But like I mentioned above, some people disagree and think an OOP codebase does not need to be a compile time hierarchy that represents the domain model; it can be used simply as a mechanism for polymorphism or as a way of reusing code.
Anyways, what I mean to say is that I don't think arguing about the specifics of what "OOP" means in the abstract is very useful, and since in this particular piece the author took the time to explicitly call out what they mean, we should probably stick to that.
If my memory isn't failing me, that was part of the reason Rust went with postfix notation for its async keyword ("thing().await") instead of the more common syntax ("await thing()").
Yep, and that itself was similar to the rationale for introducing `?` as a postfix operator where the `try!(...)` macro had previously been used. In retrospect, it's kind of funny to look back and see how controversial that was at the time, because despite there being plenty of criticism of the async ecosystem in general, postfix `.await` might be the one thing that is consistently praised by the people who have to use it. People might not like using async, but when we do use it, we seem pretty happy with the syntax for `.await`.
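The usual argument is chaining. A small sketch with hypothetical functions, just to show the shape of the syntax:

```rust
struct User { name: String }

// Stand-ins for real async work (e.g. a network call and a parse step).
async fn fetch(id: u64) -> Result<String, std::io::Error> {
    Ok(format!("user-{id}"))
}

async fn parse(raw: String) -> Result<User, std::io::Error> {
    Ok(User { name: raw })
}

async fn get_user_name(id: u64) -> Result<String, std::io::Error> {
    // Postfix `.await` and `?` read left to right:
    let user = parse(fetch(id).await?).await?;
    // With prefix syntax this would nest instead, roughly:
    // `await parse((await fetch(id))?)`.
    Ok(user.name)
}
```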
I'm a bit confused. What does any of this have to do with the central thesis of the talk? ("Compile time hierarchies of encapsulation that match the domain model were a mistake")
I understand that OOP is a somewhat diluted term nowadays, meaning different things to different people and in different contexts/communities, but the author spent more than enough time clarifying, in excruciating detail, what he was talking about.