Crawford's work is worthy of study, as is the question of why his games repeatedly failed to find an audience. It embodies the "simulationist" aesthetic of game design: given enough modelled parameters, something emergent and interesting will happen. This was a characteristic assumption of the 20th century: computers were new and interesting, and simulations did work when you asked them to solve physics problems and plan logistics. Why wouldn't the same approach work for narrative?
But then you play the games, and they're all so opaque. You have no idea what's going on, and the responses to your actions are hard to grasp. And if you do figure the model out, it usually collapses into a linear, repeatable strategy, and the illusion of depth disappears. You can see this happening from the start, with Gossip. Instead of noticing that his game didn't communicate and looking for points of accessibility, he plunged further into computer modelling. The failure is one of verisimilitude: the model resembles ground truth on paper, but it's uninteresting to behold because it doesn't cohere into a whole. It just reflects the designer's opinions about "how the world should work", which is something you can find in any comments section.
Often, when Crawford lectured, he would go into evo-psych theories to build his claims: that is, he was confident that the answers he already accepted about the world and society were the correct ones, and the games were a matter of illustration. He was likewise confident that a shooting game would be less thoughtful than a turn-based strategy game because the moment-to-moment decisions were less complex, and the goal should be to portray completeness in the details.
I think he's aware of some of this, but he's a stubborn guy.
This is evident in his description of programming in his later years:
> Time and time again I would send my friend Dave Walker an email declaring that Javascript (or something else) was utterly broken, incapable of executing the simplest program without errors. Dave would ask to see the source code and I would present it to him with detailed notes proving that my code was perfect and Javascript was broken. He’d call me, we’d discuss it, and eventually he’d say something like, “Where did you terminate the loop beginning at line 563?” There would be a long silence, followed by the tiniest “Oh” from me. I’d thank him for his help and hang up. A week later, I’d be fuming again about another fundamental flaw in Javascript.
Many of us are stubborn and will work hard and long, without much positive external feedback, under the assumption that our vision is correct and the audience, if one even exists, is wrong. Much fundamental progress has been made this way: Faraday, Einstein, Jobs, etc. But of course many times one simply is wrong and refusing to see it means throwing years away, and whatever else with it (money, relationships, etc.). It's a hard balance, especially for the monomaniacal without much interest in balance. Finding out how to make solid (public, peer-reviewed, evidence-based, whatever) incremental progress towards the paradigm shift seems to be the way if one can manage.
That quote about JavaScript is... huh. I do not understand how you can even begin coming to the conclusion of "JavaScript [is] utterly broken, incapable of executing the simplest programs without errors" when obviously, JavaScript (which I do not like, by the way) is productively used on a large scale (even back then), and constantly under scrutiny from programmers, computer scientists, language designers... it's just baffling.
It reminds me of when I was around 10 years old or so, maybe slightly older, playing around with Turbo C (or maybe Turbo C++) on DOS. I must have gotten something very basic about pointers (which were new to me at the time) wrong: probably declaring a char* pointer but never allocating any memory for it, leaving it entirely uninitialized. My string manipulation then failed in weird and interesting ways (since this was on DOS, without memory protection, you wouldn't easily get a clean crash like a segmentation fault; instead you'd often see "more interesting" corruption).
Hilariously, at the time I concluded that the string functions of Turbo C(++) must be broken, and I moved "string.h" out of the way so I wouldn't use it. But even then I shortly realized how absurd that was: Borland could never have sold Turbo C(++) if the functions behind the string.h API were actually broken, so my own code had to be the buggy part. And remember, I was only 10 years old or so; otherwise I don't think I would have jumped to that weird conclusion in the first place.
Nowadays, I do live in this very tiny niche where I actually encounter not only compiler bugs, but actual hardware/CPU bugs, but even then I need a lot of experiments and evidence for myself that that's what I'm actually hitting...
Sure, but “JavaScript [is] utterly broken, incapable of executing the simplest programs without errors” is a bit much. I find it hard to believe that even when I’m completely out of touch, I’d say that about a language that people are obviously productive in (as much as I hate JS myself).
Sometimes when I play a point-and-click adventure and I'm stuck on a puzzle for hours, I start to think: I've tried everything... surely there must be some kind of bug blocking my progress.
Only to then realize (after reading the walkthrough) that there was indeed a way.
I think it's human nature to look for blame not only in yourself but everywhere else... anyhow, since the author is reflective, we should be forgiving as well.
Other languages have their problems too, but before some basic libraries (jQuery/Underscore) and language enhancements (TypeScript/CoffeeScript), JavaScript was arguably quite simplistic, and parts of the language were straight-up anachronistic.
If you've ever been unfortunate enough to have to wrangle a VBScript routine, it was like that, only less bad. If not, go find some assembly code and teach it to yourself, and then imagine that instead of side effects in registers there were random effects on your code and visual state.
And like assembly code, you could now imagine that the same code might behave wildly different on different machines in different browsers.
So a bit of "old man"-ism, but I also imagine his view of JavaScript was tainted by the early days. It's better in some ways now, worse in others; I don't mean to say it's the worst or the best, just to offer perspective on where it came from.
I’m well aware of all of those things (I program modern assembly for a living, and witnessed the evolution of JS), but the quote was “JavaScript [is] utterly broken, incapable of executing the simplest programs without errors”, which is a bit more extreme than what you’re describing.
It’s a quality I’ve run into with a couple of people: young or old, once they’ve ossified into thinking they are Better and Smarter than everyone else, they stop being curious and simply start mandating their wild “truths”.
I’m sure we’ve all done it at one time or another, but repeated as habit without learning seems to speak of a certain kind of personality.
> He was likewise confident that a shooting game would be less thoughtful than a turn-based strategy game because the moment-to-moment decisions were less complex
Sounds like a classic example of Moravec’s paradox:
It’s not that a shooting game (or an action game, or, heck, talk to competitive fighting-game players) involves fewer decisions for the player to make, it’s that the decisions being made are subconscious decisions about movement, and difficult to put into words.
1. Assembly coding within a REPL. Forth supports "load-and-store" without the additional bookkeeping steps of assembly. Once the program works, it can be incrementally rewritten into the assembly if needed, or used to bootstrap something else. Historically this is probably the single biggest usage, because the language works as a blunt instrument for that within the standard wordsets. Lots of programs on the early micros shipped with code that was developed with Forth, but with the Forth interpreter discarded at the last step; and where there is novel hardware and novel applications, Forth tends to come up as the bootstrap.
2. Minimal-dependencies coding. For the same reason that it's a good bootstrapping tool, Forth ends up being portable by assuming nothing. While different Forth systems are all subtly incompatible, the runtime model is small enough to wrangle into doing what you want. Stack machine VMs basically are "Forth with more sandbox and less human-readability".
3. "Big ideas" coding. The "human-readable stack machine" aspect means it's a useful substrate for language design - being programmable, you can shift the imperative interpreter model in the direction of new syntax and new general-purpose data structures, while still retaining a way to drop all the way down to assembly - the biggest downside is that this doesn't let you easily introduce existing library code, so bootstrapping from within Forth would take a long time and you would most likely get stuck on trivial string processing. But Forth as the second of a two-step process where you "compile to Forth" using something more batteries-included is actually pretty reasonable as an alternative to generating a binary or designing an original VM.
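The "human-readable stack machine" idea is easy to see in miniature. Below is a toy illustration in Python, not a real Forth (it omits the return stack, compiling words, and memory access), showing just the core execution model: a data stack, a word dictionary, and colon definitions.

```python
# A toy Forth-like interpreter: a data stack plus a dictionary of words.
# Real Forths add a return stack, immediate words, and direct memory
# access; this sketch only demonstrates the execution model.

def make_interpreter():
    stack = []
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "swap": lambda: stack.extend([stack.pop(), stack.pop()]),
        ".":    lambda: print(stack.pop()),
    }

    def run(source):
        tokens = iter(source.split())
        for tok in tokens:
            if tok == ":":                 # colon definition: new word
                name = next(tokens)
                body = []
                for t in tokens:
                    if t == ";":
                        break
                    body.append(t)
                words[name] = lambda b=body: run(" ".join(b))
            elif tok in words:
                words[tok]()
            else:
                stack.append(int(tok))     # anything else is a literal
        return stack

    return run

run = make_interpreter()
run(": square dup * ;")
run("7 square")                            # leaves 49 on the stack
```

The whole "language" is the dictionary plus the stack, which is why bootstrapping one on novel hardware is so cheap.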
You don't even have to look to geopolitical analogies. It's an everyday thing, all the way down to basic "exclusive club" gatekeeping.
There's a longstanding tendency across financial systems historically to use the law to bar access to the "real" products for various reasons that happen to favor the incumbent elite. Instead, if you get any access, it's the version mediated by a middleman of some kind. There is often a rationalization in play, but the effective control over societal outcomes is the same.
Want to found a disruptive company in 16th century Europe? You had better have a royal charter.
Maybe it's the 19th century and you have a great invention: "Patent fees for England alone amounted to £100-£120 ($585) or approximately four times per capita income in 1860." [0]
You're a laborer in 1900, and you've pooled a little nest egg you want to use to trade stocks? You can't afford the real stuff, so you will have to play in a bucket shop.
You're a middle-class Black person in the 1950's US and you want to own a home or start a business? Redlining ensures that you won't get a good deal or your neighborhood of choice, nor will you get a loan from the major banks (at least, not one on reasonable terms).
And so I have to conclude that the whole basis of the debt system is always subject to some form of gatekeeping, at some point, and that's what has drawn people back to precious metal exchange over centuries, despite its limits. We've been through a long period where debt worked really well, because our economies experienced industrial growth patterns and could coexist within a stable framework (some world wars and interventions notwithstanding). That does not mean it's better or forever.
The same kind of framework is in the process of being enforced in cryptocurrency; cypherpunk-friendly privacy coins with some adherence to Bitcoin's original spirit, like Monero or ZCash, have been delisted from most exchanges through regulatory pressure, while defanged "blockchain economy" tokens are available everywhere and heavily promoted. Meanwhile, a substantial number of token exchange services will play games with your ability to withdraw to keys you own.
But I think that's going to be about as hopeless an endeavor as stopping music piracy was. It's abundantly clear that we're headed towards a long-term breakdown in "trust me" debt economies and their model of operation, even if some of the leaks get plugged in the near term in the way that Spotify "solved" piracy[1]; what "trust me" now produces at Internet scale is increasingly sophisticated ransomware hacking. So while debt and lending could still exist and be a rewarding venture, tokens lacking credible mechanisms to back their fundamental value and consensus are going to wash out.
(I also think the El Salvador plan is a stunt - a way of marketing the country with a side of personal benefit - albeit one that could become consequential in surprising, unpredictable ways, in the way Bitcoin has been generally.)
This is a great comment. Great examples of gatekeeping. I think you're right on the breakdown of "trust me" models. I also see a problem with fundamental value in the crypto space.
And I switched from a mild crypto believer to a metal stacker a couple of years ago.
I think one of the next big avalanches will be some more issues with crypto exchanges. There's a lot of people in the crypto space without the technical knowledge to properly secure their assets. I was one of the people who got burned by Mt. Gox. It wasn't everything, I had multiple wallets under my own control, but it was enough to hurt. And that wasn't a one time event.
What he calls "world construction" involves the development of a rubric custom to the problem.
This creates a faster feedback loop inside of the larger, noisier one. Your feedback is now guided around the question of "what makes the rubric itself better?" This can be done on principle, with limited access to external information. Philosophical thinking is eminently suited to this style of problem, but it can be supplemented with short-term empirical studies that add some falsifying points and narrow your cone of uncertainty.
At the end you've generated a list of yes/no questions forming the rubric of whether the course of action is likely to succeed. It can be turned into a ranking score, or a pass-fail threshold.
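Mechanically, that reduction is trivial; a sketch, with entirely invented rubric questions, of both the ranking-score and pass-fail forms:

```python
# A rubric as a list of yes/no questions, reduced to either a ranking
# score or a pass/fail threshold. The questions are illustrative only.
RUBRIC = [
    "Does it solve a problem someone actually has?",
    "Can a newcomer understand it in one sitting?",
    "Does it survive the most hostile perspective you know?",
]

def score(answers):
    """answers: dict mapping question -> bool. Returns fraction of 'yes'."""
    return sum(answers[q] for q in RUBRIC) / len(RUBRIC)

def passes(answers, threshold=0.6):
    """Pass/fail form: the score must clear a chosen threshold."""
    return score(answers) >= threshold

answers = {RUBRIC[0]: True, RUBRIC[1]: True, RUBRIC[2]: False}
```

The value is obviously not in the arithmetic but in forcing the questions to be written down and answered one by one.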
If you're frustrated by the idea of just making it up on principle, that's a frustration with philosophy itself; it rarely "works" until you accept some pragmatic premises around what is "good" or "true". The point of having a large number of questions, using a wide variety of perspectives, is that they test the overall coherency of the premise. Something can work fine from one perspective, and then completely fail in another. When that happens, it's a good sign that you have more to improve.
It's quite an important life skill to practice. It's easy to go along with the crowd, but this is a way of breaking away from it.
I've gone down roads similar to this. Long story short: the architecture solves for a lower-priority class of problem, with respect to games, so it doesn't pay a great dividend, and you add a combination of boilerplate and dynamism that slows down development.
Your top issue in the runtime game loop is always with concurrency and synchronization logic - e.g. A spawns before B, if A's hitbox overlaps with B, is the first frame that a collision event occurs the frame of spawning or one frame after? That's the kind of issue that is hard to catch, occurs not often, and often has some kind of catastrophic impact if handled wrongly. But the actual effect of the event is usually a one-liner like "set a stun timer" - there is nothing to test with respect to the event itself! The perceived behavior is intimately coupled to when its processing occurs and when the effects are "felt" elsewhere in the loop - everything's tied to some kind of clock, whether it's the CPU clock, the rendered frame, turn-taking, or an abstracted timer. These kinds of bugs are a matter of bad specification, rather than bad implementation, so they resist automated testing mightily.
The most straightforward solution is, failing pure functions, to write more inline code (there is a John Carmack posting on inline code that I often use as a reference point). Enforce a static order of events as often as possible. Then debugging is always a matter of "does A happen before B?" It's there in the source code, and you don't need tooling to spot the issue.
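"Enforce a static order" amounts to writing the frame as a fixed sequence of phases rather than a pile of event subscriptions. A sketch of the shape (all function and field names hypothetical):

```python
# A statically-ordered game loop: every frame runs the same phases in
# the same source order, so "does A happen before B?" is answered by
# reading the code, not by tracing an event dispatcher.

def update_frame(world):
    spawn_pending(world)       # new entities exist from this frame on
    integrate_movement(world)  # positions advance before collision tests
    detect_collisions(world)   # sees this frame's spawns, by construction
    apply_effects(world)       # one-liners like "set a stun timer"
    expire_timers(world)

def spawn_pending(world):
    world["entities"].extend(world.pop("pending", []))
    world["pending"] = []

def integrate_movement(world):
    for e in world["entities"]:
        e["x"] += e.get("vx", 0)

def detect_collisions(world):
    es = world["entities"]
    world["collisions"] = [(a["id"], b["id"])
                           for i, a in enumerate(es) for b in es[i + 1:]
                           if abs(a["x"] - b["x"]) < 1]

def apply_effects(world):
    for a, b in world["collisions"]:
        for e in world["entities"]:
            if e["id"] in (a, b):
                e["stun"] = 10     # the "one-liner" effect

def expire_timers(world):
    for e in world["entities"]:
        if e.get("stun", 0) > 0:
            e["stun"] -= 1
```

The spawn-versus-collision ambiguity from the earlier example disappears: whether a spawn collides on its first frame is decided by the position of `spawn_pending` in the list, visible in one screenful of source.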
The other part of this is, how do you load and initialize the scene? And that's a data problem that does call for more complex dependency management - but again, most games will aim to solve it statically in the build process of the game's assets, and reduce the amount of game state being serialized to save games, reducing the complexity surface of everything related to saves (versioning, corruption, etc). With a roguelike there is more of an impetus to build a lot of dynamic assets (dungeon maps, item placements, etc.) which leads to a larger serialization footprint. But ultimately the focus of all of this is on getting the data to a place where you can bring it back up and run queries on it, and that's the kind of thing where you could theoretically use SQLite and have a very flexible runtime data model with a robust query system - but fully exploiting it wouldn't have the level of performance that's expected for a game.
Now, where can your system make sense? Where the game loop is actually dynamic in its function - i.e. modding APIs. But this tends to be a thing you approach gradually and grudgingly, because modders aren't any better at solving concurrency bugs and they are less incentivized to play nice with other mods, so they will always default to hacking in something that stomps the state, creating intermittent race conditions. So in practice you are likely to just have specific feature points where an API can exist (e.g. add a new "on hit" behavior that conditionally changes the one-liner), and those might impose some generalized concurrency logic.
The other thing that might help is to have a language that actually understands that you want to do this decoupling and has the tooling built in to do constraint logic programming and enforce the "musts" and "cannots" at source level. I don't know of a language that really addresses this well for the use case of game loops - it entails having a whole general-purpose language already and then also this other feature. Big project.
I've instead been taking the approach of developing "little languages" that compose well for certain kinds of features - e.g. instead of programming a finite state machine by hand for each type of NPC, devise a subcategory of state machines that I can describe as a one-liner, with chunks of fixed-function behavior and a bit of programmability. Instead of a universal graphics system, have various programmable painter systems that can manipulate cursors or selections to describe an image. The concurrency stays mostly static, but the little languages drive the dynamic behavior, and because they are small, they are easy to provide some tooling for.
Thanks for the detailed evaluation. I'll start by reiterating that the project is a typical tile-based roguelike, so some of the concerns you mention in the second paragraph don't apply. Everything runs sequentially and deterministically - though the actual order of execution may not be apparent from the code itself. I mitigate it to an extent by adding introspection features, like e.g. code that dumps PlantUML graphs showing the actual order of execution of event handlers, or their relationship with events (e.g. which handlers can send what subsequent events).
I'll also add that this is an experimental hobby project, used to explore various programming techniques and architecture ideas, so I don't care about most constraints under which commercial game studios operate.
> The perceived behavior is intimately coupled to when its processing occurs and when the effects are "felt" elsewhere in the loop - everything's tied to some kind of clock, whether it's the CPU clock, the rendered frame, turn-taking, or an abstracted timer. These kinds of bugs are a matter of bad specification, rather than bad implementation, so they resist automated testing mightily.
Since day one of the project, the core feature was to be able to run headless automated gameplay tests. That is, input and output are isolated by design. Every "game feature" (GF) I develop comes with automated tests; each such test starts up a minimal game core with fake (or null) input and output, the GF under test, and all GFs on which it depends, and then executes faked scenarios. So far, at least for minor things, it works out OK. I expect I might hit a wall when there are enough interacting GFs that I won't be able to correctly map desired scenarios to actual event execution orders. We'll see what happens when I reach that point.
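A headless gameplay test in that style boils down to: assemble a minimal core, substitute null I/O, feed a scripted scenario, assert on the resulting state. A rough sketch of the shape (the class and feature names are illustrative, not the project's actual API):

```python
# Shape of a headless gameplay test: a minimal game core with null
# output, scripted input, and assertions on resulting state.
# GameCore / movement_feature are invented names for illustration.

class NullOutput:
    def draw(self, *args, **kwargs):
        pass                               # discard all rendering

class ScriptedInput:
    def __init__(self, commands):
        self.commands = iter(commands)
    def next_command(self):
        return next(self.commands, None)   # None ends the session

class GameCore:
    def __init__(self, inp, out, features):
        self.inp, self.out = inp, out
        self.features = features           # the "game features" under test
        self.state = {"x": 0, "y": 0}
    def run(self):
        while (cmd := self.inp.next_command()) is not None:
            for feature in self.features:
                feature(self.state, cmd)
            self.out.draw(self.state)

def movement_feature(state, cmd):
    dx, dy = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}[cmd]
    state["x"] += dx
    state["y"] += dy

core = GameCore(ScriptedInput(["E", "E", "S"]), NullOutput(), [movement_feature])
core.run()
assert core.state == {"x": 2, "y": 1}      # the scenario played out as scripted
```

Because input and output are swappable objects, the same core runs under a real terminal in the game and under fakes in the test suite.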
> that's the kind of thing where you could theoretically use SQLite and have a very flexible runtime data model with a robust query system - but fully exploiting it wouldn't have the level of performance that's expected for a game.
Funny you should mention that.
The other big weird thing about this project is that it uses SQLite for runtime game state. That is, entities are database rows, components are database tables, and the canonical gameplay state at any given point is stored in an in-memory SQLite database. This makes saving/loading a non-issue - I just use SQLite's Backup API to dump the game state to disk, and then read it back.
Performance-wise, I tested this approach extensively up front, by timing artificial reads and writes in expected patterns, including simulating a situation in which I pull map and entity data in a given range to render it on screen. SQLite turned out to be much faster than I expected. On my machine, I could easily get 60FPS out of that with minimal optimization work - but it did consume most of the frame time. Given that I'm writing an ASCII-style, turn(ish) roguelike, I don't actually need to query all that data 60 times per second, so this is quite acceptable performance - but I wouldn't try that with a real-time game.
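The entities-as-rows, components-as-tables arrangement, plus save-by-backup, can be sketched with the stdlib `sqlite3` module. The schema below is invented for illustration; `Connection.backup()` is the real stdlib API (Python 3.7+) wrapping SQLite's Backup API.

```python
import sqlite3

# Entities as rows, components as tables, canonical game state in an
# in-memory SQLite database. The schema here is illustrative only.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE entity   (id INTEGER PRIMARY KEY);
    CREATE TABLE position (entity_id INTEGER, x INTEGER, y INTEGER);
    CREATE TABLE health   (entity_id INTEGER, hp INTEGER);
""")
db.execute("INSERT INTO entity VALUES (1)")
db.execute("INSERT INTO position VALUES (1, 10, 4)")
db.execute("INSERT INTO health VALUES (1, 30)")

# The "render" query: entities with positions inside an 80x24 viewport.
visible = db.execute("""
    SELECT e.id, p.x, p.y FROM entity e
    JOIN position p ON p.entity_id = e.id
    WHERE p.x BETWEEN 0 AND 79 AND p.y BETWEEN 0 AND 23
""").fetchall()

# Saving: SQLite's backup API copies the whole in-memory database.
# (Connect to a file path instead of ":memory:" for an actual save file.)
save = sqlite3.connect(":memory:")
db.backup(save)
```

Loading a save is the same call in reverse: back up the on-disk database into a fresh in-memory connection.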
> The other thing that might help is to have a language that actually understands that you want to do this decoupling and has the tooling built in to do constraint logic programming and enforce the "musts" and "cannots" at source level. I don't know of a language that really addresses this well for the use case of game loops - it entails having a whole general-purpose language already and then also this other feature. Big project.
Or a Lisp project. While I currently do constraint resolution at runtime, it's not hard to move it to compile time. I just didn't bother with it yet. Nice thing about Common Lisp is that the distinction between "compilation/loading" and "runtime" is somewhat arbitrary - any code I can execute in the latter, I can execute in the former. If I have a function that resolves constraints on some data structure and returns a sequence, and that data structure can be completely known at compile time, it's trivial to have the function execute during compilation instead.
> I've been taking the approach instead of aiming to develop "little languages" that compose well for certain kinds of features
I'm interested in learning more about the languages you developed - e.g. how your FSMs are encoded, and what that "programmable painter system" looks like. In my project, I do little languages too (in fact, the aforementioned "game features" are a DSL themselves) - Lisp makes it very easy to just create new DSLs on the fly, and to some extent they inherit the tooling used to power the "host" language.
Sounds like you may be getting close to an ideal result, at least for this project! :) Nice on the use of SQLite - I agree that it's right in the ballpark of usability if you're just occasionally editing or doing simple turn-taking.
When you create gameplay tests, one of the major limitations is in testing data. Many games end up with "playground" levels that validate the major game mechanics because they have no easier way of specifying what is, in essence, a data bug like "jump height is too short to cross gap". Now, of course you can engineer some kind of test, but it starts to become either a reiteration of the data (useless) or an AI programming problem that could be inverted into "give me the set of values that have solutions fitting these constraints" (which then isn't really a "test" but a redefinition of the medium, in the same way that a procedural level is a "solution" for a valid level).
It's this latter point that forms the basis of many of the "little languages". If you hardcode the constraints, then more of the data resides in a sweet spot by default and the runtime is dealing with less generality, so it also becomes easier to validate. One of my favorite examples of this is the light style language in Quake 1: https://quakewiki.org/wiki/lightstyle
It's just a short character string that sequences some brightness changes in a linear scale at a fixed rate. So it's "data," but it's not data encoded in something bulky like a bunch of floating point values. It's of precisely the granularity demanded by the problem, and much easier to edit as a result.
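A decoder for that format is a few lines. The scaling below follows the original engine's convention, where each letter 'a'..'z' maps to a brightness level of (char - 'a') * 22, so 'a' is black, 'm' (264) is normal, and 'z' is roughly double-bright; the sample string is one of Quake's stock flicker styles.

```python
# Decode a Quake-style lightstyle string: each character 'a'..'z' is a
# brightness level, advanced at 10 characters per second, looping.
# Level = (char - 'a') * 22: 'a' = 0 (black), 'm' = 264 (normal),
# 'z' = 550 (double-bright).

FLICKER = "mmnmmommommnonmmonqnmmo"   # a torch-flicker style from Quake

def brightness(style: str, time_seconds: float) -> int:
    index = int(time_seconds * 10) % len(style)
    return (ord(style[index]) - ord("a")) * 22

steady = brightness("m", 0.0)          # a one-character style: constant light
flick = brightness(FLICKER, 0.55)      # sampled mid-flicker
```

Note how the entire animation format is the string itself: no floats, no keyframe structs, and an artist can "read" the light by eye.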
A short step up from that is something like MML: https://en.wikipedia.org/wiki/Music_Macro_Language - now there is a mostly-trivial parsing step involved, but again, it's "to the point" - it assumes features around scale and rhythm that allow it to be compact. You can actually do better than MML by encoding an assumption of "playing in key" and "key change" - then you can barf nearly any sequence of scale degrees into the keyboard and it'll be inoffensive, if not great music. Likewise, you could define rhythm in terms of rhythmic textures over time - sparse, held, arpeggiated, etc. - and so not really have to define the music note by note, making it easy to add new arrangements.
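The "playing in key" idea amounts to storing scale degrees rather than absolute pitches; a key change is then just swapping the root or scale. A minimal sketch:

```python
# Notes stored as scale degrees, not absolute pitches: staying "in key"
# is guaranteed by construction, and a key change is one parameter.
MAJOR = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of the 7 degrees

def degrees_to_midi(degrees, root=60, scale=MAJOR):
    """Map 0-based scale degrees (any integers, octaves wrap) to MIDI notes."""
    notes = []
    for d in degrees:
        octave, step = divmod(d, len(scale))
        notes.append(root + octave * 12 + scale[step])
    return notes

# Mash nearly any sequence of degrees into the keyboard and it stays in key:
riff = [0, 4, 2, 5, 7, 4, 1, 0]
in_c = degrees_to_midi(riff, root=60)   # C major
in_g = degrees_to_midi(riff, root=67)   # same riff after a key change to G
```

The rhythmic-texture idea would layer on the same way: a coarse symbol per beat (sparse, held, arpeggiated) expanded into note timings by another small decoder.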
With AI, a similar thing can apply - define a tighter structure and the simpler thing falls out. A lot of game AI FSMs will follow a general pattern of "run this sequenced behavior unless something of a higher priority interrupts it". So encode the sequence, then hardcode the interruption modes, then figure out if they need to be parameterized into e.g. multiple sequences, if they need to retain a memory scratchpad and resume, etc. A lot of the headache of generalizing AI is in discovering needs for new scratchpads, if just to do something like a cooldown timer on a behavior or to retain a target destination. It means that your memory allocation per entity is dependent on how smart they have to be, which depends on the AI's program. It's not so bad if you are in something as dynamic as a Lisp, but problematic in the typical usages of ECS where part of the point is to systematize memory allocation.
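That "sequenced behavior unless interrupted" pattern can be made a one-liner description. The syntax below ("patrol:3 > idle:2") is invented for illustration; the timer field is the kind of per-entity "scratchpad" the paragraph above describes.

```python
# A "one-liner" NPC behavior language: a default sequence of timed
# steps, plus a hardcoded interruption mode checked before each tick.
# The spec syntax is invented for illustration.

def parse_behavior(spec):
    steps = []
    for chunk in spec.split(">"):
        name, _, duration = chunk.strip().partition(":")
        steps.append((name, int(duration)))
    return steps

class NPC:
    def __init__(self, spec):
        self.sequence = parse_behavior(spec)
        self.index = 0
        self.timer = self.sequence[0][1]   # the "memory scratchpad"

    def tick(self, world):
        if world.get("player_visible"):    # hardcoded interrupt, top priority
            return "attack"
        state, _ = self.sequence[self.index]
        self.timer -= 1
        if self.timer <= 0:                # advance the sequence, looping
            self.index = (self.index + 1) % len(self.sequence)
            self.timer = self.sequence[self.index][1]
        return state

npc = NPC("patrol:3 > idle:2")
states = [npc.tick({}) for _ in range(5)]
```

Everything a given NPC can remember is visible in `__init__`, so the per-entity allocation question the paragraph raises is answered by the spec string itself.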
With painting what you're looking for is a structuring metaphor for classes of images. Most systems of illustration have structuring metaphors of some kind specifically for defining proportions - they start with simple ratios and primitive shapes, and then use those as the construction lines for more detailed elements which subdivide the shapes again with another set of ratios. This is the conceptual basis of the common "6-8 heads of height" tip used in figure drawing - and there are systems of figure drawing which get really specific about what shapes to draw and how.
If I encode such a system, I therefore have a method of automatic illustration that starts not with the actual "drawing" of anything, but with a proportion specification creating construction lines, which are then an input to a styling system that defines how to connect the lines or superimpose other shapes. Something I've been experimenting with to get those lines is a system that works by interpolation of coordinate transforms that aggregate a Cartesian and polar system together - e.g. I want to say "interpolate along this Cartesian grid, after it's been rotated 45 degrees". It can also perform interpolation between two entirely different coordinate systems (e.g. the same grid at two different scales).
I haven't touched it in a while, but it generates interesting abstract animations, and I have a vision for turning that into a system for specifying character mannequins, textures, etc. Right now it's too complex to be a good one-liner system, but I could get there by building tighter abstractions on it in the same way as the music system.
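A toy version of that coordinate-transform interpolation, assuming a simple linear blend between the Cartesian and polar readings of the same grid point (the real system is surely richer than this):

```python
import math

# Interpolate a grid point between two coordinate systems: a Cartesian
# reading of (u, v) versus a polar reading of the same (u, v), with u
# treated as angle and v as radius. At t=0 you get the straight grid,
# at t=1 concentric rings; in between, a morph.
def blend_point(u, v, t):
    cart = (u, v)
    polar = (v * math.cos(u), v * math.sin(u))
    return (cart[0] * (1 - t) + polar[0] * t,
            cart[1] * (1 - t) + polar[1] * t)

# The rotated-Cartesian variant: "interpolate along this grid, after
# it's been rotated 45 degrees".
def rotated(u, v, radians=math.pi / 4):
    return (u * math.cos(radians) - v * math.sin(radians),
            u * math.sin(radians) + v * math.cos(radians))
```

Sweeping `t` over time is what produces the morphing-grid animations; construction lines fall out as the images of grid lines under the blended transform.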
My main thing this year has been a binary format that lets me break away from text encodings as the base medium, and instead have more precise, richer data types as the base cell type. This has gone through a lot of iteration to test various things I might want to encode and forms I could encode them in.
The key thing I've hit on is to encode with a lot of "slack" in the system - each "cell" of data is 16 bytes; half of that is a header that contains information about how to render it, its ID in a listing of user-named types, bitflags defined by the type, a "feature" value (an enumeration defined by the type), and a version field which could be used for various editing features. The other half is a value, which could be a 64-bit value, 8 bytes, a string fragment, etc. - the rendering-information field indicates what it is in those general terms, but the definite meaning is named by the user type.
The goal is to use this as a groundwork to define the little languages further - rather than relying on "just text" and sophisticated parsing, the parse is trivialized by being able to define richer symbols - and then I can provide more sophisticated editing and visualization more easily. Of course, I'm placing a bet on either having a general-purpose editor for it that's worthwhile, or being able to define custom editors that trivialize editing, neither of which might pan out; there's a case for either "just text" or "just spreadsheets" still beating my system. But I'd like to try it, since I think this way of structuring the bits is likely to be more sustainable in the long run.
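A 16-byte cell of that general shape, an 8-byte header plus an 8-byte value, packs naturally with `struct`. The field widths below are a plausible guess at the split described (render kind, user type ID, flags, feature, version), not the author's actual layout:

```python
import struct

# A 16-byte "cell": 8 header bytes + 8 value bytes. The header split
# below is a guess at a plausible layout, not the author's real format:
# render kind (1B), user type id (2B), flags (1B), feature (2B),
# version (2B), then an 8-byte value whose meaning the type defines.
HEADER = struct.Struct("<BHBHH")   # 1+2+1+2+2 = 8 bytes, little-endian
CELL_SIZE = 16

def pack_cell(render_kind, type_id, flags, feature, version, value: bytes):
    assert len(value) == 8
    return HEADER.pack(render_kind, type_id, flags, feature, version) + value

def unpack_cell(cell: bytes):
    return HEADER.unpack(cell[:8]), cell[8:]

# A cell holding a 64-bit signed integer as its value:
cell = pack_cell(render_kind=1, type_id=42, flags=0b0001, feature=3,
                 version=0, value=struct.pack("<q", -7))
assert len(cell) == CELL_SIZE
```

Fixed-size cells are what make the "slack" cheap: an editor can seek, display, and rewrite any cell without reparsing the stream around it.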
I don't think I can respond properly and ask the many more questions I have in a HN thread that's already this aged, so if you'd like to talk more, feel free to hit me up (my contact details are in my profile).
Implement a MIDI 1.0 sequencer and get it to play back SMF files with a sine wave synth - you can have it output a WAV file, or learn an API to do realtime rendering. It's not a large spec, and there are lots of old documents on how the protocol functions in practice. Once you start getting it working you'll get results instantly (lots of SMF files around to test with), but you'll want more features and better synthesis; complications start to arise, and you will then pick up a lot of knowledge by doing.
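A first stepping stone for that project, rendering a single note as a sine wave into a WAV file, fits in a few lines of stdlib Python (`wave` and `struct`; no SMF parsing yet - the sequencer would feed note/duration pairs into something like this):

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def sine_note(freq_hz, duration_s, amplitude=0.5):
    """Render one note as 16-bit mono PCM samples."""
    n = int(SAMPLE_RATE * duration_s)
    return b"".join(
        struct.pack("<h", int(amplitude * 32767 *
                              math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)))
        for i in range(n))

def midi_note_to_hz(note):
    return 440.0 * 2 ** ((note - 69) / 12)   # A4 = MIDI note 69 = 440 Hz

def write_wav(path, pcm):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        w.writeframes(pcm)

# Middle C (MIDI note 60) for half a second:
write_wav("note.wav", sine_note(midi_note_to_hz(60), 0.5))
```

From here the "complications" arrive on schedule: mixing overlapping notes, tempo-map handling for SMF delta times, and click-free note envelopes.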
"Cash and carry" grocery outlets were a 20th century innovation. [0] Before that, the norm was to have a line of credit with the business, and in many cases, to accept their terms for delivery. Cash transactions anonymize, since settlement is done at the counter. You don't have to assess the buyer, you just need to verify the bills and change are real.
However, that isn't the entire story. While there were businesses using cash before this, they faced difficulties with accounting, supply logistics, and other elements that made it hard to conceive of something like a "supermarket", carrying a vast variety and quantity of goods on a daily basis. So then we have to look at all the pieces that fell into place to make it possible.
The automobile decentralized access to mobility, making carry-away a real possibility for more people, and thus making it possible to apply cash-and-carry in more places. Supporting elements like cash registers and refrigeration were becoming mature enough to support new forms of retail and allow more parts of the transaction to be delegated to local outlets and low-wage employees. The inter-war years really saw a whole set of technological innovations that were used in combinations to propel social changes and different categories of business (e.g. fast food), many of them decentralized in some respects but centralized in others - supermarket chains, as opposed to local markets.
These are the kinds of changes that are hardest to assess in full; when you decentralize one thing, centralization is "squeezed" into other parts of the economy, it seems. The obvious example for this phenomenon is Amazon, leveraging an apparently decentralizing mechanism (online retail - premised on an internet with sufficient bandwidth and security to list goods and take payments) into becoming the world's largest retailer. So it's centralized on one axis, but decentralized on others - a buyer no longer has to go to a particular physical location to purchase something, when all of it can be delivered to the doorstep.
1. Towns revise their taxation and zoning laws so that more classes of business are permitted in residences. They also start issuing more forms of local credit (the technical means to do so are only getting better), excluding big-box participation and restarting the cycle of capital accumulation locally.
2. Costs are lower and incentives are now aligned for more small businesses to survive in marginal areas.
3. Big-box stores increasingly become commodified and unbundled, themselves; the shift from Main Street to Wal-Mart to Amazon is one of the warehouse turning into a store and then back into a warehouse. The services of shipping logistics and delivery become less of a centralized process. Now the local businesses are using the big-box to their benefit.
The reason why Wal-Mart succeeds is ultimately premised on policies that let capital centralize itself according to a national and global framework. But that's only one way of "seeing" the economy, since following that policy, as we know, creates a mix of expensive star cities and dying no-hope towns. It's improbable that the future will simply be a restatement of the post-1970 trends, given what we know about history - something will change.
Use value is easy to assign to an NFT post-facto. That hasn't really been done in the current market (which is, of course, in the midst of a bubble), but:
* Tokens can become tickets to events
* Tokens can become options on commissioned work
* Tokens can become signs of membership
Because the token is guaranteed to be unique, and you can track ownership, there's a fluidity to this that lets you do away with contractual mechanisms. You can reuse the same tokens many times or announce that they will expire (for your use case).
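The mechanism described above - a unique, ownership-tracked token whose issuer honors a use value until they announce expiry - can be sketched in a few lines. This is a minimal illustration, not any real NFT standard or chain API; all names (`Issuer`, `mint`, `redeem`) are hypothetical:

```python
# Sketch: tokens carrying reusable or expiring "use values"
# (tickets, options on commissioned work, membership signs).
# Illustrative only; not ERC-721 or any real contract interface.
from dataclasses import dataclass


@dataclass
class Token:
    token_id: int
    owner: str
    expired: bool = False  # issuer may announce expiry for their use case


class Issuer:
    """The issuer controls minting and decides when to honor a token's use value."""

    def __init__(self):
        self.tokens: dict[int, Token] = {}
        self.next_id = 0

    def mint(self, owner: str) -> int:
        tid = self.next_id
        self.tokens[tid] = Token(tid, owner)
        self.next_id += 1
        return tid

    def transfer(self, tid: int, new_owner: str) -> None:
        # Ownership is tracked directly, so no contractual paperwork is needed.
        self.tokens[tid].owner = new_owner

    def redeem(self, tid: int, holder: str) -> bool:
        """Honor the use value; reusable any number of times until expired."""
        t = self.tokens.get(tid)
        if t is None or t.expired or t.owner != holder:
            return False
        return True  # e.g. admit to the event, accept the commission

    def expire(self, tid: int) -> None:
        self.tokens[tid].expired = True
```

The point of the sketch is that reuse and expiry are the issuer's announcement, not a contract term: the same token can be redeemed repeatedly until `expire` is called, and transfer moves the use value along with ownership.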
Edit: And platforms can't really own it if it's on a public chain, too. You just copy the chain (see: BinancePunks copying CryptoPunks). So there's that.
Speculation enables non-coercive frameworks. If I say a token has a use value that I will use to supply goods and services, you may want to acquire that token. If I can also control issuance and supply of that token, then, short of direct threats of violence or enslavement, you must negotiate with me or with a marketplace.
Most of the ill in the world has something to do with gatekeeping replacing speculative activity.