Yes! I have struggled with this concern for a while now.
I think that humans are the ultimate arbiters of quality for humans.
Nothing non-human can make that determination, because anything that is not human (an LLM, say) is at best just a really good model. Since all models are necessarily wrong by definition, they can never get it right all the time. When it comes to determining what's good for humans, only we can figure that out.
I also came back to ZAMM when wrestling with this question. There must be something there if we have all independently come to similar conclusions.
My Massachusetts home has both radiant floor heating (water pipes in the floor) and baseboard hot-water heating (in a separate part of the house). My son's New York home has radiators driven by hot water. I can't recall a home in the Northeast USA where the heating was vented air rather than circulating hot water.
At the risk of being labeled a kook or an idiot: I photographed drones flying over my neighborhood in a suburb of Boston a few weeks ago. This was about 6 am, and they were definitely drones, not regular aircraft. I assumed it was something flying out of Hanscom, or the city mapping streets. And yes, I took photos, not video, sorry.
It was an odd sight at that hour of the day. I wanted pictures as something to talk about with friends. I really wish I had been astute enough to take video.
Ya did better than I'd have, friend. I'm sure I'd have just been all like "oh lookie here, a drone flying around at 6am, unusual and cool!" and then gotten on with my day, ha!
For the experts in the house: would it be possible for this to wrap around, so the highest and lowest level join? A torus of GoLs? Can such a thing, assuming it exists, have a finite number of layers? Just curious, this is amazing.
At a very high level, this is simulating the Game of Life at the appropriate resolution, based on an algorithm whose input parameters are the zoom level and your location in the space, similar to a fractal pattern. So I'm not sure what you mean by a "torus" of GoLs.
Have you heard of the thought experiment where a 2D plane is finitely sized instead of infinitely sized, but as you travel within the 2D plane, if you get to one edge, you wrap around to the opposite edge? Even though its size is finite, you can pick a direction and go that way forever.
If you sit in 3D space and look at this finite 2D plane, then it looks like (say) a rectangle. That's displeasing because from inside the plane, it's continuous, but from the outside, it looks discontinuous.
One way to get rid of that annoyance is to map the plane onto a torus. Then it looks continuous from the outside. It's no longer flat, which is un-plane-like, but you can't have everything in life.
Anyway, the zoom level of the Game of Life here is an infinite number line. But what if it repeats and can be represented as something that wraps around? You could think of it as a line segment that you bend into a circle to join the two ends. Same concept as mapping the plane onto a torus, except 1D instead of 2D.
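To make the wrap-around concrete, here's a minimal sketch of the 2D case: an ordinary Game of Life on a finite grid whose opposite edges are glued together, which is exactly the torus topology. The grid size and the glider are just an example I picked; the wrap-around shift in np.roll is the whole trick.

```python
import numpy as np

def step_toroidal(grid: np.ndarray) -> np.ndarray:
    """One Game of Life step on a finite grid whose edges wrap around (a torus)."""
    # np.roll shifts with wrap-around, so cells on one edge see neighbors on the opposite edge.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Standard rules: a live cell survives with 2 or 3 neighbors; a dead cell is born with 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

# A glider on a small torus eventually comes back around to where it started.
grid = np.zeros((8, 8), dtype=np.uint8)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4 * 8):  # the glider moves one cell diagonally every 4 steps
    grid = step_toroidal(grid)
```

From inside the grid there is no edge at all, which is the 2D analogue of a zoom axis that cycles back on itself.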
Well... if we wrap around without meddling with time, it would be impossible: at the lowest level, cells switch much more frequently than at the highest level.
But if we allow ourselves to bend time, then probably... It would be like zooming out and going back in time. My guess is that it is possible, but I still cannot wrap my head around the infinite spatial dimensions of the field. We need infinity for that, because without it we hit the problem of different cell counts at the lowest and highest levels (which represent the same sequences of GoL states). I see no obvious obstacle to building such a "torus", but to be sure one needs to actually prove it.
I guess what they mean is that the "zoom" would be cyclic, e.g. the game zoomed to 1x would look the same as the game zoomed to 101x, zoomed to 2x would look like 102x, etc.
Totally different thing, but this makes me think of looping procedural animations, which are achieved by sampling noise on a circle (or walls of a cylinder in a 3d noise space).
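In case anyone wants to see the trick: here's a minimal sketch using only the standard library. The noise2 function is a cheap hand-rolled value-noise stand-in for a real Perlin/simplex noise; the point is just that sampling any smooth 2D field along a circle gives a 1D signal that ends exactly where it started, so the animation loops seamlessly.

```python
import math
import random

random.seed(0)
# Tiny stand-in for a real 2D noise function: random values on an integer
# lattice, smoothly interpolated in between.
_lattice = {(i, j): random.random() for i in range(-8, 9) for j in range(-8, 9)}

def _smooth(t):
    return t * t * (3 - 2 * t)  # smoothstep easing between lattice points

def noise2(x, y):
    x0, y0 = math.floor(x), math.floor(y)
    tx, ty = _smooth(x - x0), _smooth(y - y0)
    v00, v10 = _lattice[(x0, y0)], _lattice[(x0 + 1, y0)]
    v01, v11 = _lattice[(x0, y0 + 1)], _lattice[(x0 + 1, y0 + 1)]
    top = v00 + (v10 - v00) * tx
    bottom = v01 + (v11 - v01) * tx
    return top + (bottom - top) * ty

def looping_noise(t, radius=2.0):
    """Noise that is periodic in t with period 1: walk a circle in the 2D noise field."""
    angle = 2 * math.pi * t
    return noise2(radius * math.cos(angle), radius * math.sin(angle))

# Frame 0 and frame 60 land on the same point of the circle, so the loop is seamless.
frames = [looping_noise(i / 60) for i in range(60)]
assert abs(frames[0] - looping_noise(1.0)) < 1e-9
```

A bigger radius sweeps through more of the noise field per loop, so it acts like a "how busy is the animation" knob.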
To dispose of this briefly, the cardinality of computable functions is ℵ₀, and Life is a computable function. Although the parent question is underspecified (some good guesses as to what specifically is meant in sibling comments), no variation on that question could increase the ℵ number of the result.
I believe the comment above is talking about the game running as a simulation within another instance of the game. Each instance of the game has a state. Game G1 has state S1, game G2 has state S2, etc. I think they're asking if there's any S1 that can be chosen so that S2 = S1.
You might need 2 or more layers, so with whatever number of layers you need (if this is possible), you'd have some infinite sequence (moving through layers) like (S1, S1, S1, ...) or (S1, S2, S1, S2, ...) or (S1, S2, S3, S1, S2, S3, ...).
The length of the repeating pattern (subsequence) is going to be a positive integer, so I don't think real numbers are relevant, if that's the question.
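For what it's worth, detecting such a repeat is just ordinary cycle detection on an iterated map. The sketch below is a toy version that iterates plain time steps of a small wrap-around Life grid (a blinker, period 2); applying it across layers instead would need a way to read the coarser game's state out of the finer one, which is the genuinely hard part and is not implemented here.

```python
import numpy as np

def step(grid):
    """One Game of Life step on a small wrap-around (toroidal) grid."""
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(np.uint8)

def find_cycle(initial, advance, max_steps=10_000):
    """Iterate `advance` from `initial`; return (start, period) of the first repeated state."""
    seen = {}                      # state bytes -> index at which it first appeared
    state, i = initial, 0
    while i < max_steps:
        key = state.tobytes()
        if key in seen:
            return seen[key], i - seen[key]   # cycle start, cycle length (a positive integer)
        seen[key] = i
        state = advance(state)
        i += 1
    return None

# Example: a blinker repeats with period 2. The same bookkeeping would apply to a
# chain of layer states S1, S2, ... if you had a way to compute "one layer down".
grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 1:4] = 1
print(find_cycle(grid, step))      # -> (0, 2)
```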
Yes, but the question is whether there are repeating non-empty patterns. I don't think you've answered that? I don't know, but I think it's a fascinating question.
Does anyone know what the copyright status of LLM generated content is? That is, if I feed a NYT article into GPT4 and say, summarize this article, and then publish that summary, is there argument or precedent that says that is or is not copyright infringement? Asking for a friend.
Typically, if you ask a chatbot to "summarize" something, it will paraphrase the original closely enough that it would be considered plagiarism and copyright infringement. To avoid that, you need to distill the relevant ideas contained in the text and expound on them in a way that doesn't depend on how the text itself was expressed, structured, or organized. You would need to tell the model to do this over multiple steps, and then derive a rephrased article without looking at the original at all. (Which is not really possible if the article was in the AI's training set, as is the case here.)
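As a rough sketch of that multi-step idea (assuming the OpenAI Python client; the prompts and model name are placeholders, and none of this settles the legal question), separating "extract the ideas" from "write fresh prose" might look like:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4"    # placeholder model name

def two_pass_summary(article_text: str) -> str:
    # Pass 1: pull out the underlying facts/ideas as terse bullet points,
    # deliberately discarding the article's wording and structure.
    ideas = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": "List the factual claims and ideas in this text as terse, "
                       "reordered bullet points. Do not reuse its phrasing.\n\n" + article_text,
        }],
    ).choices[0].message.content

    # Pass 2: write a fresh summary from the bullet points only, without
    # showing the model the original text again.
    return client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": "Write a short, original summary based only on these notes:\n\n" + ideas,
        }],
    ).choices[0].message.content
```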
Or the filter could work the other way: failing to litigate against an AI to decelerate its progress, and risking augmenting the underlying society too much, too quickly.
Technically, you just send a request to OpenAI and they are the ones who feed it into GPT4. I'd argue this is irrelevant to your question, but the law works in mysterious ways, so perhaps it carries some importance.
You get it. It's about value. Keep your eye on that north star and you won't go wrong.
Whose value? How do I value it? Can I reconcile disparate values? Yep, those are the right questions.
For me, I read this and want to give a shout out to Zen and the Art of Motorcycle Maintenance, but that’s just me.
I enjoyed the read, thank you.