I don't like the Netflix CGI-slop filter look it gives everything. But that's part of a broader trend in TV and movies that I just can't stand.
I do think this will eventually be a major part of the graphics pipeline, but I hope it will be limited and masked to things like hair, which is almost impossible to get right in real-time rendering.
Setting copyright aside, I don't believe any of this is copied, which I think is the argument that actually matters.
I downloaded both 6.0 and 7.0 and based on only a light comparison of a few key files, nothing would suggest to me that 7.0 was copied from 6.0, especially for a 41x faster implementation.
It is a lot more organized and readable in my amateur opinion, and the code is about 1/10th the size.
Small local models are the only thing that still have that magic feeling to me. While large models are still useful and impressive, it makes more sense that they are happening on a giant super computer in a datacenter somewhere. But all the intelligence and capability that can run on my mid level gaming PC is astonishing to me.
Anonymous account unmasking represents a new threat to anonymity.
Not just this LLM-based technique, but the earlier text-similarity one too.
But I think both would generally be easy to counter in the same way:
Use an LLM or heuristics to pose as someone else.
Not only do you erase your traces, you add false positives into the system, which reduces the overall effectiveness of these techniques in the future.
A bit of poisoning the well.
I hope an easy-to-use tool, maybe with a small local LLM, eventually makes this simple enough that any future deanonymization attack would be too untrustworthy to rely on.
I am curious how you would algorithmically find the optimal solution to this kind of problem on much bigger grids.
I wanted to do some seed finding in Factorio for this exact problem using the generated map images, but never found a good solution that was fast enough.
The site uses Answer Set Programming with the Clingo engine to compute optimal solutions for smaller grids. Finding optimal solutions for much bigger grids is probably NP-hard.
Note that traditional SAT and SMT solvers are quite inefficient at computing flood-fills.
The ASP specifications it uses to compute optimal solutions are surprisingly short and readable, and look like:
#const budget=11.
horse(4,4).
cell(0,0).
boundary(0,0).
cell(0,1).
boundary(0,1).
% ...truncated for brevity...
cell(3,1).
water(3,1).
% ...
% Adjacent cells (4-way connectivity)
adj(R,C, R+1,C) :- cell(R,C), cell(R+1,C).
adj(R,C, R-1,C) :- cell(R,C), cell(R-1,C).
adj(R,C, R,C+1) :- cell(R,C), cell(R,C+1).
adj(R,C, R,C-1) :- cell(R,C), cell(R,C-1).
% Walkable = not water
walkable(R,C) :- cell(R,C), not water(R,C).
% Choice: place wall on any walkable cell except horse and cherries
{ wall(R,C) } :- walkable(R,C), not horse(R,C), not cherry(R,C).
% Budget constraint (native counting - no bit-blasting!)
:- #count { R,C : wall(R,C) } > budget.
% Reachability from horse (z = enclosed/reachable cells)
z(R,C) :- horse(R,C).
z(R2,C2) :- z(R1,C1), adj(R1,C1, R2,C2), walkable(R2,C2), not wall(R2,C2).
% Horse cannot reach boundary (would escape)
:- z(R,C), boundary(R,C).
% Maximize enclosed area (cherries worth +3 bonus = 4 total)
#maximize { 4,R,C : z(R,C), cherry(R,C) ; 1,R,C : z(R,C), not cherry(R,C) }.
% Only output wall positions
#show wall/2.
I'm over 35 years of age. I have 15+ years of programming experience, and I generally consider myself someone with a good breadth of tech knowledge. Yet this is the first time in my life I've heard of ASP. And gosh, I was completely blown away as I read more about it and went through some examples (https://github.com/domoritz/clingo-wasm/blob/main/examples/e...)
Therefore, like a good little llm bitch that I have become recently, I straight away went to chatgpt/sonnet/gemini and asked them to compile me a list of more such "whatever this is known as". And holy cow!! This is a whole new world.
My ask to the HN community: any good book recommendations related to "such stuff"? Not the research kind, as I don't have enough brain cells for it, but easier, more practical ones?
Things I don't like: it's denser, it doesn't use Clingo examples (mostly math-style examples, so you kind of have to translate them in your head), and while the proofs of how grounding works are interesting, the explanations are short and don't always build the intuition I want.
I still think this is the authoritative reference.
The "how to build your own ASP system" paper is a good breakdown of how to integrate ASP into other projects:
The pre-machine-learning formulations of AI focused on symbolic reasoning through the dual problems of search and logic. Many problems can be reduced to enumerating legal steps, and SAT/SMT/ASP and related systems can churn through those in a highly optimized, generic manner.
> algorithmically find the optimal solution for this kind of problem for much bigger grids.
Great, now I've been double nerd-sniped - once for the thing itself and another for 'What would an optimiser for this look like? Graph cuts? SAT/SMT? [AC]SP?'
I'd bet it's NP-hard. The standard reduction to a flow problem only tells you if a cut exists (by min-cut max-flow duality), but here we want the cut of size at most N that maximizes enclosed area.
The Leetcode version of this is "find articulation points", which is just a DFS, but it's less general than what is presented here.
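For reference, that articulation-points pass is a single DFS with Tarjan's low-link trick. A minimal sketch, with the adjacency-dict encoding and names being my own illustration rather than anything from the site:

```python
def articulation_points(graph):
    """Return the cut vertices of an undirected graph given as an
    adjacency dict {node: [neighbors]}."""
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v in disc:
                # back edge: u can reach an already-visited ancestor
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # non-root u is a cut vertex if the subtree under v
                # cannot climb above u without going through u
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        # the DFS root is a cut vertex iff it has 2+ DFS children
        if parent is None and children >= 2:
            cuts.add(u)

    for node in graph:
        if node not in disc:
            dfs(node, None)
    return cuts

# Path a-b-c: removing b disconnects a from c.
print(articulation_points({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))  # -> {'b'}
```

On a large grid you would want an iterative DFS to avoid Python's recursion limit, but the logic is the same.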
I think it's NP-hard, maybe from Sparsest Cut. But you could probably find the min cut and then iterate by adding capacity on edges in the min cut until you find a cut of the right size (at least if the desired cut size is close to the min-cut size).
It's NP-hard from Minimum s–t Cut with at least k Vertices. That's the edge version, but since the grid graph is 4-regular(-ish), the problem is trivially convertible to the vertex version.
That conclusion may be too hasty. If min cut with at least k vertices is NP-hard on arbitrary graphs, that doesn't automatically mean it applies to a 2D grid too.
Is NP-hardness even proven for planar graphs? Those are closer to the 2D grid, but still slightly more general. All I could find was a reduction to densest k-subgraph, but Wikipedia tells me that whether that problem is NP-hard for planar graphs is an open question.
To be clear, I would be very surprised if the problem turned out _not_ to be NP-hard, but there is no trivial equivalence to min cut in general graphs that shows it is.
I agree, that is a good point. Although it is (induced) subgraphs of 2D grids, which gets you a bit closer to the planar case (albeit with bounded degree).
It might be polytime on planar graphs, but that would be surprising.
Constraint programming seems like a fitting approach. The inputs would be the number of walls and the locations of the lakes.
The decision variables would be the positions of the walls.
To encode the horse being enclosed, add auxiliary variables for whether the horse can reach each square. Constraints for reachability, plus the requirement that edge cells cannot be reached, should ensure correctness.
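On a toy grid you can sanity-check such a model by brute force: enumerate wall placements up to the budget, flood fill from the horse, and reject any placement where the fill reaches the edge. Everything below (names, grid encoding) is my own sketch, not anyone's actual solver:

```python
from itertools import combinations

def solve(rows, cols, horse, water, budget):
    """Brute-force the best enclosure on a tiny grid.
    Returns (enclosed_area, walls); area -1 means no enclosure exists."""
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    candidates = [p for p in cells if p != horse and p not in water]
    best = (-1, None)
    for k in range(budget + 1):
        for walls in combinations(candidates, k):
            blocked = water | set(walls)
            seen, stack = {horse}, [horse]   # flood fill from the horse
            escaped = False
            while stack:
                r, c = stack.pop()
                if r in (0, rows - 1) or c in (0, cols - 1):
                    escaped = True           # reached the boundary
                    break
                for nb in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                    if nb not in blocked and nb not in seen:
                        seen.add(nb)
                        stack.append(nb)
            if not escaped and len(seen) > best[0]:
                best = (len(seen), walls)
    return best

# 4x4 grid, horse at (1,1), no water: six walls can fence in a
# two-cell pocket.
print(solve(4, 4, (1, 1), set(), 6)[0])  # -> 2
```

Obviously exponential, so it's only useful as ground truth for validating a real CP/ASP model on small instances.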
It was very easy to support the portal mechanic once the entire problem was mapped as a network-flow optimization: I could simply add the portal coordinates to the neighbor lists.
I don't believe this works in general. If you have a set of tiles that connect to neither the horse nor to an exit, they can still keep each other reachable in this formulation.
Yes, this is the major challenge with solving these with SAT. You can make your solver check and reject these horseless pockets (incrementally rejecting solutions with new clauses), which might be the easiest method, since you may need iteration for maximizing anyway (bare SAT doesn't do "maximize"). To correctly track the flood fill from the horse, you generally need a constraint like reachable(x,y,t) = (OR over neighbors (nx,ny) of reachable(nx,ny,t-1)) AND walkable(x,y), with reachable(x,y,0) = is_horse_cell, which adds O(N^2) additional variables per cell.
You can track flows and do maximization more precisely with ILP, but that often loses the advantages of conflict-driven clause learning.
Good point. I don't think the puzzles do this, and if they did, couldn't I just run a pre-solve pass over the puzzle that flood fills such horseless pockets with water?
It's not quite that easy. For the simplest example, look at https://enclose.horse/play/dlctud, where the naive solution will waste two walls to fence in the large area. Obviously, you can construct puzzles that have lots of these "bait" areas.
Like the other comment suggested, running a loop where you keep adding constraints that eliminate invalid solutions will probably work for any puzzle that a human would want to solve.
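The check step of such a loop is cheap: recompute the true region with a flood fill and diff it against what the solver claimed. A sketch, where the names and the 2x3 toy grid are made up for illustration, not any solver's API:

```python
def horseless_pocket(claimed, walls, horse, neighbors):
    """Return cells the solver claimed as enclosed that the horse
    cannot actually reach; these get banned in the next iteration."""
    seen, stack = {horse}, [horse]
    while stack:
        cur = stack.pop()
        for nxt in neighbors(cur):
            if nxt not in walls and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return claimed - seen

def grid_neighbors(cell, rows=2, cols=3):
    # 4-way neighbors on a 2x3 toy grid
    r, c = cell
    return [(nr, nc) for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1))
            if 0 <= nr < rows and 0 <= nc < cols]

# The faulty model claims (0,2) is in the horse's region, but the
# walls cut it off from the horse at (0,0).
claimed = {(0, 0), (0, 2)}
walls = {(0, 1), (1, 0), (1, 2)}
print(horseless_pocket(claimed, walls, (0, 0), grid_neighbors))  # -> {(0, 2)}
```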
However, I don't think you need time-indexed variables of the form
reachable(x,y,t) = reachable(nx,ny,t-1)
Enforcing connectivity through a single-commodity flow is IMO a better way to encode the flood fill (it also introduces additional variables, but is typically easier to solve with CP heuristics).
I've always wondered whether anybody has calculated how much of our global heating, if any at all, could be attributed to every electronic device, server, and engine outputting heat as a byproduct.
I have the feeling that particular energy output doesn't do much, really. For example, the plant in the image is about 700x400 m; multiply that area by the sun's peak output and you already get a potential input of about 280 MW, and this site almost triples that. The sun shines practically everywhere, though.
Humans produce about 20 TW globally at this time (per ChatGPT), while the sun delivers about 174,000 TW of energy to the Earth.
I guess you could argue that our waste heat does something, but I think the greenhouse gases that trap this enormous energy more effectively have a far bigger effect.
I think that works out to 0.01%? There's some hand-waving around solar radiation in the atmosphere vs. at the surface, and some double counting of what goes to solar power, but the number looks smaller than the variation in solar output over the solar cycle.
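For what it's worth, the arithmetic behind that figure, using the rough numbers from upthread:

```python
human_power_tw = 20.0        # global human power use (rough, per upthread)
solar_input_tw = 174_000.0   # solar radiation reaching Earth (rough)
print(f"{human_power_tw / solar_input_tw:.4%}")  # -> 0.0115%
```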
Negligible, almost invisible. Now, the emissions (CO2) from the gas/oil/coal energy generation that lets you run your device in the first place, those are very high.
It’s 0%. Solar radiation is about 1.4kW per square meter.
We use a fraction of the sun's total energy output each year; orders of magnitude more energy arrives as sunlight radiating onto the Earth than is rejected as heat from buildings with air conditioning.
I did a quick analysis and it actually matched the ~1.5 degree Celsius rise pretty accurately. It required a bunch of incorrect simplifying assumptions, but it was still interesting how close it comes.
I estimated energy production from all combustion and nuclear from the industrial revolution onwards, assumed that heat was dumped into the atmosphere evenly all at once, and calculated the temperature rise from the atmosphere's makeup. This ignores some of that heat sinking into the ground and ocean, and the increased thermal radiation out to space over that period. In general, heat flows from the ground and ocean into the atmosphere rather than the other way around, and the rise in thermal radiation isn't that large.
On the other hand, this isn't something the smart professionals ever talk about when discussing climate change, so I'm sure the napkin math coming so close to explaining the whole effect has to be missing something.
We use ~20 TW, while solar radiation is ~500 PW and just the heating from global warming alone is 460TW (that is, how much heat is being accumulated as increased Earth temperature).
Well, the math is correct; it's the methodology that has obvious flaws, some of which I pointed out. If you took all the energy released by humanity burning things since the industrial revolution and dumped it into a closed system consisting of just the atmosphere, it would rise by about 1.5 C.
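That closed-system calculation is just ΔT = E / (m_atm · c_p). The cumulative-energy figure is the big unknown; the 8 ZJ below is purely illustrative, not a sourced value:

```python
cumulative_energy_j = 8e21    # illustrative guess, NOT a sourced figure
atmosphere_mass_kg = 5.15e18  # total mass of Earth's atmosphere
cp_air = 1005.0               # specific heat of air, J/(kg*K)

delta_t = cumulative_energy_j / (atmosphere_mass_kg * cp_air)
print(f"{delta_t:.2f} K")  # -> 1.55 K
```

Every ZJ of assumed input moves the answer by about 0.2 K, which is why the simplifying assumptions matter so much here.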
The discussion thread (and original question) you are participating in is about heat being rejected to the atmosphere through vapor-compression refrigeration or evaporative cooling, not CO2 or emissions from combustion. Reread the top level comment.
The amount of heat rejected to the atmosphere from electronic devices is negligible.
Me personally, I use it to upload files (from my phone, or sometimes other random devices) to a folder I call "void". Anyone can upload to "void", but only I can read it. I also have some public pics, videos, and music files here and there. It's so convenient and easy to handle; honestly, in some ways a better alternative to other cloud providers, in my opinion.
Not sure if it's still an issue, but companies were buying popular web extensions and then auto-updating malware/spyware into them. I haven't heard much about this in a while, but I think Chrome still forces auto-updates for extensions, so I would expect this to be the biggest vector for scraping walled data now.