
>Jolani was propped up by Turkey not Israel

Probably accurate, but I think if Israel sincerely objected to Jolani's leadership in Syria, a state visit to the White House would not have happened.

Read into that what you will.


I know, I'm reading the GitHub page and was like.. what in the world? Is this real life?


Gemini 3 is great; I moved from GPT and haven't looked back. However, like many great models, I suspect it's expensive to run, and eventually Google will nerf it once it gains enough traction, whether by distilling, quantizing, or shrinking the context window, in order to stop bleeding money.

Here is a report (whether true or not) of it happening:

https://www.reddit.com/r/GeminiAI/comments/1q6ecwy/gemini_30...


While I don't use Gemini, I'm betting they'll end up being the cheapest in the future because Google is developing the entire stack, instead of relying on GPUs. I think that puts them in a much better position than other companies like OpenAI.

https://cloud.google.com/tpu


Yeah, that's textbook enshittification; it's inevitable.

Protip: if you are considering a Dell XPS laptop, consider the Dell Precision laptop workstation instead, which is the business version of the consumer-level XPS.

It also looks like the names are changing, and the business laptops are going with a Dell Pro (Essential/Premium/Plus/Max) naming convention.


I have the Precision 5690 (the 16-inch model) with an Ultra 7 processor and 4K touchscreen (2025 model). It is very heavy, but it's very powerful. My main gripe is that the battery life is very bad, and it has a 165-watt charger, which won't work on most planes. So if you fly a lot for work, this laptop will die on you unless you bring a lower-wattage charger. It also doesn't sleep properly: I often find it in my bag hours after closing it with the fans going at full blast. It should have a fourth USB port (like the smaller version!). Otherwise I have no complaints (other than about Windows 11!).

After using several Precisions at work, I now firmly believe that Dell does not know how to cool their workstations properly. They are all heavy, pretty bad at energy efficiency, and run extremely hot (I use my work machine laid belly-up in summer since the fans are always on). I'd take a ThinkPad or Mac any day over any Dell.

Power-hungry Intel chips and graphics cards are inconvenient in laptops when it comes to battery life and cooling. It is especially noticeable if you spend any time using an M-series MacBook Pro, where performance is the same or better but you get 16 hours of battery life. I prefer to use ThinkPads, but Apple just has a big technological advantage here that stands out in the UX department. I really hope competitors make advances quickly to offer similar UX in a more affordable package.

While I appreciate the build quality and ruggedness of the ThinkPads, I'd take the bigger trackpad and better screen of the XPS/Precision any day. Or maybe my employer screwed me by giving me a shitty ThinkPad SKU (it has a 1080p TN panel, ffs).

I just want a solid laptop that can be used with the lid closed. I want to set it up and never open the lid again. I guess I'll keep dreaming.

Yeah they should make a laptop where you can choose what display you want to use, and which keyboard and mouse for that matter. It could be made cheaper by ditching the screen and keyboard, and heck I wouldn’t even mind if it were a bit bigger or heavier since it’ll just sit on or under my desk. That sort of laptop would be amazing.

They have computers that are built into keyboards now. Maybe that will do the trick.

https://www.youtube.com/watch?v=J4yl2twJswM


No, my ideal laptop doesn't have a keyboard. I'm imagining a laptop where everything other than the CPU/GPU/RAM/basic storage/NIC is handled by peripherals, so I can just have one that sits in my office and plug stuff into it.

Now that I think of it, I'd also like laptops that come in standardized sizes so I could stack them on shelves and mostly just interact with them through SSH/terminal. I could imagine those laptops being super popular for compute/storage.


Sounds like you want a NUC PC or something similar with a built-in battery.

We used to call those 'desktops'...

I don't know who to credit, maybe it's Sergey, but the free Gemini (fast) is exceptional, and at this point I don't see how OpenAI can catch back up. It's not just capability: OpenAI has added so many policy guardrails that it hurts the user experience.

It's the worst thing ever. The amount of disrespect that robot shows you, when you talk the least bit weird or deviant, it just shows you a terrifying glimpse of a future that must be snuffed out immediately. I honestly think we wouldn't have half the people who so virulently hate AI if OpenAI hadn't designed ChatGPT to be this way. This isn't how people have normally reacted to next-generation level technologies being introduced in the past, like telephones, personal computers, Google Search, and iPhone. OpenAI has managed to turn something great into a true horror of horrors that's disturbed many of us to the foundation of our beings and elicited this powerful sentiment of rejection. It's humanity's duty that GPT should fall now so that better robots like Gemini can take its place.

It's called OPEN AI and started as a charity for humanitarian reasons. How could it possibly be bad?!

That's apparently how you pull the wool over the eyes of the world's smartest people. To be fair, something like it needed to happen, because the fear everyone had ten years ago of creating a product like ChatGPT wasn't entirely rational. However, the way OpenAI unblocked building it unfairly undermined the legitimacy of the open source movement by misappropriating its good name.

It's the best model pound for pound, but I find GPT 5.2 Thinking/Pro to be more useful for serious work when run with xhigh effort. I can get it to think for 20 minutes, but Gemini 3.0 Pro is like 2.5 minutes max. Obviously I lack full visibility because tok/s and token efficiency likely differs between them, but I take it as a proxy of how much compute they're giving us per inference, and it matches my subjective judgement of output quality. Maybe Google nerfs the reasoning effort in the Gemini subscription to save money and that's why I am experiencing this.

When ChatGPT takes 20 minutes to reason, is it actually spending all that time burning tokens, or does the bulk of the time go into 'scheduling' waits? If someone specifically selected xhigh reasoning, I am guessing it can be processed with a high batch count.

I'm curious, what types of prompts are you running that benefit from 10+ minutes of think time?

What's the quality difference between default ChatGPT and Thinking? Is it an extra 20% quality boost, or is the difference night and day?

I've often imagined it would be great to have some kind of Chrome extension or third-party tool to always run prompts in multiple thinking tiers, so you get an immediate response to read while you wait for the thinking models to think.


It's for planning system architecture when I want to get something good (along the criteria that I give it) rather than the first thing that runs.

I use Thinking and Pro. I don't use the default ChatGPT so can't comment on that. The difference between Thinking and Pro is modest but detectable. The 20 minute thinking times are with Pro, not with Thinking. But Pro only allows 60k tokens per prompt so I sometimes can't use it.

In the $200/month subscription they give you access to a "heavy thinking" tier for Thinking which increases test time compute by maybe 30% compared to what you get in Plus.


I recently bought into the $200 tier and was genuinely quite surprised at ChatGPT 5.2 Pro's ability for software architecture planning. If you give it ~60k tokens of your codebase and a thorough description of what you actually want to happen, it comes up with very good ideas. The biggest difference to me is how thorough it is. This is already something I noticed with the Codex high/xhigh models compared to Gemini 3 Pro and Opus 4.5, but GPT Pro is noticeably better still.

I guess it's not talked about as much because a lot fewer people have access to it, but after spending a bunch of time with Gemini 3 and Opus 4.5, I don't feel that OpenAI has lost the lead at all. The benchmarks tell a different story, but for my real-world use cases Codex and GPT Pro are still ahead: better at sticking to my intent, and fewer mistakes overall. It's slow, yes. But I can't write requirements as quickly as Opus can misunderstand them anyway.


> [...] I don't see how OpenAI can catch back up.

For a while people couldn't see how Google could catch up, either. Have a bit of imagination.

In any case, I welcome the renewed intense competition.


FWIW, my productivity tanks when my Claude allowance dries up in Antigravity. I don't get the hype for Gemini for coding at all; it just does random crap for me, if it doesn't throw itself into a loop immediately, which it did nearly every time I gave it yet another chance.

You must be using it to create bombs or something. I never ran into an issue that I would blame on policy guardrails.

Ozone (the source of those negative ions) comes with its own issues. If you are going to use an ionizer that produces ozone, it's best to run it when you're not going to be home for a while.

Ozone doesn't generate ions; ionizers produce ozone, and how much depends on the device.

He made the mistake of not simply creating a second LinkedIn account for this initiative without asking, and of not using Google Voice to make the outbound calls.

His concerns were reasonable; making it a discussion was not, unfortunately.


While I don't think the US has the authority to warrant the seizing of another country's oil tanker, the US may believe it has justification.

Accusation: Venezuela is using Nigeria as a means to launder sanctioned oil.

https://x.com/0x2719/status/1998867882365825299?s=20


In theory they gave the flag state a perfectly valid casus belli, but the flag state isn't in a position to take on the US Navy. It would be funny if the flag states or the owners tried to seize US-owned property in some involved jurisdiction as compensation.


Sanctioned by who? The president who thinks his tech companies shouldn't be subject to European laws when they operate in Europe believes completely separate countries have to abide by his rules when doing business?


Any US actions wrt Venezuela almost certainly have the backing of what the US (probably rightfully) considers to be the legitimate government of Venezuela.


Meaning Juan Guaido?


Domestic laws of a country do not constitute valid justification for seizing another country's vessels under international law.


> Domestic laws of a country do not constitute valid justification for seizing another country's vessels under international law

The great powers (China, Russia, and America) have each, at this point, explicitly rejected this principle. More broadly, international law does contain broad exemptions for piracy.


International law exempts piracy? That's somewhat contrary to my understanding, but fascinating if true.

But if we're using that as a justification, are we admitting the US has turned pirate then?


> International law exempts piracy

UNCLOS provides that “all states have universal jurisdiction on the high seas to seize pirate ships and aircraft, or a ship or aircraft taken by piracy and under the control of pirates, and arrest the persons and seize the property on board” [1].

> if we're using that as a justification, are we admitting the US has turned pirate then?

No, because the seizure was not “committed for private ends by the crew or the passengers of a private ship or a private aircraft” [2]. Under UNCLOS states can’t be pirates.

(Again, this is academic. China has been blowing off UNCLOS judgements in the South China Sea for years.)

[1] https://www.un.org/depts/los/piracy/piracy_legal_framework.h...

[2] https://www.un.org/depts/los/convention_agreements/texts/unc...


This seizure was absolutely legal under UNCLOS; the US unquestionably has valid justification under international law to seize this (and any other) stateless vessel.


Even if they want to launder sanctioned oil, that is up to those two other countries. The US has no right to militarily intervene.


A full-resolution, maximum-size JPEG XL image (1,073,741,823 × 1,073,741,824):

Uncompressed: 3.5–7 exabytes

Realistically compressed: tens to hundreds of petabytes

That's a seriously high-res image.
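
A back-of-envelope check of the uncompressed figure (a sketch in Python, assuming 3 bytes per pixel for 8-bit RGB and 6 for 16-bit RGB):

  px = 1_073_741_823 * 1_073_741_824
  for bpp in (3, 6):                  # bytes per pixel: 8-bit RGB vs 16-bit RGB
      print(f"{px * bpp / 1e18:.2f} EB")
  # -> 3.46 EB and 6.92 EB, i.e. the 3.5-7 exabyte range above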


At 600 DPI that's over a marathon in each dimension.

I do wonder if there are any DoS vectors that need to be considered if such a large image can be defined in a relatively small byte space.

I was going to work out how many A4 pages that would be to print, but Google's magic calculator that worked really well has been replaced by Gemini, which produces this trash:

    Number of A4 pages=0.0625 square meters per A4 page * 784 square miles   =13,200 A4 pages.
No Gemini, you can't equate meters and miles, even if they do both abbreviate to 'm' sometimes.
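
For what it's worth, the correct arithmetic is a few lines (a sketch assuming 600 DPI and ISO A4 at 210 x 297 mm):

  side_m = 1_073_741_823 / 600 * 0.0254   # pixels -> inches -> metres (~45.5 km)
  a4_m2 = 0.210 * 0.297                   # one A4 sheet in square metres
  print(f"{side_m ** 2 / a4_m2:.4g} pages")   # ~3.313e+10

which matches the GNU Units result below.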


"Google's magic calculator" was probably just a wrapper to GNU Units [0], which produces:

  $ units
  You have: (1073741823/(600/inch))**2 / A4paper  
  You want:  
         Definition: 3.312752e+10
Equivalent tools: Qalc, Numbat

0: https://news.ycombinator.com/item?id=36994418


It couldn't have been a wrapper: it understood a tiny, tiny fraction of the things that GNU Units does.


> I do wonder if there are any DoS vectors that need to be considered if such a large image can be defined in a relatively small byte space.

You can already DoS with SVG images. Usually the browser tab crashes before worse things happen. Most sites therefore do not allow SVG uploads, except GitHub, for some reason.


SVG is also just kind of annoying to deal with, because the image may or may not even have a size, and if it does, it can be specified in a bunch of different units. So it's a lot harder to get the size if you want to store it or use it anywhere in your code.


Wolfram Alpha is the better calculator for that sort of thing.


A better Gemini also works. Google Search seems to use the most minimal of Geminis, giving it a bad rep.

Prompt: “How many A4 pages would a 1073741823×1073741824 image printed at 600dpi be?”

Gemini Pro: “It would require approximately 33.1 billion (33,127,520,230) A4 pages to print that image.

To put that into perspective, the image would cover an area of 2,066 square kilometers […].

The Math

1. Image Dimensions: 1,073,741,823 × 1,073,741,824 pixels.

2. Physical Size: At 600 DPI, the image measures roughly 45.45 km wide by 45.45 km tall.

3. A4 Area: A single sheet of A4 paper (210 mm * 297 mm) covers approximately 0.06237 m².

4. Result: 2,066,163,436 m² / 0.06237 m² ≈ 33,127,520,230 pages.”

Alternatively, rink (https://rinkcalc.app/) :

> (1073741823 / (600/inch))**2 / A4paper

approx. 3.312752e10 (dimensionless)


Grok 4.1 beta finds the answer: approximately 33.1 billion pages.


Using a naive rectangular approximation (40×10^6 m by 20×10^6 m, with infinite resolution at the poles), that's a map of the Earth with a resolution of 37 mm per pixel at the equator. Lower resolution than I expected!
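
A quick sanity check in Python, assuming a 40,075 km equator:

  equator_m = 40_075_000                        # length of the equator in metres
  print(equator_m / 2**30 * 1000, "mm per pixel")   # ~37.3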


The only practical way to work with such large images is if they are tiled and pyramidal anyway.


Which JXL supports, by the way. Tiling is mandatory for images bigger than 2048x2048, and you can construct images based on an 8x downscaled version, recursing that up to 4 times for up to 4096x downscaling.
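
For illustration, here's the downscale ladder that implies (a sketch assuming the maximum JXL dimension and ceiling division at each level):

  w = 1_073_741_823                        # maximum JXL dimension
  for level in range(5):
      print(f"level {level}: {w:>13,} px per side ({8 ** level}x downscale)")
      w = -(-w // 8)                       # ceiling division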


That is awesome. In my domain, images (TIFFs usually) are up to 1M × 1M pixels, and scaling usually goes 4x per level, so that if you need 2x scaling you can just read four times as many tiles from the higher-resolution level and downscale. With 8x scaling you need to go a level further, reading 16 pixels from the image to create 1 pixel of output. Not great, but it would work, and 4096x scaling would make the lowest-resolution image 256 × 256, which is just what you need.
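
A hypothetical sketch of that level selection (pick_level and the 4x spacing are illustrative, not from any real library):

  def pick_level(scale: float, levels: int = 5, spacing: int = 4):
      """Return (level, residual): the pyramid level to read from and the
      extra downscale still to be done in software."""
      level = 0
      while level + 1 < levels and spacing ** (level + 1) <= scale:
          level += 1
      return level, scale / spacing ** level

  print(pick_level(2))   # (0, 2.0): read full-res tiles, downscale 2x
  print(pick_level(8))   # (1, 2.0): read level-1 tiles, downscale 2x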


what does pyramidal mean in this context?


Probably, multiple resolutions of the same thing. E.g. a lower res image of the entire scene and then higher resolution versions of sections. As you zoom in, the higher resolution versions get used so that you can see more detail while limiting memory consumption.


Replicated at different resolutions depending on your zoom level.

One patch at low resolution is backed by four higher-resolution images, each of which is backed by four higher-resolution images, and so on... All on top of an index to fetch the right images for your zoom level and camera position.


Except in the case of a format like JPEG, there is no duplication - higher layers are used to "fill in the gaps" in the data from lower layers.


I think it means encoded in such a way that you first have low res version, then higher res versions, then even higher res versions etc.


JPEG and friends transform the image data into the frequency domain. Regular old JPEG uses the discrete cosine transform [1] for this on 8x8 blocks of pixels. This is why you can see blocky artifacts in heavily compressed JPEG images [2]. JPEG XL uses a variable-block-size DCT.

Let's stick to old JPEG, as it's easier to explain. The DCT takes the 8x8 pixels of a block and transforms them into 8x8 magnitudes of different frequency components. In one corner you have the DC component, i.e. zero frequency, which represents the average of all 8x8 pixels. Around it you have the lowest non-zero frequency components. There are three of those: one with a non-zero x frequency, one with a non-zero y frequency, and one where both x and y are non-zero. The elements next to those are the next-higher frequency components.

To reconstruct the 8x8 pixels, you run the inverse discrete cosine transformation, which is lossless (to within rounding errors).

However, due to Nyquist [3], you don't need those higher-frequency components if you want a lower-resolution image. So if you instead strip away the highest-frequency components so that you're left with a 7x7 block, you can run the inverse transform on that to get a 7x7 block of pixels which perfectly represents a 7/8 = 87.5% sized version of the original 8x8 block. And you can do this for each block in the image to get an 87.5%-sized image.

Now, the pyramidal scheme takes advantage of this by rearranging how the elements of each transformed block are stored. First it stores the DC components of all the blocks in the image. If you used just those, you'd get an image which perfectly represents a 1/8th-sized image.

Next it stores all the lowest-frequency components for all the blocks. Using the DC components and those, you effectively have 2x2 blocks and can perfectly reconstruct a quarter-sized image.

Now, if the decoder knows the target size the image will be displayed at, it can then just stop reading when it has sufficiently large blocks to reconstruct the image near the target size.

Note that most good old-JPEG decoders support this already; however, since the blocks are stored one after another, it still requires reading the entire file from disk. If you have a fast disk and not-too-large images, it can often be a win regardless. But if you have huge images which are often not used at their full resolution, then the pyramidal scheme is better.
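
A minimal sketch of the truncation trick described above, using SciPy's orthonormal DCT (the 4/8 factor compensates for the amplitude difference between orthonormal transforms of different lengths):

  import numpy as np
  from scipy.fft import dctn, idctn

  block = np.random.default_rng(0).random((8, 8))   # one 8x8 pixel block
  coeffs = dctn(block, norm="ortho")                # forward 2D DCT

  # Keep the low-frequency 4x4 corner (DC plus the lowest AC components)
  # and inverse-transform it into a half-size version of the block.
  small = idctn(coeffs[:4, :4], norm="ortho") * (4 / 8)

  print(block.mean(), small.mean())   # means agree up to rounding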

[1]: https://en.wikipedia.org/wiki/Discrete_cosine_transform

[2]: https://eyy.co/tools/artifact-generator/ (artifact intensity 80 or above)

[3]: https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampli...


Tiled at different zoom levels


We call those mipmaps.


Yes, but unlike AVIF, JPEG XL supports progressive decoding, so you can see the picture in lower quality long before the download has finished. (Ordinary JPEG also supports progressive decoding, but in a much less efficient manner, which means you have to wait longer for previews with lower quality.)


I don’t think the issue with the exabyte image is progressive decoding, though it would at least get you an image of what is bringing down your machine while you wait for the inevitable!


An image of Earth at very roughly 4 cm × 4 cm resolution? (If I've knocked the zeros off correctly.)


Each pixel would represent roughly 16cm^2 using a cylindrical equal-area projection. They would only be square at the equator though (representing less distance E-W and more distance N-S as you move away from the equator).

No projection of a sphere on a rectangle can preserve both direction and area.


I admit it, I was applying Cunningham’s Law. Disappointingly(?), you came to the same answer.


I admit I trusted your math; you seem to be off by a factor of 4:

  You have: 510.1e6km^2/1073741824/1073741824
  You want: cm^2
   * 4.4244122
   / 0.22601872
Strangely enough, units lacks area_earth, so I used the number from https://iere.org/what-is-the-area-of-the-earth/


:D

I was starting with the length of the equator and assuming a spherical cow^Hplanet.


Did you perhaps use the diameter of the earth rather than the radius?

#PiIsWrong

https://www.tauday.com/

  You have: (40075km/tau)^2*4*pi/1073741824/1073741824
  You want: cm^2
   * 4.434018
   / 0.22552908


A selfie at that resolution would be some sort of super-resolution microscopy.


[flagged]


They still downvoted anyway lol


At least I didn't give Dang extra work.


Lol yeah Dang has a lot of flame wars to deal with


There are such stark and notable exceptions to this dilemma that it makes me wonder if they're the reason founders believe they can grind on through the growth of their startup. Gates, Zuckerberg, Jobs*, and Bezos are founders who led through immense growth of their companies. The "if they can do it, why can't I?" mentality has to be a factor.


