Graphics have been a blind spot for me for pretty much my entire career. I more or less failed upward into where I am now (which ended up being a lot of data and distributed stuff). I do enjoy doing what I do and I think I'm reasonably good at it so it's hardly a "bad" thing, but I (like I think a lot of people here) got into programming because I wanted to make games.
Outside of playing with OpenGL as a teenager to make a planet orbit around a sun, a bad Space Invaders clone in Flash where you shoot a bird pooping on you, a really crappy Breakout clone with Racket, and the occasional experiments with Vulkan and Metal, I've never really fulfilled the dream of being the next John Carmack or Tim Sweeney.
Every time I try and learn Vulkan I end up getting confused and annoyed about how much code I need to write and give up. I suspect it's because I don't really understand the fundamentals well enough, and as a result jumping into Vulkan I end up metaphorically "drinking from a firehose". I certainly hope this doesn't happen, but if I manage to become unemployed again maybe that could be a good excuse to finally buckle down and try and learn this.
I concur; just last month I started with `wgpu` (the Rust implementation of WebGPU) after exclusively using OpenGL (since 2000, I think? via Delphi 2). Feels a bit verbose at first (with all the pipelines/bindings setup), but once you have your first working example, it's smooth sailing from there. I kind of liked the (discontinued) `glium`, but this is better.
Yeah you're not the first one to mention that to me. I'll probably try WebGPU or wgpu next time I decide to learn graphics. I'd probably have more fun with it than Vulkan.
I feel the same. I was trying to make some "art" with shaders.
I was inspired by ZBrush and Maya, but I don't think I can learn what is necessary to build even a small clone of these gigantic pieces of software, unless I work with this on a day-to-day basis.
The performance of ZBrush is so insane... it is mesmerizing. I don't think I can go deep into this while attending university.
> Every time I try and learn Vulkan I end up getting confused and annoyed about how much code I need to write and give up.
Vulkan isn't meant for beginners. It's a lot more verbose even if you know the fundamentals. Modern OpenGL would be good enough. If you have to use Vulkan, maybe use one of the libraries built on top of it (I use SDL3, for example). You still have the freedom to do whatever you want with shaders while leaving most of the resource management to those libraries.
Vulkan isn't a graphics API, it's a low level GPU API. Graphics just happens to be one of the functions that GPUs can handle. That can help understand why Vulkan is the way it is.
I have a hot take. Modern computer graphics is very complicated, and it's best to build up fundamentals rather than diving off the deep end into Vulkan, which is really geared toward engine professionals who want to shave every last microsecond off their frame-times. Vulkan and D3D12 are great, they provide very fine-grained host-device synchronisation mechanisms that can be used to their maximum by seasoned engine programmers. At the same time, a newbie can easily get bogged down by the sheer verbosity, and don't even get me started on how annoying the initial setup boilerplate is, which can be extremely daunting for someone just starting out.
GPUs expose a completely different programming and memory model, and the issue, I would say, is conflating computer graphics with GPU programming. The two are obviously related, don't get me wrong, but they can and do diverge quite significantly at times. This is even more true recently with the push towards GPGPU, where GPUs now combine several different coprocessors beyond just the shader cores, and can be programmed with something like a dozen different APIs.
I would instead suggest:
1) Implement a CPU rasteriser, with just two stages: a primitive assembler, and a rasteriser.
2) Implement a CPU ray tracer.
These can be extended in many, many ways that will keep you sufficiently occupied trying to maximise performance and features. In fact, even achieving some basic correctness will require quite a degree of complexity: the primitive assembler will of course need frustum- and back-face culling (and these will mean re-triangulating some primitives). The rasteriser will need z-buffering. The ray tracer will need lighting, shadow, and camera-ray intersection algorithms for different primitives, accounting for floating-point divergence; spheres, planes, and triangles can all be individually optimised.
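To give a sense of the scale involved: the core of stage two, an edge-function triangle fill plus a z-buffer, fits in a few dozen lines. A rough Rust sketch, with every name and the framebuffer layout invented purely for illustration:

    // Rough sketch: rasterise one triangle into an RGB framebuffer with a z-buffer.
    // Struct names and framebuffer layout are invented for illustration only.
    #[derive(Clone, Copy)]
    struct Vertex {
        x: f32, // screen-space x in pixels
        y: f32, // screen-space y in pixels
        z: f32, // post-projection depth, smaller = closer
    }

    // Edge function: the sign says which side of edge ab the point (cx, cy) lies on.
    fn edge(a: Vertex, b: Vertex, cx: f32, cy: f32) -> f32 {
        (b.x - a.x) * (cy - a.y) - (b.y - a.y) * (cx - a.x)
    }

    fn draw_triangle(
        tri: [Vertex; 3],
        color: [u8; 3],
        width: usize,
        height: usize,
        framebuffer: &mut [[u8; 3]], // width * height pixels
        zbuffer: &mut [f32],         // width * height depths, initialised to f32::INFINITY
    ) {
        let area = edge(tri[0], tri[1], tri[2].x, tri[2].y);
        if area <= 0.0 {
            return; // degenerate or back-facing (assuming counter-clockwise front faces)
        }
        for y in 0..height {
            for x in 0..width {
                let (px, py) = (x as f32 + 0.5, y as f32 + 0.5);
                // Barycentric weights from the three edge functions.
                let w0 = edge(tri[1], tri[2], px, py);
                let w1 = edge(tri[2], tri[0], px, py);
                let w2 = edge(tri[0], tri[1], px, py);
                if w0 >= 0.0 && w1 >= 0.0 && w2 >= 0.0 {
                    // Interpolate depth and do the z-test.
                    let z = (w0 * tri[0].z + w1 * tri[1].z + w2 * tri[2].z) / area;
                    let idx = y * width + x;
                    if z < zbuffer[idx] {
                        zbuffer[idx] = z;
                        framebuffer[idx] = color;
                    }
                }
            }
        }
    }

Looping over a primitive list, clipping, and interpolating attributes other than depth all bolt onto that same inner loop, which is where the extensions come in.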
Try adding various anti-aliasing algorithms to the rasteriser. Add shading; begin with flat, then extend to per-vertex to per-fragment. Try adding a tessellator where the level of detail is controlled by camera distance. Add in early discard instead of the usual z-buffering.
To the basic Whitted CPU ray tracer, add BRDFs; add microfacet theory, add subsurface scattering, caustics, photon mapping/light transport, and work towards a general global illumination implementation. Add denoising algorithms. And of course, implement and use acceleration data structures for faster intersection lookups; there are many.
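The heart of the ray tracer is similarly compact. A rough ray-sphere intersection sketch in the same spirit (again purely illustrative, not prescriptive):

    // Rough sketch: ray/sphere intersection for a Whitted-style tracer.
    // Solves |o + t*d - c|^2 = r^2 for the nearest positive t, if any.
    #[derive(Clone, Copy)]
    struct Vec3 { x: f32, y: f32, z: f32 }

    impl Vec3 {
        fn sub(self, o: Vec3) -> Vec3 { Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z } }
        fn dot(self, o: Vec3) -> f32 { self.x * o.x + self.y * o.y + self.z * o.z }
    }

    struct Ray { origin: Vec3, dir: Vec3 } // dir assumed normalised

    struct Sphere { center: Vec3, radius: f32 }

    // Distance along the ray to the nearest hit in front of the origin, if any.
    fn intersect(ray: &Ray, sphere: &Sphere) -> Option<f32> {
        let oc = ray.origin.sub(sphere.center);
        let b = oc.dot(ray.dir); // half of the usual quadratic 'b'
        let c = oc.dot(oc) - sphere.radius * sphere.radius;
        let disc = b * b - c;
        if disc < 0.0 {
            return None; // ray misses the sphere entirely
        }
        // A small epsilon keeps shadow and reflection rays from re-hitting their origin.
        const EPS: f32 = 1e-4;
        let sqrt_d = disc.sqrt();
        let t_near = -b - sqrt_d;
        let t_far = -b + sqrt_d;
        if t_near > EPS {
            Some(t_near)
        } else if t_far > EPS {
            Some(t_far) // origin is inside the sphere
        } else {
            None
        }
    }

Shading, shadow rays, BRDFs, and the acceleration structures all hang off what you do once a test like this reports a hit.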
Working on all of these will frankly give you a more detailed and intimate understanding of how GPUs work and why they have been developed a certain way than programming with something like Vulkan will, where you spend your time filling in struct after struct.
After this, feel free to explore either of the two more 'basic' graphics APIs: OpenGL 4.6 or D3D11. shadertoy.com and shaderacademy.com are great resources to understand fragment shaders. There are again several widespread shader languages, though most of the industry uses HLSL. GLSL can be simpler, but HLSL is definitely more flexible.
At this point, explore more complicated scenarios: deferred rendering, pre- and post-processing for things like ambient occlusion, mirrors, temporal anti-aliasing, render-to-texture for lighting and shadows, etc. This is video-game focused; you could go another direction by exploring 2D UIs, text rendering, compositing, and more.
As for why I recommend starting on the CPU, only to end up back on the GPU again, one may ask: 'hey, who uses CPUs any more for graphics?' Let me answer: WARP[1] and LLVMpipe[2] are both production-quality software rasterisers, frequently loaded during remote desktop sessions. In fact 'rasteriser' is an understatement: they expose full-fledged software implementations of D3D10/11 and OpenGL/Vulkan devices respectively. And naturally, most film renderers still run on the CPU, due to their improved floating-point precision; films can't really get away with the ephemeral smudging of video games. Also, CPU cores are quite cheap nowadays, so it's not unusual to see a render farm of a million+ cores chewing away at a complex Pixar or DreamWorks frame.
4 years ago I tackled exactly those courses (raytracer[0] first, then CPU rasterizer[1]) to learn the basics. And then, yes, I picked up a lib that's a thin wrapper around OpenGL (macroquad) and learned the basics of shaders.
So far this has been enough to build my prototype of a multiplayer Noita-like, with radiance-cascades-powered lighting. Still haven't learned Vulkan or WebGPU properly, though am now considering porting my game to the latter to get some modern niceties.
I had the 2019 Macbook Pro i9, so I think a function to determine thermal throttling could be written very simply:
function isThermalThrottling() {
    return true;
}
Seriously, I loved that computer for the most part but I was a little annoyed that I paid a lot of money for the i9 CPU just to get worse performance than the i7.
I am writing this comment from a 2019 i9. I have to charge it from the right hand ports. I think that is dumb, but it did solve the issue. I have no idea how I came to that conclusion (I almost certainly read about it somewhere), but there were certainly a couple of weeks where it was driving me crazy.
A dumb thing for sure. I still like macOS better than Windows, and I'm heavily invested in a production workflow with Logic. Moving to Linux would be my next move, but after making that dumb change it's quite a functional machine.
It was so much of a problem that at work we added a check that you were charging from the right ports to our internal doctor script (think like `brew doctor`).
I help out with an emulation community. Any time anyone with a 2019 MBP comes in with issues, I stop them from giving any more details and just have them check this first.
FWIW, I replaced that MacBook with a Thinkpad (AMD Edition) about a year ago, and I have been extremely happy with it. Not only was it one of the easiest Linux installs I have ever had, but the hardware feels solid, the keyboard (while not one of the legendary classic Thinkpad keyboards) is nice to type on, the 4K screen looks nice, and everything just feels well built and snappy.
Outside of the terrible speakers, it is a nearly perfect computer. I don't really mind a crappy speaker on a laptop since it usually lives on mute, and when I need decent-enough audio quality I will plug in headphones or connect a Bluetooth speaker, but YMMV.
Still, if this computer ever breaks, then I will likely buy another Thinkpad.
> I have to charge it from the right hand ports. I think that is dumb, but it did solve the issue.
I _had to_ do this for a while (around 2023, I think, not that it matters), but I no longer have to. I don't know what has changed, unfortunately; I haven't reinstalled anything, and I can't say I have uninstalled anything either. It's really weird...
I had the 2019 MacBook Pro i9 too. Applying thermal pads to the VRM module fixed the throttling problem. It was like a brand new computer, the one I thought I was purchasing.
After buying it, I regularly had throttling issues. Nothing seemed to help. I tried all the recommended tricks but wasn't getting anywhere. (I had two external monitors and was using Adobe Creative Suite back then.)
I came across some forum posts about the VRM module not being able to cool itself, which was making the system think it was overheating. It sounded reasonable so I decided to give it a shot. I got the thermal pads, carefully opened the laptop, and applied a few layers to the various components to make sure they were touching the case (this is what makes the mod work properly: heat transfer to the aluminum case).
It worked like magic, it felt like a brand new computer after months of bad experiences.
The only side effect was that the bottom case gets really hot (that heat needs to go somewhere), so I couldn't comfortably use it on my lap. But I never regretted doing the mod for a second!
I retired what was then my favorite computer for an M3 MacBook Air with 24GB of RAM and couldn't be happier. I still have fond memories of my 16-inch 2019 MacBook Pro; if you still use that computer, please do yourself a favor and at least look into adding thermal pads to the VRM modules.
I did the same mod on my 2019 MBP 16, and I got two more years of useful life out of it in exchange for nearly burning myself a few times when watching a movie or running some kind of CPU-intensive task with the laptop touching my body (even through a t-shirt or jeans!). Eventually, the airflow from the cooling fans started to weaken and the display flex cable started to get a bit squirrely on me. I'm on a 14" M-series MBP now, and it's freakishly quiet and efficient.
I think it was just that the i9 was a bad CPU for any laptop.
I had a Dell and swapped it for an i7 as soon as that was possible.
I think they only made those because they knew there would be enough people thinking “bigger better CPU == better laptop” - and yeah, it seems like I am not the only one that got caught by that. But I also trusted that someone there did some testing…
There is one pretty epic stackoverflow post that AI searches probably won't find.
It turned out a lot of the thermal throttling (kernel_task usage) on the i9 MacBook Pros went away if you plugged in the power on the right side instead of the left.
The "why" turned out to be a chip (Thunderbolt, I think) that didn't have enough cooling.
One could also put quiet cabinet fans under the i9 Macbook Pro to make it draw air away and run markedly faster. [1]
That, combined with the keyboard failures, meant I could put a wireless keyboard on top of the laptop keyboard and use it, with fans underneath.
This is how I took it to the Apple store for warranty repair, which made a point of it having a pretty massive design flaw.
I asked them if it was a feature of all MacBooks that, when you buy the fastest ones, you have to cool them the right way and not use the keyboard in case there might be dust in the air. The laptop was way too overspecced.
The i9 laptops were made far too thin to cool themselves appropriately in any situation, full stop.
All to say, if I could detect which USB-C port was pulling power in on those laptops, I could guess pretty accurately what would happen on that i9.
It's sad to know the M4 Max MacBook Pro has the same issue, even though the laptop is thicker. Doesn't make me want to upgrade anymore.
I had that problem as well. Especially when connected to two external monitors.
I did not love the machine, and the M1 Max was such a big upgrade because of that. I could later upgrade to the M3 Max and give my M1 to somebody else. Both Apple Silicon machines are still going strong, and will be for a long time I guess.
I had that laptop and it was the worst computer I have ever owned. As soon as you booted it, the fans would start spinning. There were sometimes kernel panics when plugging or unplugging Thunderbolt devices.
I have an M1 Max MBP now and it has been absolutely perfect.
My favorite and most painful issue was a bug in USB charging. Sometimes it would fail to charge from my monitor (USB-C) yet believe it was connected. The battery would eventually run to zero and the machine would shut off without warning. No low-battery warning would be shown because it believed it was charging, even though it was not. Resolved with my M3.
Also fun with that generation is that you can’t plug in a dead laptop and start using it right away. Takes about ten minutes of charging before you can power it on.
Also fun, it would not establish power delivery with my monitor in this state. I’d have to plug it in with a regular charger to bootstrap it. Also resolved with my M3.
Now that it’s aged, the super capacitor for the clock no longer holds charge and the time is usually wrong on cold boot. I wish that was serviceable.
The laptop it was replacing was a terrible Asus computer that literally started falling apart and delaminating after about six months. It felt like it would break a bit more every time I touched it. Asus themselves were wholly unwilling to do anything about it, and they acted like I had been juggling with the damn thing when all it ever did was live on my desk or next to my bed.
It was the third Asus computer I had owned that broke way earlier than it had any right to, and I swore a blood oath that I will not buy another Asus product.
Point is, considering how terrible that laptop was, “annoying thermal throttling” was still a considerable upgrade, so I loved it in spite of it.
The thermal solution for the last generation of Intel MacBook Pros was very bad.
When the Apple Silicon models were released, everybody attributed the lower fan noise improvements entirely to the new chip, but the newer chassis had a much better thermal solution too.
I could run my M1 MacBook Pro at similar power draws to my Intel MacBook Pro and the M1 would be very quiet while the Intel sounded like a hair dryer.
I had that one with 64GB: I got a new one twice but could not get it to act normally. It just got so incredibly hot, it was uncomfortable. It was one of my worst hardware purchases.
Mine was also 64GB. I was working at Apple at the time and I had a discount that I wanted to maximize, so I went onto the shop website and maxed out literally every option available to me. Total damage ended up being like 4 grand.
I don't remember it being super uncomfortable on my lap, though I almost exclusively wear very thick jeans so maybe I was somewhat shielded. It did get super hot.
Despite that, I did actually like the laptop a lot; it had a nice heavy weight to it, which was actually good for me since it felt very firm when I typed on it on my desk, and the keyboard was nice to type on. It benefited from low expectations because the laptop it was replacing was an Asus that was such a piece of shit that I swore a blood oath that I would never give Asus money ever again, which I have not broken.
I had a Mid-2015 MacBook Pro with an i7 and I still ran the fans at full speed at all times (actually above full speed). I had a couple fan failures due to that, but otherwise I think it was worth it.
Yeah, a decade or so ago, I was constantly looking for GUIs to drive ffmpeg, but eventually I kind of realized I was spending more time playing with GUIs than just learning the basics of ffmpeg.
I will admit that I still do need to occasionally look up specific stuff, but for the most part I can do most of the common cases from memory.
I find time accuracy to be ridiculously interesting, and I have had to talk myself out of buying a used atomic clock to play with [1]. I think precision time is very cool, and a small part of me wants to create the most overly engineered wall clock using a Raspberry Pi or something to get sub-microsecond accuracy.
Sadly, they're generally just a bit too expensive for me to justify it as a toy.
I don't work in trading (though not for lack of trying on my end), so most of the stuff I work on has been a lot more about "logical clocks", which are cool in their own right, but I have always wondered how much more efficient we could be if we had nanosecond-level precision to guarantee that locks are almost always uncontested.
[1] I'm not talking about those clocks that radio to Colorado or Greenwich, I mean the relatively small ones that you can buy that run locally.
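Tangent on the logical clocks: the Lamport variant is tiny, which is part of why I find them cool. A rough, illustrative sketch (not production code):

    // Minimal Lamport-style logical clock: ordering comes from counting events
    // and message exchanges, not from wall-clock time. Illustrative only.
    use std::sync::atomic::{AtomicU64, Ordering};

    struct LamportClock {
        counter: AtomicU64,
    }

    impl LamportClock {
        fn new() -> Self {
            Self { counter: AtomicU64::new(0) }
        }

        // Local event: just tick.
        fn tick(&self) -> u64 {
            self.counter.fetch_add(1, Ordering::SeqCst) + 1
        }

        // Incoming message carries the sender's timestamp; jump past it.
        fn observe(&self, remote: u64) -> u64 {
            let mut current = self.counter.load(Ordering::SeqCst);
            loop {
                let next = current.max(remote) + 1;
                match self.counter.compare_exchange(current, next, Ordering::SeqCst, Ordering::SeqCst) {
                    Ok(_) => return next,
                    Err(actual) => current = actual,
                }
            }
        }
    }

    fn main() {
        let clock = LamportClock::new();
        let send_ts = clock.tick();      // timestamp attached to an outgoing message
        let recv_ts = clock.observe(40); // a message arrives stamped 40
        println!("sent at {send_ts}, advanced to {recv_ts}"); // 1, then 41
    }

The point is that ordering comes from counting events and message hand-offs rather than from any physical timestamp, which is exactly why I daydream about what cheap, trustworthy nanosecond-level physical time would buy us.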
You should probably hate Google too, but I think a lot of Palantir hate comes from (well deserved) hatred for Peter Thiel, who has injected himself directly into conservative politics.
Billionaires buying their way into the political system should be hated implicitly, no matter their political affiliation.
I removed it with devtools, so surely there are a dozen workarounds, but still, it's just weird that a page that is supposed to show a calendar doesn't show a calendar.
The website isn't the calendar... the print is. So if you Ctrl+P, you can see what you'll get. That's not a workaround, it's the purpose of the website. I guess I'm confused about how you're confused lol
I so want to like these vibe coding agents, and sometimes I do, but it really does kind of suck the joy out of things.
What I was hoping for was that I could effectively farm out work to my metaphorical AI intern while I get to focus on fun and/or interesting work. Sometimes that is what happens, and it makes me very happy when it does. A lot of the time, however, it generates code that is wrong or incomplete (while claiming it is complete), and so I end up having to babysit the code, either by further prompting or by just editing the code myself.
And then it makes a lot of software engineering become "type prompt, sit and wait a minute, look at the code, repeat", which means I'm decidedly not focusing on the fun part of the project and instead I'm just larping as a manager who backseat codes.
A friend of mine said that he likes to do this backwards: he writes a lot of the code himself and then he uses Claude Code to debug and automate writing tedious stuff like unit tests, and I think that might make it a little less mind numbing.
Also, very tangential, and maybe my prompting game isn't completely on point here, but Codex seems decidedly bad at concurrent code [1]. I was working on some lock-free data store stuff, and Codex really wanted to add a bunch of lock files that were wholly unnecessary. Oh, and it kept trying to add mutexes to the Rust code, no matter how many times I told it that I don't want locks and that it should use one-shot channels instead. To be fair, when I went and fixed the functions myself in a few spots and then told it to use that as an example, it did get a little better.
[1] I think this particular case is because it's trained on example code from Github and most code involving concurrency uses locks (incorrectly or at least sub-optimally). I guess this particular problem may be more of the fault of American universities teaching concurrent programming incorrectly at the undergrad level.
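To be concrete about the one-shot channel thing, this is roughly the shape I keep asking for: a toy sketch using std's mpsc channel as a stand-in for a dedicated one-shot type (e.g. tokio's oneshot), with no mutex anywhere:

    // Toy sketch of the "hand the result back over a one-shot channel" pattern,
    // instead of parking the result behind a Mutex. std::sync::mpsc stands in
    // for a dedicated one-shot channel type here.
    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel::<u64>();

        // Worker computes a value and sends it exactly once; no shared mutable
        // state, so nothing to lock.
        thread::spawn(move || {
            let result = (1..=1_000_000u64).sum();
            // Receiver may have gone away; ignore the send error in that case.
            let _ = tx.send(result);
        });

        // The receive is the only synchronisation point.
        match rx.recv() {
            Ok(sum) => println!("worker finished: {sum}"),
            Err(_) => eprintln!("worker dropped the channel without sending"),
        }
    }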
I find it useful to let one agent come up with a plan after a review and another agent implement the plan. For example: Gemini reviewing the code, Codex writing a plan, and then Claude Code implementing it.
What about the reverse: after Claude Code implements it, let Gemini/Codex do a code review for bugs and architecture revisions? I found it is important to prompt it to make only absolutely minimal changes to the working code, or unwanted code clobbering will happen.
The Java streams are cool and I like them, but they're not a replacement for a functional type system or a functional language.
`map` is a lot more than a fancy for-loop for lists and arrays; it's about abstracting away the entire idea of context. Java streams aren't a substitute for what you have in Haskell.
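To make the "context" point concrete in Rust terms (only because half this thread is already Rust): map has the same shape over Option, Result, and iterators, and that shared shape, not the loop, is the abstraction. A contrived illustration:

    // Contrived illustration: `map` is defined per context (Option, Result,
    // Iterator, ...), not just "loop over a collection". The shape is the same
    // each time: apply a function inside whatever the wrapper means.
    fn main() {
        let maybe: Option<i32> = Some(2);
        let doubled_maybe = maybe.map(|x| x * 2); // Some(4); None stays None

        let parsed: Result<i32, _> = "21".parse::<i32>();
        let doubled_ok = parsed.map(|x| x * 2); // Ok(42); Err passes through untouched

        let doubled_all: Vec<i32> = vec![1, 2, 3]
            .into_iter()
            .map(|x| x * 2) // lazily maps each element
            .collect();     // [2, 4, 6]

        println!("{doubled_maybe:?} {doubled_ok:?} {doubled_all:?}");
    }

Haskell goes further by letting you write code that is generic over the context itself (Functor, Monad, and friends), which is the part Java's Stream API has no real answer to.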