I also never really liked Calibri for professional stuff. Maybe I'm just a victim of conditioning, but Calibri always had a bit of a "web page" vibe, not an official-document vibe.
I personally think that Computer Modern/Latin Modern from LaTeX looks a lot better than Times New Roman. I wish they'd standardize on that but it might not be included in Microsoft Office, so I guess Times New Roman it is.
I don't know a ton about Swift, but it does feel like for a lot of apps (especially outside of the gaming and video encoding world), you can almost treat CPU power as infinite and exclusively focus on reducing latency.
Obviously I'm not saying you throw out big O notation or stop benchmarking, but it does seem like eliminating an extra network call from your pipeline is likely to have a much higher ROI than nearly any amount of CPU optimization; people forget how unbelievably slow the network actually is compared to CPU cache and even system memory. I think the advent of async-first frameworks and runtimes like Node.js and Vert.x and Tokio is sort of the industry's acknowledgement of this.
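As a toy sketch of that round-trip point (the endpoints below are made up): two independent calls issued back to back cost roughly two round trips, while issuing them concurrently costs roughly one, which is a bigger win than most micro-optimizations on the CPU side.

    // Hypothetical endpoints; just illustrating sequential vs. concurrent I/O.
    async function loadSequential(): Promise<[unknown, unknown]> {
      const user = await fetch("https://api.example.com/user").then(r => r.json());
      const feed = await fetch("https://api.example.com/feed").then(r => r.json());
      return [user, feed]; // latency: roughly two network round trips
    }

    async function loadConcurrent(): Promise<[unknown, unknown]> {
      const [user, feed] = await Promise.all([
        fetch("https://api.example.com/user").then(r => r.json()),
        fetch("https://api.example.com/feed").then(r => r.json()),
      ]);
      return [user, feed]; // latency: roughly the slower of the two round trips
    }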
We all learn all these fun CPU optimization tricks in school, and it's all for naught, because anything we do in CPU land is probably going to be undone by a lazy engineer making superfluous calls to Postgres.
The answer to that would very much be: "it depends".
Yes, of course, network I/O > local I/O > most things you'll do on your CPU. But regardless, the answer is always to measure performance (through benchmarking or telemetry), find your bottlenecks, then act upon them.
I recall a case in Firefox in which we were bitten by an O(n^2) algorithm running at startup, where n was the number of tabs to restore; another in which several threads were fighting each other to load components of Firefox and ended up hammering the I/O subsystem; but also cases of the executable being too large, data not fitting in the CPU cache, Windows requiring a disk access to normalize paths, etc.
I worked on a resource-intensive Android app for some years, and it got a good performance boost after implementing parallelization. But mostly on old, shitty devices.
Some of this is because you’re leaning on the system to be fast. A simple async call does a lot of stuff for you. If it was implemented by people who treated CPU power as if it was infinite, it would slow you down a lot. Since it was carefully built to be fast, you can write your stuff in a straightforward manner. (This isn’t a criticism. I work in lower levels of the stack, and I consider a big part of the job to be making it so people working higher up have to think about this stuff as little as possible. I solve these problems so they can solve the user’s problem.)
It’s also very context dependent. If your code is on the critical path for animations, it’s not too hard to be too slow. Especially since standards are higher. You’re now expected to draw a frame in 8ms on many devices. You could write some straightforward code that decodes JSON to extract some base64 to decompress a zip to retrieve a JPEG and completely blow out your 8ms if you manage to forget about caching that and end up doing it every frame.
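A minimal sketch of the fix, assuming a made-up payload shape (and skipping the zip step for brevity): do the expensive decode once and cache it, so the per-frame path is just a lookup.

    // Cache the decoded image so the JSON -> base64 -> JPEG work is not redone
    // every frame. The payload shape and cache key are hypothetical.
    const assetCache = new Map<string, ImageBitmap>();

    async function getAsset(key: string, rawJson: string): Promise<ImageBitmap> {
      const cached = assetCache.get(key);
      if (cached) return cached; // per-frame cost: a map lookup

      const payload = JSON.parse(rawJson) as { image: string };                 // JSON decode
      const bytes = Uint8Array.from(atob(payload.image), c => c.charCodeAt(0)); // base64 -> bytes
      const bitmap = await createImageBitmap(                                   // bytes -> bitmap
        new Blob([bytes], { type: "image/jpeg" })
      );
      assetCache.set(key, bitmap);
      return bitmap;
    }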
Yeah, fair. I never found poll/select/epoll or the Java NIO Selector to be terribly hard to use, but even those are fairly high-level compared to how these things are implemented in the kernel.
Right, and consider how many transformations happen to the data between the network call and the screen. In a modern app it's likely coming in as raw bytes, going through a JSON decoder (possibly with a detour through a native string type), likely getting marshaled into hash tables and arrays before being shoved into more specific model types, then getting passed along to a fully Unicode-aware text renderer that does high-quality vector graphics... There's a lot in there that could be incredibly slow. But since it's not, we can write a few lines of code to make all of this happen and not worry about optimization.
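To make that concrete, the first few hops alone might look something like this (field names invented for the example):

    // Raw bytes -> string -> generic arrays/objects -> app-specific model types.
    interface Article { id: number; title: string; }

    function decodeArticles(raw: Uint8Array): Article[] {
      const text = new TextDecoder("utf-8").decode(raw);                      // bytes -> string
      const generic = JSON.parse(text) as Array<Record<string, unknown>>;     // string -> hash tables/arrays
      return generic.map(o => ({ id: Number(o.id), title: String(o.title) })); // -> model types
    }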
Something I just started doing yesterday, and I'm hoping it catches on, is that I've been writing the spec for what I want in TLA+/PlusCal at a pretty high level, and then I tell Codex to implement exactly to the spec. I tell it not to deviate from the spec at all, and to be as uncreative as possible.
Since it sticks pretty close to the spec and since TLA+ is about modifying state, the code it generates is pretty ugly, but ugly-and-correct code beats beautiful code that's not verified.
It's not perfect; something that naively adheres to a spec is rarely optimized, and I've had to go in and replace stuff with Tokio or Mio, or optimize a loop, because the resulting code was too slow to be useful. Sometimes the code is just too ugly for me to put up with and I need to rewrite it. But the time that takes is generally considerably lower than if I were doing the entire translation myself.
The reason I started doing this: the stuff I've been experimenting with lately has been lock-free data structures, and I guess what I am doing is novel enough that Codex does not really appear to generate what I want; it will still use locks and lock files and when I complain it will do the traditional "You're absolutely right", and then proceed to do everything with locks anyway.
In a sense, this is close to the ideal case that I actually wanted: I can focus on the high-level math-y logic while I let my metaphorical AI intern deal with the minutiae of actually writing the code. Not that I don't derive any enjoyment out of writing Rust or something, but the code is mostly an implementation detail to me. This way, I'm kind of doing what I'm supposed to be doing, which is "formally specify first, write code second".
For the first time I might be able to make a case for TLA+ to be used in a workplace. I've been trying for the last nine years, with managers who constantly say they'll "look into it".
Interesting. Just the other day I was asking whether iterating in Haskell or Prolog wouldn't help both convergence speed and token use. I wish there were a group studying how to do proper engineering with LLMs without losing the modeling/verification aspect.
You might find success with having the LLM contribute to the spec itself. It suddenly started to work with the most recent frontier models, to the point that the economics of writing them shifted, due to each turn getting 10-100x cheaper to get right.
I agree. It's not like we're ever going to get to a state where we say "oh wow, all potential work is done, there's literally nothing left to do".
Like pretty much every technical innovation in history, when we have access to more tools, we just figure out how to solve bigger problems. People might have felt bad for horse breeders who lost out when planes, trains, and automobiles became ubiquitous, but people adapted around it. Now people can work and travel around the world, and there are industries around all these things. It's generally applied to parallelism, but I think it applies here: https://en.wikipedia.org/wiki/Gustafson%27s_law
While I've had my issues with how well "vibe coding" performs right now, ultimately if I can get something to handle the boring and tedious parts of programming, that frees up time for me to focus on stuff I find more fun or interesting, or at the very least frees me up to work on more complicated problems instead of spending half a day writing and deploying yet another "move stuff from one Kafka topic to another Kafka topic" program.
I don't believe in the rapture (or really anything in Christianity), but why would the idea of the rapture being a recent idea change anything? Why would it being suggested 2000 years ago suddenly make it more likely to be true?
If the idea goes back to the first century, then it is more likely that Jesus or his disciples or Paul knew the idea, or believed or taught it, regardless of what was recorded in the New Testament. Since it doesn't, it is much less likely to be a valid teaching of Jesus.
Often the issue is that Christians tend to treat the books of the Bible as being univocal, that is, that the authors all had the same ideas and believed the same things. Upon close scrutiny, it becomes obvious that they didn't all believe the same things. This means taking verses out of context from different books and trying to make them agree is a poor way to understand each author's unique message.
There are millions of people who believe that God's chosen prophet was Joseph Smith, about two hundred years ago. Certainly they don't think that having close proximity to Jesus is an important factor.
Ultimately the more ridiculous and unbelievable the beliefs are the stronger your faith must be to keep them. It is a paradox of religion. I believe this is the underlying force that pushes people out to the fringes of their religion: the need to prove themselves the better man/woman by showing how strong their faith is.
Many years ago, I decided to reinvent the `blink` tag, because the monsters who make browsers removed support for it.
I didn't know you could just make up tags, but I figured I'd give it a shot, and with a bit of jQuery glue and playing with visibility settings, I was able to fix browsers and bring back the glorious blinking. I was surprised you could just do that; I would have assumed that the set of tags was final.
I thought about open sourcing it, but it was seriously like ten lines of code and I suspect that there are dozens of things that did the same thing I did.
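For the curious, the whole trick is roughly this; a sketch with plain DOM calls instead of jQuery, and the 500ms period is an arbitrary choice:

    // Browsers treat unknown tags like <blink> as ordinary inline elements,
    // so reviving the effect is just toggling visibility on a timer.
    function resurrectBlink(periodMs = 500): void {
      const blinkers = document.querySelectorAll<HTMLElement>("blink");
      let visible = true;
      setInterval(() => {
        visible = !visible;
        blinkers.forEach(el => {
          el.style.visibility = visible ? "visible" : "hidden";
        });
      }, periodMs);
    }

    resurrectBlink();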
> because the monsters who make browsers removed support for it
Most browsers never implemented it in the first place. Safari, Chrome, IE and Edge never had it. In terms of current browser names, it was only Firefox and Opera that ever had it, until 2013.
Huh, I would have sworn that Internet Explorer had the blink tag at one point, but I think my parents had Netscape and then Mozilla pretty early so maybe that's what I'm confusing it with.
Regardless, I stand by my comment. Monsters! I want my browser to be obnoxious.
In theory, in 1996 Netscape and Microsoft agreed to kill <blink> and <marquee> <https://www.w3.org/People/Raggett/book4/ch02.html>, but although they were kept out of the spec, neither removed its implementation, and then IE dominated the browser market, and <marquee> became popular enough that the remaining parties were bullied into shipping it (Netscape in 2002, Presto in 2003, no idea about the KHTML/WebKit timeline), and so ultimately it was put into the HTML Standard.
I think I remember reading articles about how to implement blink in IE using behaviors, some IE-only thing that didn't take hold(?). Maybe this was around IE5.
Oh, those times. IE accepted <table><tr><td><tr></table> whereas Netscape demanded <table><tr><td></td></tr></table> and would otherwise just not render anything - just blank grey.
Humans loved it when they had to type all this by hand, because a missing /td would not kill your page.
Permissiveness won out.
I also remember the day JavaScript hit the net, and all those "chat rooms" that did <meta refresh> to look live suddenly had no defence against it.
Oh those times. <script>document.createElement("table").appendChild(document.createElement("table"))</script> would crash IE, and some similar stupidities could even cause a BSOD as late as Windows 98.
(I think that was one such incantation, but if it wasn’t quite that it was close.)
Flash was fun but it was never built for being responsive and handling desktop and mobile in one app. Everything was basically fixed layout. Adding in that responsiveness would have probably killed the "easy" part.
Although Flash really sucked as a technology, it did inspire a lot of visual artistry on the web. Half of the cool stuff you saw on StumbleUpon was made with Flash by people who weren't proficient with JS/CSS, which weren’t capable enough to achieve the same results anyway.
Good riddance. I once had the honor of being featured together with many other artists on an HP website. It was implemented in Flash though, meaning it existed as a smallish rectangle in the middle of a website; within that rectangle you could click through to browse the exhibition one artwork at a time. This meant that your path through the Flash app was not connected to the browser's address bar and exhibits did not get a URL of their own. When you wanted to direct others to your piece, the only way was by giving them a "Japanese visitor's address", as in "go to this well-known named point (the domain name), from there walk west and when you see a tall black building, turn right and take the third alley to your left; I'm living in the fifth house down that alley".
I just kind of feel like removing it makes the internet less fun. 90's internet was basically a playground for geeky people to make things purely for fun, with basically no ambitions of making any money; people would host their own terrible web pages. My first real introduction to "programming" (other than making a turtle walk around) was when I was nine years old and bought "Make Your Own Web Page : A Guide for Kids" from my school, and this was something a nine year old kid could do because the web was easy and fun to program for. There weren't a billion JavaScript frameworks, CSS was new (if it was even supported), everything was done with tags and I loved it.
Yeah, the sites would be ugly and kind of obnoxious, but there was, for want of a better word, a "purity" to it. It was decidedly uncynical; websites weren't being written to satisfy a corporation like they all are now. You had low-res tiling backgrounds, a shitty midi of the X-files theme playing on a bunch of sites, icons bragging about how the website was written in Notepad, and lots and lots of animated GIFs.
I feel like the removal of blink is just a symptom of the web becoming more boring. Instead of everyone making their own website and personalizing it, now there's like ten websites, and they all look like they were designed by a corporation to satisfy shareholders.
Before blogging was called "blogging," people just wrote what they wrote about whatever they wanted to, however they did that (vi? pico? notepad? netscape communicator's HTML editor? MS frontpage? sure!), uploaded it to their ISP under ~/public_html/index.html or similar [or hosted it on their own computer behind a dialup modem], and that was that.
Visibility was gained with web rings (the more specialized, the better -- usually), occasional keyword hits from the primitive search engines that were available, and (with only a little bit of luck necessary) inclusion on Yahoo's manually-curated index.
And that was good enough. There was no ad revenue to chase, nor any expectation that it'd ever be wildly popular. No custom domains, no Wordpress hosts, no money to spend and none expected in return. No CSS, no frames, no client-side busywork like JS or even imagemaps.
Just paragraphical text, a blinking header, blue links that turned purple once clicked, and the occasional image or table. Simple markup, rendered simply.
Finish it up with a grainy low-res static GIF of a cat (that your friend with a scanner helped make from a 4x6 photograph), some more links to other folks' own simple pages, a little bright green hit counter at the bottom that was included from some far-flung corner of the Internet, a Netscape Now button, and let it ride.
I've been trying to migrate back to command-line-only applications to get a facsimile of this.
I don't think that command-line tools are better in any kind of "objective" sense, but I find that if you live primarily within tmux + neovim (and maybe Codex/Claude if you want to be super cool), then it's much easier to not be distracted by the rest of the world.
Nowadays, when I do work I will have a full-screen terminal window open. I have an utterly gigantic 85" 8K TV as my "monitor" and I will have an ungodly number of tmux splits, but importantly I don't think those splits distract from actually doing work. At some point I will figure out how to get the dbt Cloud `preview` functionality working locally, and then I think I can avoid needing a browser for the vast majority of my work.
Sometimes it does kind of feel like I'm just being a hipster by using a lot of tools that have existed since antiquity, but I think they do a good job at not being distracting.
Forgive the messy desk. I wish I could say it's atypical, but it's not. I always have a ton of projects going on concurrently and as a result it's easy for stuff to pile up. I'll probably clean it this week.
My work computer isn't plugged in so I'm afraid you'll have to use your imagination for the million tmux splits.
I do think it's a lot about personality, though I gotta say that I don't really think it should be like that.
My dad had a manager (who was a VP) that he privately nicknamed "VPGPT", because despite being a very polite and personable guy, he knew pretty much nothing about the engineering he was ostensibly managing, and basically just spoke in truisms that sounded kind of meaningful unless you did any kind of analysis on them.
I'm not saying that AI would necessarily be "better", but I do kind of hate how people who are utterly incapable of anything even approaching "technical" end up being the ones making technical decisions.
I don't even think we need ChatGPT or anything for this. Instead, just create an n8n job that runs nightly that sends a company-wide email that says "we are continuing to strive to implement AI into our application". Maybe add a thing talking about how share price going down is actually a blessing in disguise, depending on how the market is doing, obviously.
Don't steal this idea it's mine I'm going to sell it for a million dollars.
I must be a weird CEO because I’ve lost count of the number of times I’ve had to explain to people why shoving AI into our application will only make it worse not better.
Some CEOs are better than others, but I think a lot of CEOs, especially for BigCos, don't really know what's actually happening in their company so instead of actually contributing to anything, they just defer to buzzwords that they think the shareholders want to hear.
You know, I never even considered doing that, but it makes sense; whatever overhead is incurred by writing that static byte pattern is still almost certainly minuscule compared to the overhead of something like a garbage collector.
IMO the important tradeoff here is that a few microseconds spent sanitizing the memory saves the millions of dollars of headache when memory-unsafe languages fail (which happens regularly).
I agree. I almost feel like this should be a flag on `free`. Like if you pass in 1 or something as a second argument (or maybe a `free_safe` function or something), it will automatically `memset` whatever it's freeing with 0s, and then do the normal freeing.
Alternatively, just make `free` do that by default, and add a `fast_and_furious_free` which doesn't, for the few hotspots where that tiny bit of performance is actually needed.
The default case should be the safe correct one, even if it “breaks” backward compatibility. Without it, we will forever be saddled with the design mistakes of the past.
Non-deterministic latency is a drawback, but garbage collection is not inherently slower than manual memory management/reference counting/etc. Depending on the usage pattern it can be faster. It's a set of trade-offs.