
Your suspicion could have easily been cleared by reading the paper.

If you're short on time: the paper reads a bit dry, but falls within the norm for academic writing. The github repo shows work over months in 2024 (leading up to the release of 3.13) and some rush from Dec 2025 to Jan 2026, probably to wrap things up for the release of this paper. All commits on the repo are from the author, but I didn't look through the code to check whether there was some Copilot intervention.

[0] https://github.com/Joseda8/profiler


If we go by Microsoft's 2020 account of 1 billion devices running Windows 10 [0], and assume all of those are running some kind of Electron app (or several?), you easily get your gigawatt by just saving 1 watt (on average) across each device. I suspect you'd probably go higher than 1 gigawatt, though I'm not sure you'd reach another order of magnitude. Then again, the noisy fan on my notebook begs to differ, so maybe the 10 GW mark could be doable...
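A back-of-the-envelope sketch of that estimate (the 1 W average saving per device is the assumption doing all the work here):

```python
devices = 1_000_000_000       # Microsoft's 2020 Windows 10 install-base figure
saving_per_device_w = 1.0     # assumed average saving per device, in watts

total_gw = devices * saving_per_device_w / 1e9
print(f"{total_gw:.1f} GW")   # 1.0 GW
```

Every extra watt saved per device scales the total linearly, so 10 GW would need an average saving of 10 W per machine.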

[0] https://news.microsoft.com/apac/2020/03/17/windows-10-poweri...


There are 30,000 different x-platform GUI frameworks and they all share two attributes: (1) they look embarrassingly bad compared to Electron or native apps, and (2) they are mostly terrible to program for.

I feel like I'm never wasting my time when I learn how to do things with the web platform, because it turns out the app I made for desktop and tablet also works on my VR headset. Sure, if you pay me 2x the market rate and it's a sure thing, you might interest me in learning Swift and how to write iOS apps, but I'm not going to do it for a personal project, or even a moneymaking project where I'm taking some financial risk. No way. The price of learning how to write apps for Android is that I also have to learn how to write apps for iOS, apps for Windows, apps for macOS, and then decide what's the least-bad widget set for Linux and learn to program for it too.

Every time I do a shoot-out of Electron alternatives Electron wins and it is not even close -- the only real competitor is a plain ordinary web application with or without PWA features.


> Every time I do a shoot-out of Electron alternatives Electron wins and it is not even close

Only if you're ok with giving your users a badly performing application. If you actually care about the user experience, then Electron loses and it's not even close.


Name something specific. Note for two x-platform UI toolkits I have some familiarity with:

Python + tkinter == about the same size as Electron

Java + JavaFX == about the same size as Electron

Sure, there are still people who write little 20 kB Win32 applets for software developers, but that is really out of the mainstream.


Many times this. The native path is the path of infinite churn, ALL the time. With web you might find some framework bro who takes pride in knowing all the intricacies of React hooks and who'll grill you for not dreaming in React/Vue/framework-of-the-day, but fundamental web skills (JS/HTML/CSS) are universal. And you can apply them on pretty much any platform:

- iOS? React Native, Ionic, Web app via Safari

- Android? Same thing

- Mac, Windows, Linux? Tauri, Electron, or serve it yourself

Native? Oh boy, here we fucking go: you've spent the last decade honing your Android skills? Too bad, son, time to learn Android jerkpad. XML, styles, Java? What's that, gramps? You didn't hear that everything is Kotlin now? Dagger? That's so 2025, it's Hilt/Metro/Koin now. Oh wow, you learned Compose on Android? Man, was your brain frozen for 50 years? It's KMM now, oh wait, KMM is rebranded! It's KMP now! Haha, you think you know Compost? We're going to release a half-baked Compost multiplatform now, which is kinda the same, but not quite. Shitty toolchain and performance worse than Electron? Can't fucking hear you over the jet engine sounds of my laptop exhaust, get on my level, boy!


Qt does exist. It's not difficult.


Qt costs serious money if you go commercial. That might not matter for a hobby project, but it lowers the enthusiasm for using the stack, since the big players won't use it unless other considerations compel them.


Depends on the modules and features you use, or where you're deploying, otherwise it's free if you can adhere to the LGPL. Just make it so users can drop in their own Qt libs.


Qt only costs money if you want access to their custom tooling or insist on static linking. We're comparing to Electron here. Why do you need to statically link? And why can't you write QML in your text editor of choice and get on with life?


Some widgets and modules, like Qt Charts (or Graphs, I forget), are dual GPL and commercially licensed, so it's a bit more complicated than that. You also need a commercial license for automotive and embedded deployments.

Right, but it's a perfectly functional (even remarkably feature-complete) UI toolkit without the copyleft add-ons.

> You also need a commercial license for automotive and embedded deployments.

How does that work? The LGPL (really any OSI license) isn't compatible with additional usage restrictions.


You generally can't adhere to the LGPL in automotive or embedded deployments: the user can't link their own Qt libs in their auto/embedded device.

Slint has a similar license.


> You generally can't adhere to the LGPL in automotive

"Can't" or "won't"? The UI process is not usually the part that needs certification.

> Slint has a similar license

Indeed, but Slint's open source license is the GPL, not the LGPL. And its more permissive license is made for desktop apps and explicitly forbids embedded (and thus automotive) use.


I'm guessing some parts of code are needed to make it run on those platforms and aren't LGPL.

I'm sure Microsoft and Slack have sufficient funds for a commercial Qt license.


...which is the same as Flutter. Neither uses the native UI toolkits (though Qt doesn't use Skia, I'll give you that, and Flutter has the Impeller engine in the works). And Qt has a much worse developer experience and costs money.


Qt costs money if you for some reason insist on static linking AND use all the fancy components; the core stuff is all LGPL.

Anyway, it does look native and it is way faster than Electron, which doesn't look native, so I don't understand why it's a problem for Qt but not for Electron.


Not sure about that: SSDs have historically followed base-2 sizes (think of it as a legacy of their memory-based origins). What does happen with SSDs is that over-provisioned models hide a few percent of their total size, so instead of a 128GB SSD you get a 120GB one, with 8GB "hidden" from you that the SSD uses for wear leveling and garbage collection, to keep it performing nicely for a longer period of time.


Sounds like an urban legend. How likely is it that the optimal amount of over-provisioning just so happens to match the gap between power-of-ten and power-of-two size conventions?


It doesn't; there's no single optimal amount of over-provisioning. And that would make no sense: you'd have 28% over-provisioning for a 100/128GB drive, vs. 2.4% for a 500/512GB drive, vs. 2.4% for a 1000/1024GB drive.
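Spelling that arithmetic out (back-of-the-envelope, treating both figures as base-10 GB and expressing the hidden share relative to the marketed capacity):

```python
# Hypothetical power-of-two raw sizes behind power-of-ten marketed sizes:
# over-provisioning = (raw - marketed) / marketed
for marketed, raw in [(100, 128), (500, 512), (1000, 1024)]:
    pct = (raw - marketed) / marketed * 100
    print(f"{marketed}GB marketed / {raw}GB raw -> {pct:.1f}% over-provisioning")
# -> 28.0%, 2.4%, 2.4%: nowhere near a single "optimal" ratio
```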

It's easy to find drives that are marketed as 500GB and have 500x10^9 bytes [0]. But all the NVMe drives I can find that are marketed as 512GB have 512x10^9 bytes [1], neither 500x10^9 bytes nor 2^39 bytes. I cannot find any that are labeled "1TB" and actually have 1 tebibyte. Even "960GB" enterprise SSDs are measured in base-10 gigabytes [2].

0: https://download.semiconductor.samsung.com/resources/data-sh...

1: https://download.semiconductor.samsung.com/resources/data-sh...

2: https://image.semiconductor.samsung.com/resources/data-sheet...

(Why are these all Samsung? Because I couldn't find any other datasheets that explicitly call out how they define a GB/TB)


It doesn't, but it's convenient.


More recently you'd have, say, a 512GB SSD with 512GiB of flash, so for usable space they're using the same base-10 units as hard disks. And yes, the difference between the units happens to be enough over-provisioning for adequate performance.
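The spare area you get from that unit mismatch alone is easy to quantify, e.g. for the 512GB/512GiB case:

```python
marketed = 512 * 10**9   # 512 GB usable, base-10 (the hard-disk convention)
raw      = 512 * 2**30   # 512 GiB of physical flash

spare_pct = (raw - marketed) / marketed * 100
print(f"{spare_pct:.1f}% spare")  # 7.4% spare from the GiB/GB gap alone
```

The ratio is the same at every capacity (2^30 / 10^9), which is part of why it's a convenient convention.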


It's at the end, in the "What are the standards units?" section.


So it does. I guess I skimmed a little too hard.


A little bit off topic, but I couldn't even start to read the article because I'd "reached my article limit" on a site I'd never visited before... What are they using to determine how many articles I've read?

Opening it in a private window solved the issue, but I'm pretty sure I don't regularly read anything on this site (maybe "never" was an overstatement?).


Seems totally possible that the limit is 0...


Yes, the thought crossed my mind too... But then I tried a private window and it opened, so maybe the other suggestion that the cookies are very long lived is right.


I clear my cookies after every session online and had the same problem, so maybe the limit really is 0 and the dev never considered people using private windows? Like how you could bypass the NYT paywall by disabling JavaScript?


Nowadays platforms seem to track IP addresses and other signals to enforce limits. Using a VPN works.


Exact same experience here.


Shared public IP?


Maybe their cookies are very long-lived and you visited this site 6 months ago?


In the ~30 years I've used computers, they've become ~1,000,000 times faster. My daily experience with computers doesn't show it. Someone out there took the time to measure UI latency and showed that not only isn't it faster, it has actually slowed down. And yet, our hardware is 1,000,000 times faster...

Edit: this is the latency project I was thinking about https://danluu.com/input-lag/
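For what it's worth, the millionfold figure checks out against the classic doubling-every-18-months estimate (the 1.5-year cadence is the assumption):

```python
years = 30
doubling_period = 1.5  # years; the classic Moore's-law cadence (assumed)

speedup = 2 ** (years / doubling_period)  # 20 doublings
print(f"~{speedup:,.0f}x")  # ~1,048,576x
```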


What a beautiful table: it seems sorted both by time and by latency, with the exception of some systems that were ahead of their time in slowness.

If you put a bit of load on modern hardware, things get dramatically worse. As if there were some excuse for it.

I had this thought long ago that the picture on the monitor could be stitched together from many different sources. You would have a box somewhere on the screen with an application or widget rendered in it by physically isolated hardware. An input area needs a font and a color scheme. The text would still be sent to the central processor and/or to the active process, but it could be sent to the display stitcher simultaneously.

You could even type a text and have it encrypted without the rest of the computer learning what the words say.

I looked at and clicked around KolibriOS one time; everything was so responsive it made me slightly angry.


The abstract of the OP's link mentions "Processing-Using-DRAM (PUD)" as exactly that, using off-the-shelf components. I do wonder how they achieve it; I guess by driving the controller in non-standard ways that still get the job (processing data in memory) done.

Edit: Oh and cpldcpu linked the ComputeDRAM paper that explains how to do it with off the shelf parts.


The lack of standards falls on the acting party. I ran a quick search and found that SWGDE best practice guides and documents do consider the presence of malware on digital evidence sources in many different scenarios [1]. Having an "expert" who is unaware of these guides is another story.

[1] https://www.swgde.org/?swp_form%5Bform_id%5D=1&swps=malware


Do you have anything specific you're pointing to in those search results? Reading the excerpts, all but two are talking about malware on the analysis machine.

2012-09-13 SWGDE Model SOP for Computer Forensics V3-0 merely says to "Detect malware programs or artifacts".

2020-09-17 SWGDE Best Practices for Mobile Device Forensic Analysis_v1.0 seemed the most in depth, and it merely states:

> 9.4. Malware Detection Malicious software may exist on a mobile device which can be designed to obtain user credentials and information, promote advertisements and phishing links, remote access, collect ransom, and solicit unwanted network traffic. Forensic tools are not always equipped with antivirus and anti-malware to automatically detect malicious applets on a device. If the tools do have such capability, they do not typically run against an extraction without examiner interaction. If the examiner’s tools do not have antivirus/anti-malware capability, the examiner may need to manually detect malware through the use of common anti-virus software applications as well as signature, specification and behavioral-based analysis.


No, I just went to check whether the topic is mentioned in the guidelines (which it is, multiple times). I'd then expect a (good) expert to pick up on those breadcrumbs and research how to do that (if they don't have the skills already). If I were working on a computer, I'd try to find IOCs that point to an infection (or the lack of evidence for one).

If there's a memory dump to work on, a more in-depth analysis of running processes can be done with Volatility, but it usually comes down to the expert having good skills at that kind of search (malfind tends to produce a lot of false positives).

But at least the guides gave a baseline/starting point that seems to be better than what was described. It's very difficult to prove a negative, so I'd also be careful with the wording, eg: "evidence of a malware infection was not found with these methods" instead of "there's no malware here".


What I quoted perfectly describes what they did. Ran one off the shelf antivirus scan and then considered the concern addressed.

It's obviously impossible to disprove a system had malware on it, but that fact itself should be part of any expert testimony. Especially testimony for the defense in a criminal trial.


Finding evidence of a sophisticated attack is quite difficult. Most "IOCs" are not actually very effective in such a case.


That's interesting. A project at work is affected by Windows' slow open() calls (relative to Linux/Mac), but we haven't found a better solution than "avoid open() as much as you can".


It's likely Windows Defender, which blocks on read I/O to scan files. Verify by adding a folder to its exclusion list, though this isn't helpful if it's a desktop app. The difference is most noticeable when you're reading many small files.
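One way to check whether it's per-file scan overhead: time open()+read over a batch of small files, once in a normal folder and once in a folder on Defender's exclusion list. A quick probe (the file count and folder choice are arbitrary; results on freshly written files may be flattered by the OS cache):

```python
import os
import tempfile
import time

def time_opens(directory, n=200):
    """Create n tiny files in directory, then return the average
    open+read time per file, in seconds."""
    paths = []
    for i in range(n):
        p = os.path.join(directory, f"probe_{i}.txt")
        with open(p, "w") as f:
            f.write("x")
        paths.append(p)
    start = time.perf_counter()
    for p in paths:
        with open(p, "rb") as f:
            f.read(1)
    return (time.perf_counter() - start) / n

with tempfile.TemporaryDirectory() as d:
    print(f"avg open+read: {time_opens(d) * 1e6:.0f} us per file")
```

Point `time_opens` at an excluded vs. a non-excluded path on Windows to compare; a large gap implicates the on-access scanner.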


As of now, 9%. I thought hitting the HN front page would have a much larger impact on this, but it seems that's about it this time.


It isn't a great sign when a tool as ubiquitous in computer vision as OpenCV isn't reaching its relatively meager $500K goal.

I've chipped in, not for the future V5, but to recognize that OpenCV is a tool I've been using for years in various small personal projects.

