My 2 cents: GUIs are good for exploring new software, while TUIs are wonderful once you already have a mental map of what you're doing. So for software used every day, I would definitely hope that more TUIs were used.
What a coincidence, I've just read this paper while preparing my PhD proposal. I feel that the difficulties reported by the novice users were related to the peculiarities of the mainframe interfaces + the 3270 emulator, not to the fact that they were using a TUI as such.
Are you taking on new customers? I know a few folks hungry for old-fashioned, on-premises accounting and task tracking now that Intuit is pushing everyone to cloud subscriptions.
Ideally it would be a perpetual license so we can never have the rug pulled on business-critical data, but I like the "x years of updates and support" model.
You can contact me at my username + gmail if you wouldn't mind discussing further
I'm looking for something that you can embed in your own application. LaTeX would be great, but it's not really nice to have WEB code in your C application. Its license is also a bit troublesome.
It embeds almost anywhere, including via client-side WASM, and someone even made a nice TypeScript lib [0]. If you dislike `typst`, it even has a package that transpiles LaTeX strings into native typst, which somehow doesn't seem to make `typst` any less fast [1]. WASM plugin magic will do that!
The curious consequence is that the fastest and most portable way to render lightweight LaTeX code might actually be... to transpile LaTeX to embedded `typst`? Sure, sure, not all of LaTeX will map. But from an 80/20 mindset it might just be enough.
For my use case (recreating the database in memory from scratch) it basically boils down to three points: (1) journal_mode = off, (2) wrapping all inserts in a single transaction, (3) creating indexes after the inserts.
For whatever it's worth, I'm getting 15M inserts per minute on average, and topping out around 450k/s for a trivial relationship table, on a stock Ryzen 5900X using the built-in sqlite in NodeJS.
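For anyone curious what the three points look like in practice, here is a minimal sketch using Node's built-in sqlite module (node:sqlite, experimental, available since roughly v22.5); the table, columns, and data are invented for illustration:

```typescript
// Minimal sketch of the three points above, using Node's built-in sqlite
// module (node:sqlite, experimental). Table, columns, and data are made up.
import { DatabaseSync } from "node:sqlite";

const db = new DatabaseSync("import.db");

// (1) no rollback journal -- acceptable here because the DB is rebuilt
// from scratch and can simply be recreated if anything goes wrong
db.exec("PRAGMA journal_mode = OFF;");

db.exec("CREATE TABLE item (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);");

const rows: Array<[number, number | null, string]> = [
  [1, null, "root"],
  [2, 1, "child"],
];

// (2) one explicit transaction instead of an implicit one per INSERT
const insert = db.prepare("INSERT INTO item (id, parent_id, name) VALUES (?, ?, ?)");
db.exec("BEGIN;");
for (const [id, parentId, name] of rows) {
  insert.run(id, parentId, name);
}
db.exec("COMMIT;");

// (3) build indexes only after the bulk insert is done
db.exec("CREATE INDEX idx_item_parent ON item (parent_id);");
```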
Would it be useful for you to have a SQL database that’s like SQLite (single file but not actually compatible with the SQLite file format) but can do 100M/s instead?
I have been looking for a replacement for SQLite for years -- admittedly not very actively; embedded databases are just a hobby obsession and life hasn't allowed me much leisure time in recent years -- and I still couldn't find one.
The Rust-written `sled` database is more of a key-value store; I had partial success with it, but it's too much work to turn a KV store into a relational database.
I have a PoC KV store (in Rust) that does > 150M writes/s on a single core (for 8 bytes of data per insert -- it gets bandwidth-limited on disk quite quickly, even on the latest NVMe PCIe 5 disks). The plan is to have it support application-level RAID0 out of the box so that you could spread the data across multiple disks, but of course that's something you have to set up up-front when you create the DB.
I would then add a SQL engine on top - not sure how much SQL would slow things down, but hopefully not much. I haven't found anyone who's interested in anything like that, though.
And yes I realize this is several orders of magnitude more performance than any other DB out there.
Similar to @zeroq, I don't really need a K/V store. I need a full relational database that I can also use for analytics and time series, and I want it embedded. And as strict as possible -- PostgreSQL does this really well.
I don't mind that there are databases that can be forced into all that and work alright, but I admit I am not willing to put in the plumbing work if I can avoid it. If I can't avoid it at some point then, well, we'll cross that bridge when we get to it.
DuckDB and ClickHouse(-Local) are amazing candidates but I have never evaluated their normal OLTP performance. For now.
(EDIT: let that not stop you however. Please publish it and announce your thing on HN as well.)
A KV store doesn't cover my use case. After the import I'm running several queries to remove unwanted data. I briefly contemplated filtering while importing, but (a) I couldn't really handle relationships - hence two distinct steps: remove, then remove orphans; (b) even if I could, the code is much cleaner with one simple function to import a table and then clean one-liners to remove the weeds; and (c) I need SQL queries later on anyway.
Honestly I don't see much use for yet-another-sqlite.
The premise of having 100M/s writes instead of 500k/s sounds a bit unrealistic, but at the same time, while simply importing tuples and completely ignoring stuff like foreign keys, I'm only utilizing one core. I had an experiment on my todo list to run these imports in parallel into different databases and then merge them somehow, but I ran out of time. Again, a 10 GB sqlite database is quite large.
On the other hand, I think the adoption and the fact that you can take your db basically anywhere and it will run right out of the box is something you can't ignore. I was briefly looking at pglite but I don't really see the benefits apart from a niche use case where you really need compatibility with its big brother.
And then sqlite has so many hidden gems, like the scenario where you can use a sqlite file hosted over HTTP like a remote database! I can put my 10 GB database on S3, run count(*) on the main table, and it will only take something like 40 KB of bandwidth.
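The mechanism behind this trick is plain HTTP Range requests: a SQLite file is a sequence of fixed-size pages, so a client can pull only the bytes a query touches. Tools such as sql.js-httpvfs wire this into a SQLite VFS; the sketch below only shows the raw idea, with a placeholder URL:

```typescript
// Illustration of the mechanism only: fetch just the byte ranges you need.
// Real tools (e.g. sql.js-httpvfs) plug this into a SQLite VFS; the URL
// below is a placeholder.
const url = "https://example-bucket.s3.amazonaws.com/my.db";

async function fetchRange(start: number, length: number): Promise<Uint8Array> {
  const res = await fetch(url, {
    headers: { Range: `bytes=${start}-${start + length - 1}` },
  });
  return new Uint8Array(await res.arrayBuffer());
}

async function main(): Promise<void> {
  // The first 100 bytes are the SQLite header; bytes 16-17 hold the page
  // size as a big-endian 16-bit value (the value 1 means 65536).
  const header = await fetchRange(0, 100);
  const raw = (header[16] << 8) | header[17];
  const pageSize = raw === 1 ? 65536 : raw;

  // Fetch a single page (pages are numbered from 1). A real VFS fetches
  // pages like this lazily, only as the query engine asks for them.
  const page2 = await fetchRange(pageSize, pageSize);
  console.log(`page size: ${pageSize}, fetched ${page2.length} bytes`);
}

main();
```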
> Honestly I don't see much use for yet-another-sqlite.
Agreed. I want something better than SQLite, something that learns from it and upgrades it further.
> On the other hand, I think the adoption and the fact that you can take your db basically anywhere and it will run right out of the box is something you can't ignore.
Absolutely. That's why I am soon finishing my Elixir -> Rust -> SQLite library since I happen to believe most apps don't even need a dedicated DB server.
> I was briefly looking at pglite but I don't really see the benefits apart from a niche use case where you really need compatibility with its big brother.
I would probably easily pay 1000 EUR next month if I could have SQLite with PostgreSQL's strict schema. That's the one weakness of SQLite that I hate with a passion. I know about strict mode. I am using it. Still not good enough. I want "type affinity" gone forever. It's obviously a legacy feature, but many people have come to rely on it.
Hence I concluded that SQLite will never change and something newer will have to arrive at some point. Though how do you beat the (likely) millions of tests that SQLite has? You don't... but we have to start somewhere.
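To make the type-affinity complaint concrete, here is a small demo (node:sqlite again, experimental, assuming it ships an SQLite new enough for STRICT tables, i.e. >= 3.37; table names are invented): an ordinary INTEGER column happily stores arbitrary text, a STRICT table rejects it, but STRICT still silently coerces numeric-looking strings, which is the "still not good enough" part:

```typescript
import { DatabaseSync } from "node:sqlite";

const db = new DatabaseSync(":memory:");

// Ordinary table: "INTEGER" is only an affinity, so arbitrary text slips in.
db.exec("CREATE TABLE loose (n INTEGER);");
db.prepare("INSERT INTO loose (n) VALUES (?)").run("definitely not a number");
console.log(db.prepare("SELECT n, typeof(n) AS t FROM loose").get());
// -> { n: 'definitely not a number', t: 'text' }

// STRICT table: non-numeric text is rejected...
db.exec("CREATE TABLE strict_t (n INTEGER) STRICT;");
try {
  db.prepare("INSERT INTO strict_t (n) VALUES (?)").run("definitely not a number");
} catch (err) {
  console.log("rejected:", (err as Error).message);
}

// ...but numeric-looking text is still silently coerced into the column type.
db.prepare("INSERT INTO strict_t (n) VALUES (?)").run("42");
console.log(db.prepare("SELECT n, typeof(n) AS t FROM strict_t").get());
// -> { n: 42, t: 'integer' }
```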
> And then sqlite has so many hidden gems, like the scenario where you can use a sqlite file hosted over HTTP like a remote database! I can put my 10 GB database on S3, run count(*) on the main table, and it will only take something like 40 KB of bandwidth.
Admittedly I never saw the value in that; to me it just seems like having a remote database again, at which point why not just go for PostgreSQL, which is stricter and has far fewer surprises. But that's my bias towards strictness and catching bugs at the door rather than 10 km down the road.
I hear you. As someone who lost the rest of his hair over the last 10 years talking to frontend kids claiming that types are for grannies - I'm on your side.
That having been said, sqlite has its niche applications, and you can enforce types at the app layer, the same way you do with web apps. Trust, but verify. Better - don't trust at all. At the end of the day, the way I see it, it's just like protobufs et al. - you put some data into a black-box stream of bytes and it's your responsibility to ensure correctness on both ends.
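As a sketch of what "verify at the boundary" can look like in application code, here is a tiny TypeScript validator for rows coming back from the driver; the field names are invented for the example:

```typescript
// Narrow an untyped row from the DB driver before the rest of the app
// touches it. Field names are hypothetical.
interface Product {
  id: number;
  name: string;
  price: number;
}

function toProduct(row: Record<string, unknown>): Product {
  const { id, name, price } = row;
  if (typeof id !== "number" || !Number.isInteger(id)) {
    throw new TypeError(`bad id: ${String(id)}`);
  }
  if (typeof name !== "string") {
    throw new TypeError(`bad name: ${String(name)}`);
  }
  if (typeof price !== "number" || !Number.isFinite(price)) {
    throw new TypeError(`bad price: ${String(price)}`);
  }
  return { id, name, price };
}

// Usage with any driver that returns plain objects, e.g.:
//   const products = stmt.all().map(toProduct);
```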
@serverless
It's twofold. On one hand you have the ability to move faster. On the other you have fewer moving parts that need maintenance and can break. Plus, for me personally, it's the default mindset. Let me give you an example - in 2025, most online shops still have filters that trigger a full reload of the web page.
When I'm clicking through TVs in a shop I don't need to reload the webpage every time I click on something; the app could easily fetch the whole stock as a single JSON payload and filter results on the fly while I'm fiddling with filters. Sure, it doesn't work for Amazon, but it works for 95% of online shops. Yet no one does it. Why?
My point - I'm looking for a way to simplify processes, and for some niche applications it's just more convenient.
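To make the shop example concrete, here is a sketch of the "ship the catalogue once, filter locally" idea; the endpoint and fields are made up, and it only makes sense while the catalogue is small enough to send as one JSON payload:

```typescript
// Hypothetical catalogue shape and endpoint.
interface Tv {
  brand: string;
  sizeInches: number;
  priceEur: number;
}

// Fetched once when the category page loads (top-level await, ESM).
const catalogue: Tv[] = await (await fetch("/api/tvs.json")).json();

// Every filter change is then just an in-memory pass -- no page reload,
// no server round trip.
function applyFilters(maxPrice: number, minSize: number): Tv[] {
  return catalogue.filter(
    (tv) => tv.priceEur <= maxPrice && tv.sizeInches >= minSize,
  );
}

console.log(applyFilters(800, 55));
```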
RE: serverless, I think I understand the use case and I get the homogeneity argument; it's just that to me SQLite mostly wins for being embedded. The rest of its traits are a nice bonus, but things like Litestream I don't view as super important. And I would change a good amount of its internals if it were up to me. But! It helps you, you are using it, you are happy with it -- cool!
RE: stricter types, oh, I am adding a ton of code to verify types, absolutely. My upcoming FFI library (Elixir -> Rust -> SQLite) will have a lot of "raw" FFI bridges to SQLite with some sensible default checks, but ultimately I'd leave it to the users (programmers) of the library to e.g. make sure that the field `price` in each result row is in fact a `REAL` (float / double) value. That's going to be the next game though; the "raw" FFI layer will just make sure nothing ever crashes / panics (as much as I can guarantee; obviously I can't stop the OS killing the process, or the disk or memory running out) and will return errors that are as detailed and as machine-readable as they can be (a major selling point, at least for me, once I start dogfooding it). Just today I started working on interruptible SQLite operations (via its progress handler mechanism); it's almost done and I'll release it via a PR. That will also make the library near-real-time friendly (I am aiming at 1-10 ms pauses between each check-in at most, even if you are fetching a million records). Etc.
So yeah, no trust and a lot of verification indeed. But I would still like to have some more safety around executing raw SQL (not talking injection here) where e.g. you are never allowed to insert a string into an integer column.
It's hard to complain though. SQLite is one of the very, very best pieces of software ever made. If the price of using it is writing some more conservative validation code at the edges, that's still a fantastic deal and I am happy to do it.
I tested a couple of different approaches, including pglite, but Node finally shipped native sqlite with version 23 and it's fine for me.
I'm a huge fan of serverless solutions, and one of the absolute hidden gems of sqlite is that you can publish the database on an HTTP server and query it extremely efficiently from a client.
I even have a separate miniature benchmark project I thought I might publish, but then I decided it's not worth anyone's time. x]
It's worth noting that the data in that benchmark is tiny (28 MB). While this varies between database engines, "one transaction for everything" means keeping some kind of allocations alive for the duration of the transaction.
The optimal transaction size is difficult to calculate, so it should be measured, but it's almost certainly never beneficial to spend multiple seconds on a single transaction.
There will also be weird performance changes when the size of data (or indexed data) exceeds the size of main memory.
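One way to act on the "measure it" advice is to commit in fixed-size batches rather than one giant transaction, and then tune the batch size against your own data; a sketch using the same hypothetical node:sqlite setup as earlier in the thread:

```typescript
// Commit in batches of BATCH rows; tune BATCH by measuring, not guessing.
import { DatabaseSync } from "node:sqlite";

const db = new DatabaseSync("import.db");
db.exec("CREATE TABLE IF NOT EXISTS item (id INTEGER PRIMARY KEY, name TEXT);");

const BATCH = 50_000; // a starting point only
const insert = db.prepare("INSERT INTO item (id, name) VALUES (?, ?)");

function importRows(rows: Array<[number, string]>): void {
  for (let i = 0; i < rows.length; i += BATCH) {
    db.exec("BEGIN;");
    for (const [id, name] of rows.slice(i, i + BATCH)) {
      insert.run(id, name);
    }
    db.exec("COMMIT;");
  }
}
```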
Hilarious, 3000+ votes for a Stack Overflow question that's not a question. But it is an interesting article. Interesting enough that it gets to break all the rules, I guess?
I'm working on a distributed ERP system. The goal is a native UI on Android, iOS, macOS, web, Windows, Linux, and curses, with crazy fast response times: no user operation takes longer than 100 ms.
I'm working with SBOMs; one fun side effect is that you can scan SBOMs for vulnerabilities. Suddenly hackers, your customers, and your competitors start doing this, and you need to make sure your third-party dependencies are updated.
This reveals the cost of dependencies (a cost that is often ignored).
I hope that in the future we will have a more nuanced discussion about when it's okay to add a dependency and when you should write things from scratch.
I also switch between a lot of computers (work computer at home / work computer at work) but have to develop on the "big powerful machine at work". My current solution is tmux + nvim and it works really well. I can just pick up a session from whatever computer I'm in front of at the moment.
Am I correct that neither Zed nor VS Code supports this use case yet?
I use VSCode + SSH remote for this and it works great. The only nitpick I have is needing to manually reconnect when I suspend my laptop and the ssh connection breaks. It's a separate session though, which doesn't matter to me but may be a deal breaker for you.
I use Tailscale for a personal VPN, so the beefy workstation is always securely available from my laptop, even from across the pond.
There's no input delay in VSCode (editor, UI) because the UI is local. Delay in saving/reading/searching files is not noticeable for me.
(Edit to explain: VSCode is still running locally, but it also installs a server-side (headless) component on the remote machine. That way editing files is local/fast, but stuff like running the code, search/replace, etc. also works fast because it's handled by the server side.)
The terminal (incl. the VSCode terminal) feels slightly sluggish; it's noticeable if the server is in another country and uncomfortable if it's across the pond.
The input delay is very dependent on where the server is and what it is doing. If the server is idle and close by (ping-wise), the delay is virtually indistinguishable from local VSCode. If I'm connecting to the server via a VPN in a different country while stressing all the cores with some background compiling or number-crunching work, the input delay gets quite noticeable.
I multiplex my ssh connections, so the workflow is just ssh again, then reload the VSCode window. If Mosh could multiplex these (and paper over the connection problems) that'd be great, but after a cursory look it doesn't seem possible.
It's a minor thing tho.
In general, I quite like Mosh! If I routinely had to work on faraway servers I'd use mosh just for its smart local echo.
Not persistent sessions, but VS Code can run the GUI locally and connect to a remote server. When you reconnect it opens all your tabs, workspace settings etc.
I strongly disagree. You should always keep the code as simple as possible and only add abstractions once you really need them.
Too many times I've found huge applications that turn out to be mostly scaffolding and fancy abstractions without any business logic.
My biggest achievement is to delete code.
1. I've successfully removed 97% of all code while adding new features.
2. I replaced 500 lines with 17 lines (suddenly you could fit it on a screen and understand what it did).
Personally I don't see the difference between this and submodules. Repo stores the information in XML files, vdm stores it in YAML files, and git submodules store it in the git database. I don't really care.
The real headache for me is the tension between traceability and ease of use. You need to pin your dependencies to a sha1 to have traceable, SLSA-compliant builds, but that also means you need to update all super-repos once a submodule is updated. Gerrit has support for this, but it's not atomic, and what about CI? What about CI that fails?
I care about the aesthetics and the convenience that the tool provides. git-repo at least has a simple command to get all the latest stuff (repo sync). Git submodules are a mess in this regard. Just look at this Stack Overflow thread:
People are confused about how to run THE most basic command, one you'd have to run every single day in a multi-repo environment. There's debate in the comments about which flags you should actually use. No thanks.
There's a lot of room for improvement in this space. git-repo isn't widely used outside of AOSP. Lots of organizations are struggling to find proper tooling for this type of setup.
Also, the discussions are there because it's been more than a decade and the options have evolved over time.
Submodules are a bit clunky, but the problem they solve is itself clunky. Bringing in another tool doesn't really feel like it's going to reduce the burden.
I have yet to be in a situation where I blindly want to update all submodules. It is a conscious action: X has updated and I want to bring those changes in.
cd submodule, update, test, commit.
I haven't seen anything in this thread that really motivates me to learn another bespoke tool just for this. I'm sure it varies for different projects though.
Fast-forward 15 years, see how the tooling in this thread has evolved and how many different tools people will have used, and compare that to the Stack Overflow post. I'm more inclined to invest time in git itself.
This is fine until you're working with hundreds of other developers. I believe the reason solutions like this exist is to abstract git away from most devs, because (in my experience) many enterprise devs have only rudimentary git knowledge.
Sure, the devs should "just learn git" - but the same argument applies to a lot of other tech nowadays. Ultimately most folks seem to want to close their ticket off and move to the next one.
Git submodules and git subtrees generally do not fit my org's needs - we have internal tooling similar to this. Happy to expand on that if you have questions.
The risk with that approach is that every one of the hundreds of other developers will bring their own tool for X. So now you have hundreds of tools and everyone only knows a subset.
If there is a common operation that people get wrong, or don't use often enough but still need to run regularly, a five-line bash script will not only do the job, it will actively help them learn the tool they are using.
I'm not part of the project at all, but this is the only offline code review system I've found.