This feels like a case of guessing at something you could know. There are two types of allocations, each with a size and a free method. The free method is polymorphic over the allocation's type. Instead of using a tag to know with certainty which type an object is, you guess based on some other factor, in this case a size invariant which was violated. It also doesn't seem like this invariant was ever codified; otherwise the first time a large alloc was modified to a standard size it would've blown up. It's worth asking whether your distinguishing factor is the best test you can use, or whether there's a better one. Maybe in this case a tag would've been too expensive.
Do you envision the development tracking Clojure as closely as possible, similar to how cljs was conceived to be Clojure in JS and not just Clojure-ish JS, or do you think you'll eventually diverge? I made a language a while ago that was like 90% Clojure but hesitated to call it that because I didn't want there to be an expectation that the same code would run as-is in both languages. Seems like from the landing page you're going for more of a drop-in replacement. Looks cool, good luck!
jank is Clojure and will track upstream Clojure development. I'm working closely with the Clojure team and other dialect devs to ensure this remains the case. I'm leading a cross-dialect clojure-test-suite (https://github.com/jank-lang/clojure-test-suite) to help ensure parity across all dialects. We have support or ongoing work for Clojure JVM, ClojureScript, Clojure CLR, babashka, Basilisp, and jank.
With that said, jank will do some trailblazing down other paths (see my other comments here about Carp), but those will be optional, jank-specific modes that people can enable. Clojure compatibility will remain constant.
Most people want their test suite to pass. If they upgrade Java and Mockito prints out a message that they need to enable '--some-flag' while running tests, they're just going to add that flag to Surefire in their pom. Seems like quite a small speedbump.
I understand the desire to fix user pain points. There are plenty to choose from. I think the problem is that most of the UI changes don't seem to fix any particular issue I have. They are just different, and when some changes do create even more problems, there's never any configuration to disable them. You're trying to create a perfect, coherent system for everyone while withholding the ability to configure it to our liking. He even mentioned how unpopular making things configurable is in the UI community.
A perfect pain point example was mentioned in the video: text selection on mobile is trash. But each app seems to have a different solution, even apps from the same developer. Google Messages doesn't let you select anything smaller than an entire message. Some other apps have opted into a 'smart' text select which, when you select text, guesses and groups adjacent words seemingly at random. And lastly, some apps will only ever select a single word when you double-tap, which seemed to be the standard on mobile for a long time. All of this is inconsistent, and often I'll want to do something like look up a word and realize I can't select the word at all (Google Messages), or the system 'smartly' selected 4 words instead, or that it did what I want and actually just picked one word. Each application designer decided they wanted to make their own change and made the whole system fragmented and worse overall.
> He even mentioned how unpopular making things configurable is in the UI community.
The inability to imagine that someone might have a different idea about what's useful is a general plague of the UI/UX industry. And there seems to be zero care given to users who have to use the app for longer than 30 seconds a day. The productivity-vs-learning-time curve is basically flat, and low, with the exception being pretty much "the tools made by X for X", like programming IDEs.
Back in the 90s, you had a setting for everything! It was glorious. This trend of deliberately not making things configurable is the worst, and we can’t seem to escape it because artists are in charge of the UI rather than human interaction professionals.
App designers need to understand that their opinions on how the app should look and work are just that: opinions. Opinions they should keep to themselves.
It does make quality assurance an absolute nightmare; I would know, our application is like this to the nth degree. Config on top of config on top of setting on top of options.
But if you also want your product to be productive for a wide array of use cases, it's necessary. You need to think about your market.
Which is why you should think about how these options interact and compose at the start, as opposed to only adding options in an ad hoc manner (whether you do it willy-nilly or only when your arm is really twisted).
"You mean we shouldn't use 10 layers of abstraction and 274 libraries to achieve our goal ? I mean, we use a lot of resuources, but look how polished the UI is: everything is flat. "
Thank god RAM prices have risen. Maybe some people will start to program with their heads instead of their (AI) IDE.
I rarely need to configure something on my PCs, but rarely is not never, and when I do really need an option, it better be there. There's a gradient between unmaintainable multidimensional matrices of options and "one size ought to fit everyone" and both ends of it make the user miserable.
I also think that when it comes to config, people really underestimate its power.
On desktop, I often see people waste inordinate amounts of time on workflows that don't suit their use case. Little do they know - there's a config for that!
For example, I'll see people holding Outlook like it's radioactive. They'll do the same busywork of manually pruning their inbox, sorting stuff, and deleting stuff. The config can really help them there, but I think they either don't know its capabilities or are scared of it.
Most people also don't care about the mothers of programmers. Until, you know, they have to send an SMS using one particular SIM of the two in the phone, and the 20-year-old app won't let them.
> that it did what I want and actually just picked one word. Each application designer decided they wanted to make their own change and made the whole system fragmented and worse overall.
This is the trouble. It's been decades of the OS becoming less and less relevant. Apps have more power, more will to build their own thing.
And there's less and less personal computing left. There are the design challenges, the UX being totally different. But the OS used to be a common substrate that the user could use to do things. And the OS has just vanished, vanished, vanished, receded into the sea. Leaving these apps to totally dominate the experience, apps that are so often little more than thin clients to some far-off cloud system, to basically some corporation's mainframe.
The OS's relevance keeps shrinking, and it's awful for users. Why bother making new UX for the desktop, if the capabilities budget is still entirely on the side of the app? What actually needs to change isn't the UX of the desktop or other OS paradigms (mobile); it's a fundamental shift toward taking power out of the mainframe and having a personal computer that's worth a damn, one that again has more than a quantum of capability imbued in it that it can deliver to the user.
(My actual hope is that someday the web can do some of this, because apps have nearly always been a horrible thing for users, giving them no agency, no control, pre-baked to be only what is delivered to the user.)
Text selection used to be frustrating on mobile for me too until Google fixed it with OCR. I get to just hold a button briefly and then can immediately select an area of the screen to scan text from, with a consistent UX. Like a screenshot but for text.
It's possible to use the Gemini "ask me about this screen" to OCR the selected area of the screenshot. I guess that might be more efficient in some contexts than trying to use the native text select.
This is such an indictment of modern technology. No offense is meant to you for doing what works for you, but it is buck wild that this is the "fix" they've come up with.
As somebody learning about this for the first time it sounds equivalent to a world where screenshotting became really hard so people started taking photos of their screen so they could screenshot the photo.
How could such a fundamental aspect of using a computer become so ridiculous? It's like satire.
Unfortunately, some apps don't support text selection and on some websites the text selection is unpredictable.
I'd actually compare screen OCR to screenshots. Instead of every app and every website implementing their own screenshot functionality, the system provides one for you.
Same goes for text selection. Instead of every context having to agree on tagging the text and directions, your phone has a quick way of letting you scan the screen for text.
To be fair, I still use the "hold the text to select it" approach when I want to continue with the "select all" action and have some confidence that is going to do what I want.
> some apps don't support text selection and on some websites the text selection is unpredictable.
That correctly identifies the problem. Now why is that, and how can we fix it?
It seems fixable; native GUI apps have COM bindings that can fairly reliably produce the text present in certain controls in the vast majority of cases. Web apps (and "desktop" apps that are actually web apps) have accessibility attributes and at least nominally the notion of separating document data from presentation. Now why do so few applications support text extraction via those channels? If the answer is "it's hard/easier not to", how can we make the right way easier than the wrong way?
Doesn't have to be - BlackBerry BB10 had damn near solved it. I think they had some patents on it, but these should have expired, and I noticed some corresponding changes in Android. But it's still far from being as good as BB10. What BB10 had was a kind of combined cursor and magnifying glass that controlled really well, plus the ability to tap the thing left or right to move one letter at a time.
It looks like the thing that I remembered appears at 2:06 and later. I also tried to find a video example when I wrote my post and didn't find anything. Seems like very few people get excited about text selection.
Universal search on Google Pixels has solved a lot of the text selection problems on Android for me, with the exception being selecting text which requires scrolling.
I've had the same thought about 'written' text with an LLM. If you didn't spend time writing it, don't expect me to read it. I'm glad he seems to be taking a hard stance on that, saying they won't use LLMs to write non-code artifacts. This principle extends to writing code as well, to some degree. You shouldn't expect other people to peer-review 'your' code which was simply generated, because, again, you spent no time making it. You have to be the first reviewer. Whether these cultural norms hold firm remains to be seen (I don't work there), but I think they represent a thoughtful application of emerging technologies.
With `jjui`, this strategy takes only a few keystrokes for operations like adding/removing parents from merge commits.
It's so nice to have like 4 parallel PRs in flight and then rebase all of them, plus all the other experimental branches you have on top, onto main in one command.
Also, I cannot stress enough what a game changer first-class conflicts are. Like seriously, you do NOT understand how much better it is to not have to resolve conflicts immediately when rebasing, and to be able to come back and resolve them whenever you want. It cannot be overstated how much better this is than git.
Also, anonymous branches are SOOOO much better than git stashes.
You can specify a commit, yes, but how do you remember your set of unnamed commits? Once HEAD no longer points to a commit, it will not show up in `git log`.
I agree that Git could gain an operation log. I haven't thought much about it but it feels like it could be done in a backwards-compatible way. It sounds like a ton of work, though, especially if it's going to be a transition from having the current ref storage be the source of truth to making the operation log the source of truth.
The last one is always available via `git checkout -`, and if you want to do more you can do `git checkout @{4}`, etc. It will also show up in `git log --reflog`. I honestly don't see the problem with naming things. Typing a chosen name is just so much more convenient than looking up the commit hash, even when you only need to type the unique prefix. When I don't want to think of a name yet, I just do "git tag a, b, c, ..."
I also tend to have the builtin GUI log equivalent (gitk) open. It has the behaviour that no commit vanishes on refresh, even when it isn't on a branch anymore; to stop showing a commit you need to do a hard reload. It also automatically puts the currently selected commit into the clipboard selection, so all you need to do is press Insert in the terminal.
> It sounds like a ton of work, though, especially if it's going to be a transition from having the current ref storage be the source of truth to making the operation log the source of truth.
I don't think it needs to be implemented like that. The only thing you need to do is record the list of commands and write a resolver that outputs the inverse of any given command.
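A rough sketch of what that could look like, with hypothetical hashes and ref names: each logged entry remembers the old and new values of the refs it touched, and the inverse is just the command that puts them back.

    # op 12: commit on topic        (refs/heads/topic: 91be7d0 -> 4f2a9c1)
    #   inverse: git update-ref refs/heads/topic 91be7d0 4f2a9c1
    # op 13: git tag v1.2 4f2a9c1   (created refs/tags/v1.2)
    #   inverse: git update-ref -d refs/tags/v1.2 4f2a9c1
    # "undo" replays the inverses from the newest entry backwards

The per-ref half of this already exists in the reflog; what's missing is grouping all the ref updates made by one command into a single entry.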
Yeah but in jj every time you run ‘jj log’ you see all your anonymous branches and you can rebase all of them at once onto main in 1 command.
When I’m exploring a problem I end up with a complex tree of many anonymous branches as I try different solutions, and they all show up in my jj log, where it’s easy to refer to them by stable change IDs. Often I’ll like part of a solution but not another part, so I split the change into two commits, branch off the part I like, and try something else for the other part. This way of working is nowhere near as frictionless with git. A lot of times I don’t even bother with editor undo unless it’s just a small amount of undoing, because I have this workflow.
Git is to jj as asm is to C: you can do everything with git that you can do with jj, but it’s all a lot easier in jj.
I guess I never had complex trees from such an action, just a bunch of parallel branches, but I would say splitting and picking commits from different branches is not exactly hard with git either. You can also see them in git, but they won't have change IDs, of course.
I know how to do everything in git that I can do in jj, but the thing is I would never bother doing most of these workflows with git because it’s way more of a pain in the ass than with jj. I work with version control in a totally different way now because of how easy jj makes it to edit the graph.
Within a day of switching I was fully up to speed with jj and I never see myself going back. I use colocated repos so I can still use git tools in my editor for blaming and viewing file history.
Sure, even rebasing a complex tree in git can be done by creating an octopus merge of all the leaf nodes and rebasing with preserved merges, but that's such a pain.
Mostly, yes. It also covers changes to the working copy (because jj automatically snapshots changes in the working copy). It's also much easier to use, especially when many refs were updated together. But, to be fair, it's kind of hard to update many refs at once with Git in the first place (there's `git rebase --update-refs` but not much else?), so undoing multiple ref-updates is not as relevant there.
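For reference, a minimal sketch of that flag, assuming a made-up stack of branches part1 -> part2 -> part3:

    # rebasing the tip with --update-refs also moves part1 and part2
    # to point at their rebased commits instead of leaving them behind
    git rebase --update-refs main part3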
It's not a trick or workaround. It's a very straightforward use of a command flag on one of the most used git commands. It's even conceptually very simple: you're just rebasing a subset of a branch's commits onto a different parent. Anyone who has ever rebased already has a working mental model of this operation. Framing knowledge of a single flag, on a command every git user runs every day, for an operation they already broadly understand, as some arcane 'nonsense' is ridiculous.
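For anyone unfamiliar, the flag in question is presumably `--onto`; a minimal example with hypothetical branch names:

    # replay only the commits in old-base..feature onto main,
    # leaving everything up to and including old-base where it is
    git rebase --onto main old-base feature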
You really have to compare articles like this, which are just a bunch of hand waving around the author's biases, against articles with more concrete statements like the Android team finding a 1000x reduction in memory vulnerabilities compared to C/C++. Thanks for your opinion but I'm going to weigh the people who actually use the language more than you.
For my reality of developing smaller projects than Android, I will absolutely weigh the Android team's opinion less than the opinions of smaller projects.
Trying to mimic FAANG engineering practices is a fool's endeavour that ranges from naivety at best to CV padding at worst.