Hacker News | bangaladore's comments

> Microsofts esteemed moat (office) is “Web only” on the lowest tier.

If you've ever used it, you'd quickly conclude that web-only Office is only useful for someone writing essays for school.

The moment you need to do anything more complex than that, the document renders completely differently on web vs. app, not to mention that there are tons of critical features that aren't even available in the web version.


Sorry, I think I didn’t make myself clear enough.

I meant that Microsoft is intentionally removing their own moat.

That the tools are awful is just standard Microsoft fare (with some notable exceptions, which ironically include Excel).


I don’t do anything too “serious” as far as writing documents that Google Docs can’t handle - we use that at work instead of Office - is Word that much better than GSuite for most cases or is Office Web worse than GSuite?

Some people have some bizarre obsession with having absolute and total control over the placement of every single last character in their document while simultaneously not caring about the fact that this placement is sometimes not reproducible and randomly becomes diseased.

My most memorable MS Word experiences are all the times I accidentally put my document into a weird state and didn't notice something was wrong until I'd spent three more hours on it, at which point I was forced to re-create the document by copy-pasting text into an earlier copy.

And the only reason I knew something was subtly wrong was because the weird VB extension I was required to use would stop working correctly. Basically this would happen when some random key element of the document had ended up with a very subtly different style. If I didn't have to worry about the VB extension breaking, I'd just have a document with some weird bug somewhere.

If I wanted a professional looking document, I would use some modern LaTeX variant maybe with Pandoc to generate most of it from something more restricted like Markdown.

If I wanted total control over the content of a page, I would use some kind of graphical publishing software with text and vector graphics.

I have zero idea what kind of Stockholm syndrome you must have to think that Microsoft Office (or any other similar WYSIWYG editor for that matter) is power user software.

It has lots of features, that's for sure. But the features form a Jenga tower. That makes it a toy.


> is Word that much better than GSuite for most cases or is Office Web worse than GSuite?

Excel is really The Thing. So many businesses and departments rely on it.


I believe that. When I was in graduate school (MBA, 1999-2001), before I dropped out, I learned firsthand the beast that was Excel.

And you’d be making the same mistake as all those people that claim Windows is too awful to use for real work. Web Office is limited, but it is more than enough for the majority of business users.

The bigger problem here is that the Rust utilities seem to have been rushed out without extensive testing or security analysis, simply because they are written in Rust. And this isn't the first serious flaw to come of that.

Doesn't surprise me coming from Canonical though.

At least that's the vibe I'm getting from [1] and definitely [2]

[1] https://cdn2.qualys.com/advisory/2026/03/17/snap-confine-sys... [2] https://bugs.launchpad.net/ubuntu/+source/rust-coreutils/+bu...


The best discussion I can find for the official reasons for switching is https://discourse.ubuntu.com/t/carefully-but-purposefully-ox... -

> But… why?

> Performance is a frequently cited rationale for “Rewrite it in Rust” projects. While performance is high on my list of priorities, it’s not the primary driver behind this change. These utilities are at the heart of the distribution - and it’s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.

> The Rust language, its type system and its borrow checker (and its community!) work together to encourage developers to write safe, sound, resilient software. With added safety comes an increase in security guarantees, and with an increase in security comes an increase in overall resilience of the system - and where better to start than with the foundational tools that build the distribution?

So yes, it sounds like the primary official reason is "enhanced resilience and safety". Given that, I would be interested in seeing the number of security problems in each implementation over time. GNU coreutils does have problems from time to time, but... https://app.opencve.io/cve/?product=coreutils&vendor=gnu only seems to list 10 CVEs since 2005. Unfortunately I can't find an equivalent for uutils, but just from news coverage I'm pretty sure they have a worse track record thus far.


> But… why?

> Performance is a frequently cited rationale for “Rewrite it in Rust” projects.

Rewrite from what? Python/Perl? If the original code is in C there _might_ be a performance gain (particularly if it was poorly written to begin with), but I wouldn't expect wonders.


probably because many of those tools were around for 20ish years before 2005

Could be. The thing is, it kinda doesn't matter; what matters is, what will result in the least bugs/vulnerabilities now? To which I argue the answer is, keeping GNU coreutils. I don't care that they have a head start, I care that they're ahead.

That's short-sighted. The least number of bugs now isn't the only thing that matters. What about 5 years from now? 10 years? That matters too.

To me it seems inarguable that eventually uutils will have fewer bugs than coreutils, and also making uutils the default will clearly accelerate that. So I don't think it's so easy to dismiss.

I think they were probably still a little premature, but not by much. I'd probably have waited one more release.


>>> I don't care that they have a head start, I care that they're ahead.

Nice


fileutils-1.0 was released in 1990 [1]. shellutils-1.0 was released in 1991 [2], and textutils-1.0 was released a month later in the same year [3].

Those three packages were combined into coreutils-5.0 in 2003 [4].

[1] https://groups.google.com/g/gnu.utils.bug/c/CviP42X_hCY/m/Ys... [2] https://groups.google.com/g/gnu.utils.bug/c/xpTRtuFpNQc/m/mR... [3] https://groups.google.com/g/gnu.utils.bug/c/iN5KuoJYRhU/m/V_... [4] https://lists.gnu.org/archive/html/info-gnu/2003-04/msg00000...


It's extremely early to say if things are rushed or not. It's unsurprising that newer software has an influx of vulnerabilities initially, it'll be a matter of retrospectively evaluating this after that time period has passed.

> influx of vulnerabilities initially

https://en.wikipedia.org/wiki/Bathtub_curve

It's a little different with software since you don't usually have the code or silicon wearing out, but aging software does start to have a mismatch with the way people are trying to use it and the things it has to interact with, which leads to a similar rise of "failure" in the end.


There is a license blurb in the readme.

> This code, apart from the source in core/third-party, is licensed under the MIT License, see LICENSE in this repository.

> The English-language models are also released under the MIT License. Models for other languages are released under the Moonshine Community License, which is a non-commercial license.

> The code in core/third-party is licensed according to the terms of the open source projects it originates from, with details in a LICENSE file in each subfolder.


The LICENSE file it refers to is missing. There's one in the python folder, but not for the rest of the code.


IANAL.

Presuming the git author information supports this (I haven't checked myself), it should be fine to treat the code it specifies as licensed under MIT. The license name is, to my understanding, unambiguous; license application rests on contract law, and contract law has at its very core the principle of a "meeting of the minds". On top of that, willful infringement would be really, really hard to argue when the only thing separating this from being 100% properly licensed is that nobody copied in an MIT LICENSE template with the date and author name pasted in.


What's the point of posting what is clearly an AI-generated comment?


AFAIK Anthropic is not giving model weights to pretty much any provider, so any inference of Opus is certainly not private: it goes through Anthropic, Bedrock, or Vertex.

Of the three Bedrock is probably the best for trust, but still not private by any means.


Another common tell nowadays is the apostrophe type (’ vs ').

I personally don't even know how to type ’ on my keyboard. According to find-in-page in Chrome, they are both treated as the same character, which is interesting.

I suspect some word processors default to one or the other, but it's becoming all too common in places like Reddit and emails.


If you work with macOS or iOS users, you won’t be super surprised to see lots of “curly quotes”. They’re part of base macOS, no extra software required (I cannot remember if they need to be switched on or they’re on by default), and of course mass-market software like Word will create “smart” quotes on Mac and Windows.

I ended up implementing smart quotes on an internal blogging platform because I couldn’t bear "straight quotes". It’s just a few lines of code and makes my inner typography nerd twitch less.
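For the curious, the core substitution really is tiny. Here's a rough Python sketch (not the actual implementation from that platform, and the `smart_quotes` name is made up; a production version needs more care around nesting, abbreviations, and primes):

```python
import re

def smart_quotes(text: str) -> str:
    """Replace straight quotes with typographic ones (a naive sketch)."""
    # Treat a quote as "opening" if it follows start-of-string,
    # whitespace, or an opening bracket.
    text = re.sub(r'(^|[\s(\[{])"', '\\1“', text)
    text = re.sub(r"(^|[\s(\[{])'", '\\1‘', text)
    # Everything left over is a closing quote or an apostrophe.
    text = text.replace('"', '”').replace("'", '’')
    return text

print(smart_quotes('He said "it\'s fine".'))  # He said “it’s fine”.
```

The ordering matters: opening quotes are matched contextually first, and the unconditional replacement at the end cleanly handles apostrophes inside words.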


Word (you know, the most popular word processor out there) will do that substitution. And on macOS & iOS, it's baked into the standard text input widgets so it'll do that basically everywhere that is a rich text editor.


> According to find in chrome, they are both considered the same character, which is interesting.

Browsers do a form of normalization in search. It's really useful, since it means "resume" will match résumé, unless of course you disable it (in Firefox, this is the "Match Diacritics" checkbox). (Also: itʼs, it's; if you want to see it in action on those two words.)
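Conceptually, that folding amounts to decomposing the text and dropping combining marks. A rough Python sketch (browsers actually use more elaborate Unicode collation rules; the `fold` helper here is made up for illustration):

```python
import unicodedata

def fold(text: str) -> str:
    """Build a diacritic- and apostrophe-insensitive search key (rough sketch)."""
    # NFD splits 'é' into 'e' + a combining acute accent, which we drop.
    decomposed = unicodedata.normalize('NFD', text)
    stripped = ''.join(c for c in decomposed if not unicodedata.combining(c))
    # Map the curly (U+2019) and modifier (U+02BC) apostrophes to ASCII.
    return stripped.replace('\u2019', "'").replace('\u02bc', "'").lower()

assert fold('résumé') == fold('resume')
assert fold('it\u2019s') == fold("it's")
```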


> That's the beauty of constraint-based parametric modeling as opposed to, say, modeling in Blender.

I was thinking the same thing. This looks more like an API that makes 3d modeling look closer to CAD, but without realizing that CAD is about constraints, parametrizing, and far more.


> but without realizing that CAD is about constraints, parametrizing, and far more

Constraints and parametrizing are the trivial parts of CAD, something you can now implement in a weekend with Claude Code, the MINPACK/SolveSpace test suite, and OpenCascade as an oracle. The hard part is a geometric kernel that can express boundary representations for complex shapes (sketches, chamfers, fillets, etc.) and boolean operations while somewhat handling the topological naming problem without driving the user insane (which existing kernels are still all shit at).


> Constraints and parametrizing are the trivial parts of CAD, something you can now implement in a weekend with Claude Code

You go ahead and try that.


;)

Keywords: Jacobian, Newton-Raphson, Levenberg-Marquardt, Powell dog leg, Schur complements, sparse QR/Cholesky, and so on. The LLM can figure the rest out. Try it yourself!

I recommend Rust because the methods are old and most of the algorithms are already implemented by crates, you just have to wire them together. Like I said the hard part is the b-rep: you’re not going to find anything equivalent to Parasolid or ACIS in the literature or open source.
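To show the skeleton of the core loop (in Python rather than Rust, purely for brevity): a toy two-constraint sketch, a point on the unit circle and on the line x = y, solved with plain dense Newton-Raphson. A real solver would use the sparse methods listed above, and `solve_constraints` is a name made up for the example:

```python
import math

def solve_constraints(x, y, tol=1e-12, iters=50):
    """Newton-Raphson on a toy 2-constraint system:
       f1: point lies on the unit circle, f2: point lies on the line x = y."""
    for _ in range(iters):
        f1 = x * x + y * y - 1.0
        f2 = x - y
        # Jacobian of (f1, f2) with respect to (x, y).
        j11, j12 = 2 * x, 2 * y
        j21, j22 = 1.0, -1.0
        det = j11 * j22 - j12 * j21
        # Solve J * d = -f for the Newton step d by Cramer's rule.
        dx = (-f1 * j22 + f2 * j12) / det
        dy = (-f2 * j11 + f1 * j21) / det
        x, y = x + dx, y + dy
        if abs(f1) < tol and abs(f2) < tol:
            break
    return x, y

x, y = solve_constraints(1.0, 0.5)      # seed near the intended solution
print(x, y, 1 / math.sqrt(2))           # both converge to 1/sqrt(2)
```

Everything beyond this, sparsity, damping, decomposition of the system, is exactly what those keywords are for.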


I can't help but find this comment a little insulting. It's very similar to saying "if, while, else, malloc. The LLM can figure the rest out!" as if CS were a solved thing and the whole challenge weren't assembling those elementary bricks together in computationally efficient and robust ways.

Also, more to the point, I doubt you'll have much success with local optimization on general surfaces if you don't have some kind of tessellation or other spatial structure to globalize things a bit, because you can very easily get stuck in local optima even while doing something as trivial as projecting a point onto a surface. Think of anything that "folds", like a U-shape: a point can be very close to one of the branches, but Newton might still find it on the other side if you seeded the optimizer closer to there. It doesn't matter whether you use vanilla Newton or Newton with tricks up to the gills. And anything to do with matrices will only help with local work as well because, well, these are non-linear problems.

"Just work in parameter space" is hardly a solution either, considering many mappings encountered in BREPs are outright degenerate in places or stretch the limits of floating-point stability. And the same issue with local minima will arise, even though the domain is now convex.

So I might even reduce your list to: Taylor expansion, linear solver. You probably don't need much more than that, the difficulty is everything else you're not thinking of.

And remember, this has to be fast, perfectly robust, and commit error under specified tolerance (ideally, something most CAD shops don't even promise).


Yeah but have you tried it? You can throw as many keywords as you want into Claude but it does get things wrong in sometimes subtle ways. I’ve tried it, I know.


Look, I'm not trying to decimate you here but your list of keywords is wrong and I know it because I explored that list last month for a completely different application.

The Jacobian is the first order derivative for a function that accepts a vector as an input and produces a vector as an output, hence it must be a matrix.

Newton-Raphson is an algorithm for finding the roots(=zeroes) of a function. Since the derivative of the minimum of a function is zero, it can be used for solving convex optimization problems.

Levenberg-Marquardt is another way to solve optimization problems.

The Powell dog leg method is new to me, but it is just an extension of Gauss-Newton, which you could think of as a special case of Newton-Raphson where the objective function is quadratic (useful for objectives with vector norms, aka distances between positions).

Most of the algorithms require solving a linear system for finding the zero of the derivative. The Schur complement is a way to factor the linear system into a bunch of smaller linear systems and sparse QR/Cholesky are an implementation detail of solving linear systems.

Now that we got the buzzwords out of the way I will tell you the problem with your buzzwords. Constraint solving algorithms are SAT or SMT based and generally not optimization based.

Consider the humble circle constraint: a^2 + b^2 = c^2. If you have two circles with differing centers and radii, they may intersect and if they do, they will intersect at two points and this is readily apparent in the equations since c = sqrt(a^2 + b^2) has two solutions. This means you will need some sort of branching inside your algorithm and the optimization algorithms you listed are terrible at this.
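To make the branching concrete: plain Newton on two overlapping unit circles converges happily, but which of the two intersection points you get depends entirely on the seed. A toy Python sketch (the function name is made up for the example):

```python
import math

def newton_circle_intersection(x, y, iters=100):
    """Find an intersection of the unit circles centred at (0,0) and (1,0).
       Which of the two solutions Newton converges to depends on the seed."""
    for _ in range(iters):
        f1 = x * x + y * y - 1.0
        f2 = (x - 1.0) ** 2 + y * y - 1.0
        j11, j12 = 2 * x, 2 * y
        j21, j22 = 2 * (x - 1.0), 2 * y
        det = j11 * j22 - j12 * j21    # equals 4*y here
        if abs(det) < 1e-14:           # Jacobian is singular when y == 0
            break
        # Newton step: solve J * d = -f by Cramer's rule.
        dx = (-f1 * j22 + f2 * j12) / det
        dy = (-f2 * j11 + f1 * j21) / det
        x, y = x + dx, y + dy
    return x, y

print(newton_circle_intersection(0.4, 0.9), math.sqrt(3) / 2)    # upper point
print(newton_circle_intersection(0.4, -0.9))                     # lower point
```

The solver never "decides" between the two roots; it just falls into one basin of attraction, which is the branching problem being described.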


Optimization can work well for interactive CAD usage, because geometric sketches tend to be close to the intended solution, and it's fast. It also has the nice property of stability (i.e., with optimization, small changes in parameters tend not to cause large perturbations in the solution). Doing more gets into what you mentioned, and is called chirality or root identification in the literature. That's much more intense computationally, but could be useful especially if cheaper approaches like optimization failed.


Sure man. Solidworks will be out of business any day now.


We've started a 2D geometric constraint solver at https://github.com/endoli/fiksi doing the constraint part of this in Rust. We're using it internally and so far it works well, though it's still experimental. More constraints and especially better behavior around failure are needed. The latter will likely entail at least doing more with degree of freedom counting, though there's some of that already.

A C++-library to be aware of is SolveSpace's slvs: https://github.com/solvespace/solvespace/tree/e74c2eae54fdd9....


This is something I don't get about the code-based CAD tools. They don't let you specify declarative geometric constraints.

Constraints are useful beyond just designing parts. If you have a parallel mechanism there are only two ways to solve the kinematics/dynamics for it: Constraint solving for rigid contacts or iterative solving by approximating the model with non-rigid contacts via internal springs.


Could you mock up some code to describe which you feel would be suitable to describing such a thing?


CSP is inherently a client-side browser security feature, so yes.


Very much so. It feels like it can't have been that common in the original training corpus. Probably more common now given that we are training slop generators with slop.


More concerned (for the author) of someone trying to host/show illegal material. AI guardrails can only be so effective.


Or even worse for the author if his Claude subscription gets cancelled.


True. I suspect they will ban you depending on refusal frequency and severity.


they just need to turn on the CSAM filter in cf/whatever they use and they're probably good


That's certainly one of the things to be concerned with. Not certain how that's implemented, but I can still see there being holes in that strategy.

