As someone who first began using Swift in 2021, after almost 10 years in C#/.NET land, I was already a bit grumpy at how complex C# was (it was 21 years old at that point), but then coming to Swift, I couldn't believe how much more complex Swift was - Swift was released in 2014, so it would've been about 8 years old in 2022. How is a language less than half the age of C# MORE complex than C#?
And this was me trying to use Swift for a data access layer + backend web API. There's barely any guidance or existing knowledge on using Swift for backend APIs, let alone a web browser of all projects.
There's no precedent or existing implementation you can look at for reference; known best practices in Swift are geared almost entirely towards using it with Apple platform APIs, so tons of knowledge about using the language itself simply cannot be applied outside the domain of building client-running apps for Apple hardware.
To use Swift outside its usual domain is to become a pioneer and try something truly untested. It was always a longshot.
I started using it around 2018. After being reasonably conversant in Objective-C, I fully adopted Swift for a new iOS app and thought it was a big improvement.
But there's a lot of hokey, amateurish stuff in there... with more added all the time. Let's start with the arbitrary "structs are passed by value, classes by reference." And along with that: "Prefer structs over classes."
But then: "Have one source of truth." Um... you can't do that when every data structure is COPIED on every function call. So now what? I spent so much time dicking around trying to conform to Swift's contradictory "best practices" that developing became a joyless trudge with glacial progress. I finally realized that a lot of the sources I was reading didn't know WTF they were talking about and shitcanned their edicts.
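To make the contradiction concrete, here's a toy sketch (the type and names are made up for illustration):

    struct Settings { var volume = 5 }

    func muted(_ s: Settings) -> Settings {
        var copy = s       // the parameter is already an independent copy
        copy.volume = 0
        return copy
    }

    var original = Settings()
    let quiet = muted(original)
    print(original.volume, quiet.volume)  // "5 0" -- two diverging "truths"

Every call hands you a fresh copy, so the "one source of truth" is whichever copy you happen to be holding.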
A lot of the crap in Swift and SwiftUI reminds me of object orientation, and how experienced programmers arrived at a distilled version of it that kept the useful parts and rejected the dumb or utterly impractical ideas that were preached in the early days.
I think Swift was developed to keep a number of constituencies happy.
You can do classic OOP, FP, Protocol-Oriented Programming, etc., or mix them all (like I do).
A lot of purists get salty that it doesn’t force implementation of their choice, but I’m actually fine with it. I tend to have a “chimeric” approach, so it suits me.
Been using it since 2014 (the day it was announced). I enjoy it.
No, Swift was developed as a strategic moat around Apple's devices. They cannot be dependent on any other party for the main language that runs on their hardware. Controlling your own destiny full stack means having your own language.
Apple already had that "strategic moat" with Objective-C. It was already a language you could effectively only use on Apple platforms (the runtime and the standard library only run on Darwin) and for which Apple controlled the compiler (they have their own fork of Clang).
I suspect that it was developed in order to make native development more accessible. SwiftUI is also doing that.
They want native, partly as a “moat,” but also as a driver for hardware and services sales. They don’t want folks shrugging and saying “It doesn’t matter what you buy; they’re all the same.”
I hear exactly that, with regard to many hybrid apps.
There are plenty of valid reasons to use classes in Swift. For example, if you want to have shared state, you will need to use a class so that each client has the same reference instead of a copy.
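A minimal sketch of what that looks like (type name made up):

    final class Counter {
        var value = 0
    }

    let shared = Counter()
    let a = shared   // a and b refer to the same instance
    let b = shared
    a.value += 1
    print(b.value)   // "1" -- every client sees the same state

If Counter were a struct, each of those assignments would hand the client its own copy instead.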
> But there's a lot of hokey, amateurish stuff in there... with more added all the time. Let's start with the arbitrary "structs are passed by value, classes by reference." And along with that: "Prefer structs over classes."
This is the same way that C# works, and C and C++ pass structs by value too. Why is this a surprise?
Nowhere does it say structs provide “one source of truth”. It says the opposite, actually: that classes are to be used when unique instances are required. Every class instance has a unique ID, which is simply its virtual memory address. Structs, by contrast, get memcpy’d left and right and have no uniqueness.
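You can see the difference directly; a toy example (types made up):

    final class Ref {}
    struct Val { var x = 0 }

    let r1 = Ref()
    let r2 = r1
    print(r1 === r2)             // "true" -- same identity, same address
    print(ObjectIdentifier(r1))  // a stable ID derived from that address

    let v1 = Val()
    var v2 = v1                  // bitwise copy; no identity to compare
    v2.x = 9
    print(v1.x, v2.x)            // "0 9"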
You can also look at the source code for the language if any of it is confusing. It’s very readable.
Have one source of truth is a universal guideline.
Prefer structs over classes is a universal, if weak, guideline.
It's funny how people can be all hung up on composability of things like type systems, and then completely blow off the desire for composability of guidelines.
In recent years, simplistic languages such as Python and Go have “made the case” that complexity is bad, period. But when humans communicate expertly in English (Shakespeare, JK Rowling, etc.), they use its vast wealth of nuance, shading and subtlety to create a better product. Sure, you have to learn all the corners to have full command of the language, to wield all that expressive power (and newcomers to English are limited to the shallow end of the pool). But writing and reading are asymmetrical, and a more expressive language used well can expose the code patterns and algorithms in a way that is easier for multiple maintainers to read and comprehend. We need to match the impedance of the tool to the problem. [I paraphrase Larry Wall, inventor of the gloriously expressive https://raku.org]
Not sure how I feel about Shakespeare and JK Rowling living in the same parenthesis!
Computer languages are the opposite of natural languages - they are for formalising and limiting thought, the exact opposite of literature. These two things are not comparable.
If natural language was so good for programs, we’d be using it - many many people have tried from literate programming onward.
I fully accept that formalism is an important factor in programming language design. But all HLLs (well, even ASM) are a compromise between machine speak (https://youtu.be/CTjolEUj00g?si=79zMVRl0oMQo4Tby) and human speak. My case is that the current fashion is to draw the line at an overly simple level, and that there are ways to wrap the formalism in more natural constructs that trigger the parts of the brain that have evolved to handle language (nouns, verbs, adverbs, prepositions and so on).
Here's a very simple, lexical declaration made more human friendly by use of the possessive `my` (or `our` if it is package scoped), e.g. `my $answer = 42;` (compare Swift's `let answer = 42`).
Well, when you add in the '$' and ';' tokens the "let" example is still shorter. Also as another person replied to you, those other two examples are declarations in other languages. So 0 for 3 there.
Literate programming is not about programming in natural languages: it's about integrating code (i.e. the formal description in some DSL) with the meta-code such as comments, background information, specs, tests, etc.
BTW, one side benefit of LP is freedom from the arbitrary structure of DSLs. A standard practice in LP is to declare and define objects in the spot where they are being used; LP tools will parse them out and distribute them to the syntactically correct places.
Well, I think the ambition was to have as much as possible in natural language, with macros calling out to ‘hidden’ code intended for machines. So I do think there is a good link with later attempts to write using natural language and make computer languages more human-friendly, and Knuth was one of the first to have this idea.
Exactly. I mean, think about the programming languages used in aircraft and such. There are reasons. It all depends on what people are willing to tolerate.
>But writing and reading are asymmetrical and a more expressive language used well can expose the code patterns and algorithms in a way that is easier for multiple maintainers to read and comprehend.
It's exactly the opposite. Writing and reading are asymmetrical, and that's why it's important to write code that is as simple as possible.
It's easy to introduce a lot of complexity and clever hacks, because as the author you understand it. But good code is readable for people, and that's why very expressive languages like perl are abhorred.
> Writing and reading are asymmetrical, and that's why it's important to write code that is as simple as possible.
I 100% agree with your statement. My case is that a simple language does not necessarily result in simpler and more readable code. You need a language that fits the problem domain and that does not require a lot of boilerplate to handle more complex structures. If you are shoehorning a problem into an overly simplistic language, then you are fighting your tool. OO for OO, FP for FP, and so on.
I fear that the current fashion for very simple languages is a result of confusing these aspects, and of a desire to enforce certain corporate behaviours on coders. Perhaps that has its place, e.g. Go at Google - but the presumption that one size fits all is quite a big limitation for many areas.
The corollary of this is that richness places a burden of responsibility on the coder not to write code golf. But tbh you can write bad code in any language if you put your mind to it.
Perhaps many find richness and expressivity abhorrent - but to those of us who like Larry's thinking it is a really nice, addictive feeling when the compiler gets out of the way. Don't knock it until you give it a fair try!
Then you should write assembly only. Like `MOV`, `ADD`... can't really get simpler than that.
Problem is, that makes every small part of the program simple, but it increases the number of parts (and/or their interaction). And ultimately, if you need to understand the whole thing it's suddenly much harder.
Surely you can write the same behaviour in a "clever" (when did that become a negative attribute?) or a "good" way in assembly. You are correct. But that's a different matter.
> Get into a rut early: Do the same process the same way. Accumulate idioms. Standardize. The only difference(!) between Shakespeare and you was the size of his idiom list - not the size of his vocabulary.
Complexity-wise, this version is more complicated (mixing different styles and paradigms) and it's barely fewer tokens. Lines of code don't matter anyway; cognitive load does.
Even though I barely know Raku (but I do have experience with FP), it took way less time to intuitively grasp what the Raku version was doing vs. both of the Python versions. If you're only used to imperative code, then yeah, maybe the Python looks more familiar, but then... how about riding some new bicycles for the mind.
> Complexity-wise, this version is more complicated (mixing different styles and paradigms)
Really? In the other Python version the author went out of his way to keep two variables, and shit out intermediate results as it went. The raku version generates a sequence that doesn't even actually get output if you're executing inside a program, but that can be used later as a sequence, if you bind it to something.
I kept my version to the same behavior as that Python version, but that's different than the raku version, and not in a good way.
You should actually ignore the print in the python, since the raku wasn't doing it anyway. So how is "create a sequence, then while it is not as long as you like, append the sum of the last two elements" a terrible mix of styles and paradigms, anyway? Where do you get off writing that?
> Lines of code don't matter anyway, cognitive load does.
I agree, and the raku line of code imposes a fairly large cognitive load.
If you prefer "for" to "while" for whatever reason, here's a similar Python to the raku.
# seed with the first two Fibonacci numbers
seq = [0,1]
# extend() consumes the generator one item at a time, so each
# sum(seq[-2:]) sees the list as extended so far
seq.extend(sum(seq[-2:]) for _ in range(18))
The differences are that it's a named sequence, and it doesn't go on forever and then take a slice. No asterisks that don't mean multiply, no carets that don't mean bitwise exclusive or.
> If you're only used to imperative code, then yeah, maybe the Python looks more familiar, though then... how about riding some new bicycles for the mind.
It's not (in my case, anyway) actually about imperative vs functional. It's about twisty stupid special symbol meanings.
Raku is perl 6 and it shows. Some people like it and that's fine. Some people don't and that's fine, too. What's not fine is to make up bogus comparisons and bogus implications about the people who don't like it.
Reminds me a bit of the fish anecdote told by DFW... they've only ever swum in water their entire lives, so they don't even understand what water is.
Here are the mixed paradigms/styles in these Python snippets:
- Statements vs. expressions
- Eager list comprehensions vs. lazy generator expressions
- Mutable vs. immutable data structures / imperative reference vs. functional semantics
(note that the Raku version only picks _one_ side of those)
> seq.extend(sum(seq[-2:]) for _ in range(18))
I mean, this is the worst Python code yet. To explain what this does to a beginner, or even an intermediate programmer... oooooh boy.
You have the hidden inner iteration loop inside the `.extend` standard library method driving the lazy generator expression with _unspecified_ one-step-at-a-time semantics, which causes `seq[-2:]` to be evaluated at exactly the right time, and then `seq` is extended even _before_ the `.extend` finishes (which is very surprising!), causing the next generator iteration to read a _partially_ updated `seq`...
This is almost all the footguns of standard imperative programming condensed into a single expression. Like ~half of the "programming"-type bugs I see in code reviews are related to tricky temporal (execution order) logic, combined with mutability, that depend on unclearly specified semantics.
> It's about twisty stupid special symbol meanings.
Some people program in APL/J/K/Q just fine, and they prefer their symbols. Calling it "stupid" is showing your prejudice. (I don't and can't write APL but still respect it)
> What's not fine is to make up bogus comparisons and bogus implications about the people who don't like it.
That's quite an irrational take. I didn't make any bogus comparisons. I justified, or can justify, all my points. I did not imply anything about people who don't like Raku. I don't even use Raku myself...
> You have the hidden inner iteration loop inside the `.extend` standard library method driving the lazy generator expression with _unspecified_ one-step-at-a-time semantics
That's why it wasn't the first thing I wrote.
> To explain what this does to a beginner, or even intermediate programmer.... oooooh boy.
As if the raku were better in that respect, lol.
> Some people program in APL/J/K/Q just fine, and they prefer their symbols.
APL originally had a lot of its own symbols with very little reuse, and clear rules. Learning the symbols was one thing, but the usage rules were minimal and simple. I'm not a major fan of too many different symbols, but I really hate reuse in any context where how things will be parsed is unclear. In the raku example, what if the elements were to be multiplied?
> Calling it "stupid" is showing your prejudice. (I don't and can't write APL but still respect it)
> Reminds me a bit of the fish anecdote told by DFW...
Yeah, for some reason, it's not OK for me to insult a language, but it's OK for you to insult a person.
But you apparently missed that the "twisty" part was about the multiple meanings. Because both those symbols are used in Python (the * in multiple contexts even) but the rules on parsing them are very simple.
perl and its successor raku are not about simple parsing. You are right to worry about the semantics of execution, but that starts with the semantics of how the language is parsed.
In any case, sure, if you want to be anal about paradigm purity, take my first example, and (1) ignore the print statement because the raku version wasn't doing that anyway, although the OP's python version was, and (2) change the accumulation.
But that won't get you very far in a shop that cares about pythonicity and coding standards.
And...
You can claim all you want that the original was "pure" but that's literally because it did nothing. Not only did it have no side effects, but, unless it was assigned or had something else done with it, the result was null and void.
I made an analogy which just means that it's hard to understand what the different styles and paradigms are when those are the things you constantly use.
You're apparently taking that as an insult...
> But you apparently missed that the "twisty" part
I didn't miss anything. You just didn't explain it. "twisty" does not mean "ambiguous" or "hard to parse". Can't miss what you don't write.
My instincts about raku were always that perl was too fiddly, so why would I want perl 6, and this isn't doing anything to dissuade me from that position.
same. i thought it would have been as quick to pick up as rust. nowhere near. i spent weeks trying to go through every feature of the language at least once - time in which i could've read several rust books and already started hacking up some interesting projects. so much in swift is pointless syntax sugar. why do i need 50 ways to do exactly the same thing? it's just nonsense. then i have to look up the language reference whenever i read a new codebase
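closure syntax alone gives a taste of it - every one of these lines does exactly the same thing:

    let nums = [1, 2, 3]
    let a = nums.map({ (n: Int) -> Int in return n * 2 })
    let b = nums.map({ n in return n * 2 })
    let c = nums.map({ n in n * 2 })
    let d = nums.map { $0 * 2 }   // trailing closure + shorthand argument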
well for backend development, yes - I technically never stopped, as I had existing projects to maintain. But after trying out Swift a couple of times, I've dropped it entirely for backend. For new backend work it's C#/.NET all the way.
I wanted to try using a native language other than C++, and Swift ostensibly seemed easier to pick up. I continue to use Swift for iOS app development, where it is much easier to use; that has its own share of compromises and trade-offs, but those centre around SwiftUI vs UIKit rather than around Swift itself.
I recall the source code for Windows XP was leaked some years ago; not just isolated parts of the code base, like with the earlier Windows NT4/2000 source code leak, but a completely buildable repository.
If I wrote an article on training an LLM on the leaked Windows XP source code, blithely marked the source code repo as in 'the public domain', but used Azure resources for the how-to steps, would that make it OK, Microsoft? You know, your Azure division might get some money...
Seriously, this is just so... blatant. It's like we've all collectively decided that copyright just doesn't matter anymore. Just reading this article, I feel like I'm taking crazy pills.
> though the "about notepad" dialog shows the windows 11 version for some reason??
For many built-in Windows apps, the 'about this program' menu item just invokes a separate program, 'winver'. If you go Start -> Run and type in winver, it does the same thing.
Are you even responding to the right comment? I read your comment and the parent comment you've responded to and this response doesn't make sense - it reads like a non-sequitur.
The parent comment presents a scenario where the law is ignored because the judge decides for himself it shouldn't apply. I'm pointing out that this kind of approach is fundamentally unjust and wrong.
"And sure you can say the laws should be written better, but so long as the laws are written by humans that will simply not be the case"
> The firm gradually grew more contentious, demanding that the RTX 5060 be handed in because the event it was acquired at was part of a business trip, entirely paid for by the company. The employee would never have won the GPU had the firm not enabled him to attend the venue. Our winner refused, arguing that it belonged to him because he had won it on his own by pure luck.
Hmm... I feel like the company's reasoning here is almost acceptable. Almost, because I know that as a (paid) employee, all of the code I write and any inventions or IP I come up with are the company's property, so it almost makes sense that the company would also want to claim any physical things given or gifted in the course of work-related trips that employees take on company time.
But the article mentions the winner was an intern, not an employee, and I know many interns I've worked with never actually signed an employment agreement, because they don't actually get paid. They sign NDAs but not full-on employment agreements, so how can any company treat them like an employee? If I wasn't getting paid, I'd 100% hold my ground like the intern did and take it.
Doesn't matter. It's a small amount (in the eyes of the company), and is bound to feel unfair to the employee.
It's like your employer asking you to keep the pretzels from your business flights and hand them in to the office snack bar. Only ill will can come from that, and zero profit.
You realize you can redline the default IP assignment clauses, right? It should never have been normalized that an employer gets blanket claim to all mental output on your part. Especially things done in your off hours on equipment the company doesn't own.
It's just another example of how contract law, lawyers, and legal fictions represent a bottom up funnel of value extraction from the populace in which they exist. Can't even just work and get paid without some arsehole driving/hiding behind a legal fiction strip mining you for all the law will let them get away with.
What name calling? Calling the author 'an unserious person' isn't name calling. Might be worth reading the article:
> "If you like Windows 8’s look, you are a bad person. You are the one Steve Jobs was talking about when he said Microsoft had no taste."
yeah, you don't need to read very much of this to know this author hasn't exactly written a substantive article; they certainly aren't bothering to back up their claims with any reasoning. the whole post is just 'this version of windows was ugly, this one wasn't', etc.
That was exactly the same behaviour in Windows 7 though; it wasn't exactly novel. At least Windows 7 searched your apps, and documents all at once. Windows 8 limited you to just apps. Windows 8 was a huge step down in usability.
They can afford to make a big song and dance about this because chances are they are not selling the hardware at a loss and they have the regular steam store to offset the short term costs. If they were selling the hardware at a loss, I think their marketing trying to sell this device would be very different.
they'll probably handle it like they did with the Steam Deck:
- no loss
- but only a small profit margin, to keep the price as low as possible and maximize adoption/reach
for Valve, people using Steam on non-Windows platforms is more important than making a big buck from Steam Machines (it makes them less dependent on Windows; MS has tried (and failed) to move in the direction of killing 3rd-party app stores before, and Windows has gotten... crappy/bloated/ad-infested, which is in the end an existential risk for Valve, because if everyone moves away from PC gaming they will lose out hugely)
Switch was always sold for more than component and manufacturing cost. PS4 crossed the threshold quickly (per Sony iirc?)
However, that ignores R&D costs which presumably have to be amortized, largely through game sales and platform fees. The same is true for other platforms like iOS.