
Not books, but it's really important, in my opinion, to go through the official documentation:

https://learn.microsoft.com/en-us/dotnet/core/whats-new/

https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/

Can also recommend reading:

https://learn.microsoft.com/en-us/dotnet/core/tools/ (CLI, it's good)

https://learn.microsoft.com/en-us/aspnet/core/fundamentals/m...

https://typescript-is-like-csharp.chrlschn.dev/pages/interme...

> are there books specifically about using .NET Core on Linux

There is nothing particular to using ".NET on Linux" - it just works. Standard Linux caveats not specific to .NET apply.


The reason Java rejected data classes in favour of the very different and more powerful approach of nominal tuples (i.e. records), and algebraic data types in general, was the observation that the vast majority of data classes can be record classes. It's by not trying to cover the remaining small minority of cases with the same construct that we gain the guarantees of a more powerful one, based on the property that the full state of a record is described by its canonical constructor, with its deconstruction serving as that constructor's dual.

All records are well-behaved as collection members; not all data classes are.

All records can be safely serialised and deserialised; not all data classes can.

All records can be deconstructed and reconstructed; not all data classes can.

Data classes are about declaring classes with less code. When you have an object of a data class you know nothing more about its behaviour than you do about an object of any class. Record classes, which are similar in their design philosophy to enums, are not about writing less code, but about reifying the concept of unencapsulated data, which offers semantic guarantees -- not just less syntax -- about the nature of the objects, just as enums do.

Java is able to offer new constructs such as records and enums, with runtime support, as it's not a hosted language.
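
To make the guarantee concrete, here's a minimal Java sketch (the Point record is a made-up example): the canonical constructor captures the record's full state, and a record pattern deconstructs it, so deconstruction followed by reconstruction round-trips:

    record Point(int x, int y) { }

    public class RecordDemo {
        public static void main(String[] args) {
            Point p = new Point(3, 4);
            if (p instanceof Point(int x, int y)) {  // deconstruct via a record pattern (Java 21+)
                Point q = new Point(x, y);           // reconstruct through the canonical constructor
                System.out.println(p.equals(q));     // true: equality is defined by the record's state
            }
        }
    }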


* The Art of Probability by Hamming. An opinionated, slightly quirky text on probability. Unlike the text used in my university course, its explanations were clear and rigorous without being pedantic. The exercises were both interesting and enlightening. The only book in this list that taught skills I've actually used in the real world.

* Calculus by Spivak. This was used in my intro calculus course in university. It's very much a bottom-up, first-principles construction of calculus. Very proof-based, so you have to be into that. Tons of exercises, including some that sneakily introduce pretty advanced concepts not explicitly covered in the main text. This book, along with the course, rearranged my brain. Not sure how useful it would be for self-study though.

* Measurement by Lockhart. I haven't read the whole thing, but have enjoyed working through some of the exercises. A good book for really grokking geometric proofs and understanding "mathematical beauty", rather than just cranking through algebraic proofs step by step.

* Naive Set Theory by Halmos. Somewhat spare, but a nice, concise introduction to axiomatic set theory. Brings you from nothing up to the Continuum Hypothesis. I read this somewhere around my first year in university and it was another brain-rearranger.


Go has more line noise than Java combined with the expressiveness of COBOL

According to the NIH, 64% of hospitalizations might have been prevented if not for obesity, hypertension, diabetes, and heart failure. https://www.nih.gov/news-events/nih-research-matters/most-co...

Increasing the size of the array 10x (100_000_000) and filling a glaring omission (a sketch of a comparable Java harness follows the results):

Go (go1.17.1 darwin/amd64)

    took 5.591593544s
    took 5.285948722s
    took 5.218750076s
    took 5.239224787s
    took 5.131232207s
Zig (0.9.0-dev.959+f011f1393)

    took 3.48s
    took 3.37s
    took 3.44s
    took 3.43s
    took 3.49s
Java (openjdk version "17-ea")

    Time: 2684 ms
    Time: 2654 ms
    Time: 2840 ms
    Time: 2676 ms
    Time: 2649 ms
MacBook Pro Early 2013, 2.7 GHz Quad-Core Intel Core i7, 16 GB 1600 MHz DDR3
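
The original benchmark source isn't shown here, so purely as an illustration, a minimal Java timing harness in the same spirit might look like the sketch below; the workload (summing a randomly filled array) is a placeholder assumption, not the actual benchmark, and an array this size needs a larger heap (e.g. -Xmx2g):

    import java.util.concurrent.ThreadLocalRandom;

    public class Bench {
        public static void main(String[] args) {
            int n = 100_000_000; // the 10x array size used above
            long[] data = new long[n];
            for (int i = 0; i < n; i++) {
                data[i] = ThreadLocalRandom.current().nextLong();
            }
            long sink = 0;
            for (int run = 0; run < 5; run++) { // five timed runs, as in the results
                long start = System.nanoTime();
                long sum = 0;
                for (long v : data) sum += v;   // placeholder workload
                sink ^= sum;                    // keep the result live for the JIT
                System.out.println("Time: " + (System.nanoTime() - start) / 1_000_000 + " ms");
            }
            System.err.println(sink);           // prevent dead-code elimination
        }
    }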

Yeah, there are so many layers to this. I mean, a mandate for employers oversteps what government is allowed to do anyway. But that aside, if it were polio or something and there was one shot that basically ended it, you could see a rational path to a single mandate that got everyone vaccinated (btw I'd be curious to know polio, measles, etc. vaccine rates; I think they are very high). But when "vaccinated" is essentially some political flavor of the week, how is it going to get enforced? Will a third shot get rolled in? And if there are more, or if we switch from mRNA to another variety, do they get rolled in too? Will you need to have the original 2021 shots, even once they've been subsumed into something else, or can you just get the latest thing?

For all the pretend deference to science, we're in the middle of a rapidly unfolding situation. It seems preposterous, even with clear evidence that the vaccines do work (and ignoring government overreach, which is itself absurd and illegal imo), to start mandating something that is not likely to hold 6 months from now, and what, just require that everyone track whatever current orthodoxy says? I have my two Pfizer shots, and if there is a vaccine later that actually cures covid, maybe I'll go back and get it. But I'm giving up on trying to track the flavor of the week and trying to match whatever The New York Times wants me to do; I'll wait until there is actually a reasonable consensus.


Whether or not you agree that there is legal authority for mandates for this vaccine or this pandemic, where does it end? The trajectory we're on is for all people to be monitored by the State for compliance with the vaccine programs.

Israel is already saying 4 shots (and more) will be required [0] to participate in that society. Australia, New Zealand, and Canada are quasi-police states [1] and will probably mandate vaccination+boosters soon.

The writing on the wall is that the 'booster' is being repackaged as a 'series' of vaccines. The health authorities and pharma are going to modify and approve vaccines for endless mutations. Will these be independently tested? I don't expect so, despite their use of the novel mRNA platform. The end game is to treat Covid vaccines like annual (or semi-annual) flu vaccines. Moderna is already packaging both together [2]. Except the flu vaccine is not mandated (yet). There is a LOT of money to be made in vaccine mandates. Think how easy it would be to grow rich if everyone had to use your products, you had no competition, and the government indemnified you against any responsibility for harm. It's hard to see how this stops. The pandemic is a gift for technocratic-authoritarians everywhere.

Here's a prediction. In a few years, if the people don't stand up against this, the pandemic will not be 'over' politically. Nominally 'free' Western governments will have become de facto CCP-style management firms. We will be required to comply with ongoing health directives from the central authorities for vaccination to prevent further restrictions. Our compliance will be monitored via digital passes (or onerous paperwork for the poor). Our employment and freedom of movement will depend on our compliance. Those who disagree and are foolish enough to say something will find themselves censored (for health disinformation) and fined (or imprisoned [3]). Traveling to see grandma without having gotten the latest booster will risk quarantine.

So, if you're pro-mandates, I invite you to come back in a couple of years and ridicule me and this warning if it's wrong (if it hasn't been scrubbed from the internet). I hope you're right! I wonder what you'll say if you're wrong? Maybe post your predictions for posterity.

[0] https://www.timesofisrael.com/virus-czar-calls-to-begin-read... [1] https://www.nytimes.com/2021/08/31/world/australia/new-zeala..., https://www.theguardian.com/australia-news/2021/jun/01/austr... [2] https://www.reuters.com/business/healthcare-pharmaceuticals/...

[3] https://www.theguardian.com/australia-news/2021/jun/01/austr...


Not at all.

Want to develop for Apple platforms?

Depending on which level of the stack, the painless option will be C++, Objective-C or Swift.

On Android it will be a mix of Java (slowly being left behind on purpose), Kotlin or C++.

On Windows, .NET (and in some cases C# or C++/CLI only, due to the way MSIL gets exposed) and C++. You can also keep using C if your view of Windows is stuck at Windows XP-era APIs.

On Fuchsia, it is all about Flutter, C++, Go and Rust.

On ARM Mbed and Arduino, C++.

On microEJ, Java and C.

On Meadow, .NET Core.

On Azure Sphere, C.

On CUDA, C++, Fortran, Python JITs, and anything with a PTX-aware backend.

There are plenty of other platforms I could keep on listing.

Naturally you can try to fit something else, but then it is on you and others to build the ecosystem, work around support issues and lack of IDE tooling.


>If a small but militant group insists in putting everyone at risk by intentionally and systematically violating basic health and safety precautions then the only hope society has is to mitigate this risk.

Who is militant? And they aren't putting "everyone" at risk. They are putting themselves at risk surely, but the risk doesn't really spread to people who have been vaccinated.

>You need to prove you are vaccinated when you travel internationally

I also need permission from the government of the country I'm visiting to come at all. They can add conditions if they want and I can choose accordingly.

>So why yes, obviously yes. Don't you agree? Or do you actually believe that an individual's whims and irresponsible and reckless behavior should simply put everyone else's life at risk just because?

I don't agree. I think people have lost perspective: they have very little idea what the risks of various situations are, can't tell the difference between a risk that could kill millions and a risk that could kill hundreds, and are happy to charge forward trying to prevent any risk, however small, because "doing something" has become a religious mandate.

>I mean, if you really valued freedom then wouldn't you be doing your hardest to ensure that everyone around you should, say, not risk death by an easily preventable disease in spite of everyone except you taking basic precautions?

Freedom is not trying to force everyone around me to do what I determine is best for them.

>do you feel the measles or tetanus shots violate your freedom?

Tetanus basically only spreads through deep puncture wounds with soiled implements. I guess that, as long as I didn't seem out of my mind, I would want my doctor to accept my refusing treatment for a painful and life-threatening disease that could only ever affect me... The question "do you think this shot violates your freedom?" really shows that you don't understand my point at all and will continue arguing past and around whatever I have to say.

I am opposed to de facto licensing of public activities based on a choice that should be mine to make (I did indeed get vaccinated). When there was significant risk and incomplete vaccine access, different rules were acceptable, but only because of the large and temporary risk. Now that that risk is avoidable by anyone in the US who wants to avoid it, continuing to enable and enforce restrictions on everyday activities has gone too far, and normalizing the "papers, please" routine does serious harm to our future freedoms.


The observation is made from the point of view of Swift developers.

The only reason why Swift has reference counting was historical.

The Objective-C GC implementation failed because it was very hard to mix frameworks compiled with and without GC enabled, alongside the usual issues of C memory semantics.

https://developer.apple.com/library/archive/documentation/Co...

Check "Inapplicable Patterns" section.

So Apple made the right design decision: instead of trying to fix tracing GC in such an environment, they did what Microsoft does in COM - they looked at Cocoa's [retain/release] pattern, automated it, and in a marketing swoop sold that solution as ARC.

Swift, as an Objective-C replacement, naturally had to build on top of ARC as a means to keep compatibility with the Objective-C runtime without additional overhead (check RCW/CCW for how the .NET GC deals with COM).

Here is a paper about Swift performance,

http://iacoma.cs.uiuc.edu/iacoma-papers/pact18.pdf

> As shown in the figure, performing RC operations takes on average 42% of the execution time in client programs, and 15% in server programs. The average across all programs can be shown to be 32%. The Swift compiler does implement optimization techniques to reduce the number of RC operations similar to those described in Section 2.2.2. Without them, the overhead would be higher. The RC overhead is lower in server programs than in client programs. This is because server programs spend relatively less time in Swift code and RC operations; they spend relatively more time in runtime functions for networking and I/O written in C++.

It makes technical sense that Swift uses reference counting, as explained above, but it isn't due to performance; that story just sells better than explaining it was due to the C memory model inherited from Objective-C, which, besides the memory corruption problems, doesn't allow for anything better than a conservative garbage collector, with very bad performance.

https://hboehm.info/gc/
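
To make the overhead concrete, here is a toy sketch - hypothetical, not how ARC or the Swift runtime is actually implemented, and written in Java only for consistency with the other snippets in this thread - of the retain/release bookkeeping that ARC automates. Every ownership change costs an atomic counter update, which is exactly the kind of work the paper above measures:

    import java.util.concurrent.atomic.AtomicInteger;

    final class RefCounted<T> {
        private final T value;
        private final Runnable destructor;
        private final AtomicInteger count = new AtomicInteger(1); // creator holds the first reference

        RefCounted(T value, Runnable destructor) {
            this.value = value;
            this.destructor = destructor;
        }

        T get() { return value; }

        RefCounted<T> retain() {   // every retain is an atomic increment...
            count.incrementAndGet();
            return this;
        }

        void release() {           // ...and every release an atomic decrement
            if (count.decrementAndGet() == 0) destructor.run(); // last owner frees the resource
        }
    }

    public class RcDemo {
        public static void main(String[] args) {
            RefCounted<String> s = new RefCounted<>("payload", () -> System.out.println("freed"));
            s.retain();  // a second owner appears
            s.release(); // first owner done; count drops to 1
            s.release(); // last owner done; prints "freed"
        }
    }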



Then so is your diet and exercise regime my business. I also want to control who you spend time with (for your mental health) and what you consume for entertainment - there will be limits on what you can read and watch and how much. Alcohol is of course no longer ever an option, and many injury prone sports are also forbidden. At any point we predict you might commit a crime, we will jail you first to reduce the cost to society.

IMO, it really depends on the team you join, as with most programming projects. Scala can be used in a classic mutable OOP manner, as if you were writing Java in 2005, or, if you heavily buy into pure FP libraries, it can be used basically like it's Haskell, but IMO neither of these approaches embraces the strengths of the language. It's meant to be a true mix of OOP and functional paradigms, not going extreme one way or the other. The object-oriented aspects let you embrace great OOP software architecture patterns like Domain-Driven Design, so that you can really model the business domain clearly and faithfully. Then the functional aspects allow you to implement a "functional core, imperative shell" style that's really easy to reason about, and easy to make concurrent (safely), while also having excellent/safe/descriptive types like Future/Option/Try and ADTs (a small sketch of that style follows at the end of this comment).

It sounds like you've worked with a group of people who want to treat Scala like it's Haskell, but that's a minority community that IMO isn't really embracing the strengths of the language. I've worked on projects that go all-in on libs like Cats and Scalaz, and I agree that they're mostly unnecessary complexity that also obscure the modelling of the business domain. You're going to massively confuse newcomers with all the Applicative/Effect/Monad/Monoid/Functor/etc. talk, for next to no benefit. Actors are a bit of a different story - excellent for the 1% of times where you really need them (lots of mutable state and lots of concurrency), but 99% of the time you don't need them. Some devs are enamoured with the mathematical purity of pure functional programming, without properly considering how hard it is to understand for most other devs. If you let these types take over your company, that’s a cultural problem, not really a language problem.

Odersky describes why he created the language here: https://www.signifytechnology.com/blog/2018/01/why-scala

> ... programmers are increasingly facing new challenges for high-level domain modeling, rapid development, and, more recently, parallelism and concurrency. It seemed to me that a unification of the traditional object-oriented model with functional programming concepts was feasible and highly desirable to address those challenges.

I completely agree, and have yet to find a better language than Scala for the above demands, assuming you really embrace the OOP/functional mix. I've been writing lots of Scala at my day job for the past ~5 years, and it's my favourite language. I've worked on backends in Scala, Java, Go, Python, PHP and Node, and prefer Scala to all of them.

I also agree with Li Haoyi - it's becoming a really solid, stable, reliable language, with improvements focused on the build tools, compiler, and the odd language wart, without significant changes to syntax/style, which is great. It does take a while to learn, and you do have to be careful about what style you program in, but I think if you just embrace the language's OOP/functional mix, and for "core style" mostly stick with the standard library (Future/Try/Option/etc.) vs. going crazy with Cats/Scalaz/Akka/etc. (unless you REALLY need Akka specifically), it's an outstanding language.
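
As a rough illustration of the "functional core, imperative shell" split mentioned above - a hypothetical sketch, in Java rather than Scala only for consistency with the other snippets in this thread - the pure core returns an Optional instead of performing effects, and all I/O stays at the edge:

    import java.util.Optional;

    public class FcisDemo {
        // Functional core: pure, no side effects, trivially testable.
        static Optional<Integer> parseQuantity(String raw) {
            try {
                int n = Integer.parseInt(raw.trim());
                return n > 0 ? Optional.of(n) : Optional.empty();
            } catch (NumberFormatException e) {
                return Optional.empty();
            }
        }

        // Imperative shell: all I/O and effects live at the edge.
        public static void main(String[] args) {
            String input = args.length > 0 ? args[0] : "42";
            parseQuantity(input).ifPresentOrElse(
                    n -> System.out.println("ordering " + n + " units"),
                    () -> System.err.println("invalid quantity: " + input));
        }
    }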


Looks like the big challenge is managing a large LRU cache, which tends to be a difficult problem for GC runtimes. I bet the JVM, with its myriad tunable GC algorithms, would perform better, especially Shenandoah and, of course, the Azul C4.

The JVM world tends to solve this problem by using off-heap caches. See Apache Ignite [0] or Ehcache [1]; a minimal sketch of the off-heap idea follows the links below.

I can't speak for how their Rust cache manages memory, but the thing to be careful of in non-GC runtimes (especially non-copying GC) is memory fragmentation.

It's worth mentioning that the Dgraph folks wrote a better Go cache [2] once they hit the limits of the usual Go caches.

From a purely architectural perspective, I would try to put cacheable material in something like memcache or redis, or one of the many distributed caches out there. But it might not be an option.

It's worth mentioning that Apache Cassandra itself uses an off-heap cache.

[0]: https://ignite.apache.org/arch/durablememory.html [1]: https://www.ehcache.org/documentation/2.8/get-started/storag... [2]: https://blog.dgraph.io/post/introducing-ristretto-high-perf-...
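
For a concrete sense of what "off-heap" means here, a minimal Java sketch (real caches like Ehcache and Ignite layer serialization, indexing, and eviction on top of this idea): a direct ByteBuffer's backing memory lives outside the GC heap, so the collector never has to trace the cached bytes, only the small on-heap handle.

    import java.nio.ByteBuffer;

    public class OffHeapSketch {
        public static void main(String[] args) {
            // The backing memory of a direct buffer is allocated outside the
            // Java heap; only the ByteBuffer handle itself is an on-heap object.
            ByteBuffer cache = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MiB region

            byte[] value = "hello".getBytes();
            cache.putInt(value.length).put(value); // write one length-prefixed entry

            cache.flip();                          // switch from writing to reading
            byte[] out = new byte[cache.getInt()];
            cache.get(out);
            System.out.println(new String(out));   // prints "hello"
        }
    }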


Here is my list, which is biased towards Linux. Almost all the books mentioned here are dated and primarily written for 2.6-based kernels. Many concepts are still applicable, though, and in certain subsystems the code remains as it was, with only minor changes. So, despite being old, they are still good references.

* Design and Implementation of the FreeBSD Operating System: Good and thorough deep dive into the FreeBSD OS. Must have.

* FreeBSD Device Drivers: A Guide for the Intrepid: Didn't quite read it all but looks fine for FreeBSD.

* Mac OS X Internals: A Systems Approach by Amit Singh: It was good back in the day but is now outdated.

* Linux Device Drivers, 3ed: Very dated but still good for grasping Linux device drivers in general. The code examples are a bit silly but good enough. There are some GitHub repos which have updated code for the latest kernels.

* Essential Linux Device Drivers by S. Venkateswaran: It complements the LDD3 book quite nicely. Has some real device examples and is very exhaustive. Must have.

* Linux Kernel Development by Robert Love: Very good for a short/quick intro. Best for preparing before interviews ;)

* Professional Linux Kernel Architecture by Wolfgang Mauerer: Like the other books, it's dated, but some of the explanations about interrupts, PCI, etc. are good. His callgraph approach was very handy in understanding things.

* The Linux Programming Interface by Michael Kerrisk: Not really about the OS but the next closest thing to it - system programming (the real system programming). Must have.

* Understanding the Linux Virtual Memory Manager by Mel Gorman: Like the other books, it's dated, but still one of the best available for getting a handle on memory management under Linux. Must have.

* Understanding the Linux Kernel: Dated but still my go-to book to refresh certain subsystems. Must have.

* Linux Kernel Programming by Michael Beck: Mentioned for historical reasons, as it's otherwise the most outdated book here (2.4-based). Horrible editing and English but heck! I loved it back in the day :)

* Linux Kernel Networking by Rami Rosen: Never read it, but it's quite dated.

* Understanding Linux Network Internals by Christian Benvenuti: A real bible of Linux networking. If nothing else, your jaw drops at the effort the author made to write this book. Dated, but unlike more generic Linux kernel internals, the network stack is still the same at its core. Must have.

* The Linux Kernel Primer: A Top-Down Approach for x86 and PowerPC Architecture: Very dated book but good to read from a PPC perspective. A lot of things have changed since its publication.

* See MIPS Run by Dominic Sweetman: Dated but gives a good idea of MIPS internals and how Linux uses them.

* IA-64 Linux Kernel: Design and Implementation: It's dated not just w.r.t. code but also w.r.t. hardware. Nonetheless, it gives good insight into the IA-64 architecture and Linux from a non-x86 perspective.

* The Definitive Guide to the Xen Hypervisor: This is the only book on virtual machines that is not written just from a user perspective. While the best way to learn VMs is by reading architecture specs and code, this book still satisfied me w.r.t. virtual machine internals.

Every other book on Amazon about the Linux kernel is more or less useless. For a more academic book, "Operating Systems: Three Easy Pieces" is good.

