
Looks good, but I wish there were a practitioner-oriented resource for how to use cryptographic libraries that didn't start by focusing on the math. I don't need to know the intricacies of RSA; I need to know how to securely compose it with other primitives to engineer a system with the desired properties.


I wanted to have a better understanding of crypto, simply to feel more confident in writing programs that use existing protocols, and started 'Real-World Cryptography' by David Wong. I'm about 3/4 through, and I've been happy with it. It is light on math, but does go into it a little bit - it seems designed for the kind of person who isn't comfortable using something until they understand how it works under-the-hood, but doesn't actually need to do any under-the-hood work.

It has taught me enough that I think I could compose a protocol out of primitives that on the surface appears to do what I've intended it to do. It has also taught me that there are many subtleties that can completely break a protocol, combining primitives can lead to unexpected weaknesses, and many people who understand crypto far better than I ever will have created broken protocols out of secure primitives.
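To give one concrete flavor of the kind of subtlety that can break an otherwise sound composition (my illustration, not from the book): verifying a MAC with ordinary `==` leaks timing information, while the constant-time comparison the theory calls for does not. A stdlib-only sketch, with a hypothetical shared key:

```python
import hashlib
import hmac

KEY = b"example-shared-key"  # hypothetical; in practice a random 32-byte secret

def sign(msg: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over msg."""
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def verify_broken(msg: bytes, tag: bytes) -> bool:
    # Broken: `==` can short-circuit on the first differing byte,
    # leaking timing information an attacker may exploit to forge tags.
    return sign(msg) == tag

def verify(msg: bytes, tag: bytes) -> bool:
    # Correct: compare_digest runs in time independent of where bytes differ.
    return hmac.compare_digest(sign(msg), tag)

tag = sign(b"hello")
assert verify(b"hello", tag)
assert not verify(b"hello", b"\x00" * 32)
```

Both functions return the same booleans on the same inputs; the difference is invisible in unit tests and only shows up as a side channel, which is exactly the sort of thing that trips up people far more experienced than me.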

I'm not sure it's the book you're looking for, but I think it's a good book if you want to understand crypto, but not design your own.


I want to put a word in here for being cautious about the capabilities you can achieve in novel systems --- software developers are often working with multiple whole sieverts of novelty without realizing it --- without having a lot of the boring theory stuff nailed down.

If you're using (say) libsodium to do exactly the kind of thing 100 other developers have successfully used libsodium to do in the past, you're fine. But it takes a deceptively small and subtle set of steps to end up synthesizing a new cryptosystem (see: attempts to build secure messaging systems out of libsodium primitives) without realizing that's what you're doing.
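One classic example of how small that step can be (my illustration, using a toy stdlib-only "keystream" rather than a real cipher): reusing a nonce with any stream-cipher-style construction, including libsodium's secretbox, hands an eavesdropper the XOR of the two plaintexts without any key at all.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream for illustration ONLY -- not a real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 32, b"n" * 12
p1, p2 = b"attack at dawn", b"defend at dusk"
c1 = xor(p1, keystream(key, nonce, len(p1)))
c2 = xor(p2, keystream(key, nonce, len(p2)))  # nonce reused!

# With no knowledge of the key, an eavesdropper recovers p1 XOR p2,
# because the identical keystream cancels out:
assert xor(c1, c2) == xor(p1, p2)
```

Nothing in the API stops you from doing this; it's a property you only know to avoid if you have the boring theory nailed down.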

Learn a bunch of the theory! It's important.


> sieverts of novelty

Yikes!

Is this clever metaphor original with you?


I'm a little proud of it.


Google "Cryptographic Right Answers". There are a couple of different posts, but they agree on most of the things you would look for.

Ex.: https://gist.github.com/tqbf/be58d2d39690c3b366ad or https://www.latacora.com/blog/2018/04/03/cryptographic-right...


This is great. Finding NaCl (libsodium) has been a godsend, specifically the JS lib.

1 - https://nacl.cr.yp.to/

2 - https://github.com/dchest/tweetnacl-js


Perhaps what you need is something like "Cryptography Engineering: Design Principles and Practical Applications"

Book by Bruce Schneier, Niels Ferguson, and Tadayoshi Kohno.


Pretty outdated. For a while, it was the best book available, but in 2024 it's probably harmful.

Today, I'd read Serious Cryptography or Real World Cryptography.


"Secure composition" is definitely covered in the course. It doesn't talk only about the details of RSA (though there are some lectures about that), but also about what security properties different primitives satisfy, how to compose them safely, etc.

A large part of modern cryptography is figuring out secure composition.


I really like the hearty amount of configurability they make available thru the control panels. They're not afraid of giving you a lot of options, yet it's organized well, easy to use, and the defaults are sensible.


What you have right now is a situation where airlines compete stiffly on sticker prices and then find ways to screw you on the backend. You save money if you're lucky, but it's because you're getting a hidden subsidy from people whose flights were cancelled.


UK becomes increasingly authoritarian every year. What a tragic and ignoble development for the nation that gave us the Manga Carta.


This is true the world over. For example, women in many states of America are fighting for their right to abortion.

When the world is prosperous, people tend to vote for more liberal policies, and when the economy is struggling, people tend to vote more conservatively. So we see more authoritarian policies like these.


Is this instance liberal (accepting of change) or conservative (hesitant of change)?

On one hand it seems liberal, wanting to see a change that gives individuals who might be subject of deepfakes more autonomy.

On the other hand, AI is changing the world, and this seems like an effort to hold the things around AI in place so as not to see them change.

It could go either way. So could the state of the economy right now, for that matter.


> Is this instance liberal (accepting of change) or conservative (hesitant of change)?

I don’t think this framing is very useful. Modern conservatives don’t have problems with change, they just want that change to go their way. They are perfectly able to innovate and push new doctrines or legal theories when it suits them. And on the other hand they do not care about preserving anything besides an idealised vision of “traditional values”, which are not as old as they seem.

Looking at the actual policies and arguments, the spectrum is between oligarchy (or dictatorship for the most extremes) and democracy. It makes much more sense that way.

> On one hand it seems liberal, wanting to see a change that gives individuals who might be subject of deepfakes more autonomy.

It’s also a significant limitation of personal freedom in the case of a victimless crime, which is anything but liberal.


Who you have described are reactionaries, not conservatives.

In fairness, some political parties who subscribe to reactionary politics operate under a capital-C Conservative banner to try and muddy the waters, but to consider them conservative because of that is like considering the Democratic People's Republic of Korea democratic.


> Who you have described are reactionaries, not conservatives.

Indeed, that’s exactly what they are. But they prefer calling themselves conservatives. It’s better from a marketing perspective.

To be fair, there are not many politicians that could be called old-fashioned conservatives in the US. Most of them are a subset of the Republican establishment, and they were mostly wiped out in the last couple of years.

> to consider them conservative because of that is like considering the Democratic People's Republic of Korea democratic.

Definitely! Unfortunately, that’s how the semantics went, though. In the same way as what most people call “liberals” now are very different from who liberals used to be (they used to be all about capitalism and free enterprise, for example).


The Magna Carta only created rights for the oligarchs of the time, many of whom have descendants today with the same inherited wealth. You don't have to dig that deeply into UK society to find the old aristocracy alive and well, still above others and usually above the law.


That's the Magna Carta, OP was talking about the Manga Carta which is presumably a comic book version though I've never come across it.

Also, what you say is very true. The Magna Carta was a hastily cobbled-together document to protect the power balances of the time and keep the serfs in their place.


The Magna Carta is celebrated because it was the first written agreement ever that actually did a decent job of protecting the rights of any group.


There were things like the old Roman laws that defined the rights, privileges, and obligations of Roman citizens. Or the various constitutions of the old Greek poleis, which similarly guaranteed the freedom of their citizens. Or even the Code of Hammurabi, if you squint a bit.

“A decent job” is subject to interpretation, but I find it very difficult to argue that the Magna Carta was the first, even if it was very significant.


> MANGA Carta

That's japan.


Killing your pipeline for innovation and talent development doesn't make you secure, it makes you fall behind. The Soviet Union found this out the hard way when they made a policy decision to steal chip technology instead of investing in their own people. They were outpaced and the world came to use chips, networks, and software designed by Americans.


That's the exact opposite of what I'm saying we do. We need to invest in engineers we can trust, and cut off those we can't.


Who's we? Americans? Sure, that's fine for you, but Americans aren't exactly trustworthy outside of the US either, and I say that as someone who's usually pro-US. This sort of mentality just shows a lack of understanding of how most of the world sees the US. Even in places like, say, France, the US is seen as an ally, but a very untrustworthy one. Especially since, out of all the confirmed backdoors found up until now, most were actually US-made.

If this backdoor turns out to be linked to the US, what would your proposal even solve?


"We" doesn't have to be the U.S. This is a false dichotomy that I see people in this thread keep pushing. I suspect in bad faith, by the people that want to insert backdoors. As a baseline, we could keep the contributors to NATO and friends. If a programmer is caught backdooring, they can be charged and extradited to and from whatever country.


If it's just an extradition issue, the US has extradition treaties with 116 countries. You'd still have to 1) ensure that user is who they say they are (an ID?) and 2) they are reliable and 3) no one has compromised their accounts.

1) and 3) (and, to an extent, 2)) are routinely done, to some degree, by your average security-conscious employer. Your employer knows who you are and has probably put some thought into how to avoid your accounts getting hacked.

But what is reliability? Could be anything from "this dude has no outstanding warrants" to "this dude has been extensively investigated by a law enforcement agency with enough resources to dig into their life, finances, friends and family, habits, and so on".

I might be willing to go through these hoops for an actual, "real world" job, but submitting myself to months of investigation just to be able to commit into a Github repository seems excessive.

Also, people change, so you would need to keep track of everyone all the time, in case someone gets blackmailed or otherwise persuaded to do bad things. And what happens if you find out someone is a double agent? Rolling back years of commits can be incredibly hard.


Getting a TS equivalent is exactly what helps minimize the chances that someone is compromised. Ideally, such an investigation would be transferable between jobs/projects, like a normal TS clearance is. If someone is caught, yes, rolling back years isn't practical, but we probably ought to look very closely at what they've done, as is probably being done with xz.


I guess it depends on the ultimate goal.

If the ultimate goal is to avoid backdoors in critical infrastructures (think government systems, financial sector, transportation,...) you could force those organizations to use forks managed by an entity like CISA, NIST or whatever.

If the ultimate goal is to avoid backdoors in random systems (i.e. for "opportunistic attacks"), you have to keep in mind random people and non-critical companies can and will install unknown OSS projects as well as unknown proprietary stuff, known but unmaintained proprietary stuff (think Windows XP), self-maintained code, and so on. Enforcing TS clearances on OSS projects would not significantly mitigate that risk, IMHO.

Not to mention that, as we now know, allies spy and backdoor allies (or at least they try)... so an international alliance doesn't mean intelligence agencies won't try to backdoor systems owned by other countries, even if they are "allies".


The core systems of Linux should be secured, regardless of who is using it. We don't need every single open source project to be secured. It's not okay to me that SSH is potentially vulnerable, just because it's my personal machine. As for allies spying on each other, that certainly happens, but is a lot harder to do without significant consequences. It will be even harder if we make sure that every commit is tied to a real person that can face real consequences.


The "core systems of Linux" include the Linux kernel, openssh, xz and similar libraries, coreutils, openssl, systemd, dns and ntp clients, possibly curl and wget (what if a GET on a remote system leaks data?),... which are usually separate projects.

The most practical way to establish some uniform governance over how people use those tools would involve a new OS distribution, kinda like Debian, Fedora, Slackware,... but managed by NIST or equivalent, which takes whatever they want from upstream and enrich it with other features.

But it doesn't stop here. What about browsers (think about how browsers protect us from XSS)? What about glibc, major interpreters and compilers? How do you deal with random Chrome or VS Code extensions? Not to mention "smart devices"...

Cybersecurity is not just about backdoors, it is also about patching software, avoiding data leaks or misconfigurations, proper password management, network security and much more.

Relying on trusted, TS cleared personnel for OS development doesn't prevent companies from using 5-years old distros or choosing predictable passwords or exposing critical servers to the Internet.

As the saying goes, security is not a product, it's a mindset.


We wouldn't have to change the structure of the project to ensure that everyone is trustworthy.

As for applications beyond the core system, that would fall on the individual organizations to weigh the risks. Most places already have a fairly limited stack and do not let you install whatever you want. But given that the core system isn't optional in most cases, it needs extra care. That's putting aside the fact that most projects are worked on by big corps that do go after rogue employees. Still, I would prefer if some of the bigger projects were more secure as well.

Your "mindset" is basically allowing bad code into the Kernel and hoping that it gets caught.


>Your "mindset" is basically allowing bad code into the Kernel and hoping that it gets caught.

Not at all. I'm talking about running more and more rigorous security tests because you have to catch vulnerabilities, 99% of which are probably introduced accidentally by an otherwise good, reliable developer.

This can be done in multiple ways. A downstream distribution which adds its own layers of security tests and doesn't blindly accept upstream commits. An informal standard on open source projects, kinda like all those Github projects with coverage tests shown on the main repo page. A more formal standard, forcing some critical companies to only adopt projects with a standardized set of security tests and with a sufficiently high score. All these approaches focus on the content, not on the authors, since you can have a totally well-meaning developer introducing critical vulnerabilities (not the case here, apparently, but it happens all the time).

On top of that, however, you should also invest in training, awareness, and other "soft" issues that are actually crucial in order to actually improve cybersecurity. Using the most battle-tested operating systems and kernels is not enough if someone puts sensitive data on an open S3 bucket, or only patches their systems once a decade, or uses admin/admin on an Internet-facing website.


Strange how you, kind stranger, see it as disgusting but all the participants who forwarded the video apparently did not. I know I would share such a video with my coworkers (at least, over the shoulder).

That's a mismatch in moral expectations. One of many in our increasingly divided society.

I don't know the best way to lower/reduce such mismatches, but doing it thru the courts reactively seems arbitrary and unfair. It's hard to follow rules that haven't been written.


We'll never be post-scarcity because humans can never reach a state of sustained satisfaction.


You abstain from coffee, chocolate, and fish too, I trust? Otherwise you're undoubtedly supporting trace amounts of slavery and child labor. I bet the device you're reading this with was also made with such (and if it wasn't, then it still supports a genocidal regime).

Get real. Scraped data is going to have problems. The expectation should be that reasonable measures are taken to filter proactively and that reactive measures are taken to remove illicit content found after that.


A whiteboard is complete. Every other way of diagramming software is deficient. Change my mind. ;-)


Great! So, do you bring your whiteboard to lecture to turn in, or take a picture of it, or just schedule time with your professor to whiteboard in front of them?

All I'm arguing for here is that UML serves the same purpose as those online homework apps. Correctly formatting your calculus homework to be accepted by that interface is as unrelated to calculus as UML mastery is to effective software design, but it resolves some of the same logistical challenges, while introducing others.


A whiteboard is just a medium to draw on. UML is a standard that says how to express certain ideas as symbols that can be drawn.

It's not clear to me what your argument is. Is it that we should use whiteboards to draw UML instead of special UML software? If so, be prepared to take much longer to draw the diagram.

Or do you mean UML is deficient compared to freely drawing symbols on a whiteboard without a standard? If so, be prepared for nobody to completely understand your diagram without explanation.


UML is just the common parlance so that we all understand what’s represented.

No need for a specific tool - unless you’re doing PowerPoint slides. Visio is good enough in that case if you have Windows.


It's how Blazor Server apps are architected. Blazor WebAssembly apps don't maintain client state in the server and can be load-balanced like normal.

