1k sounds like a discretionary amount that would quite neatly fit within a manager's budget for external consultants and so on, which is probably what they'll say you are when accounting for it. They're trying to fly under the radar, and have likely kept this knowledge to only a few people.
The organisation will never change their ways unless they get bad publicity or have to spend so much money that their C-suite gets involved.
I would be wary of trying to negotiate the payment upwards in case you are accused of extortion; just explain you'll disclose publicly in 30 days, which is more than enough time to fix what I assume is a web app backend bug. You don't want them dealing with this kind of issue as a feature to be implemented when there's space in one of the future sprints.
They may try at this point to negotiate the payment upwards, which is a matter for you and your conscience, but I would say that if you don't get something close to 100k, it's likely to be swept under the rug internally and they'll never learn from their mistakes.
I have hidden recovery information in a few places on the internet - someone stumbling across it would not know what they are looking at, or what it's for. For example, you can hide the TOTP secret for an authenticator app, but it's useless unless you know what account and service it's for, and the associated master password.
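To illustrate why the secret alone is useless: the TOTP algorithm itself is tiny, and the secret is just a shared key. A minimal RFC 6238 sketch in Python (the base32 secret in the usage line is the RFC test key, not a real one):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the 30-second time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if timestamp is None else timestamp
    counter = struct.pack(">Q", int(now // step))
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC test key "12345678901234567890", base32-encoded; timestamp 59 is an
# RFC 6238 test vector.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59))  # → 287082
```

Knowing this string gets an attacker nothing without also knowing which service, which account, and the master password it pairs with.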
So to mitigate lockout risk, you keep multiple Yubikeys, store recovery codes in multiple physical locations including presumably a fire-proof safe bolted into your home (at your expense), and use obscurity to store the TOTP secret in random places on the internet, presumably relying on external services or a self-hosted solution, which are themselves dependent on regular credit card payments going through.
Okay, I grant that you've reasonably mitigated the lockout risk. But I don't want to do any of this, and is it really reasonable to expect the everyday person to understand or implement all this? What happens in practice is that many users will not realize anything is wrong until they get locked out with no recourse.
This makes it hard for me to recommend Bitwarden to my friends who use typical insecure practices like password reuse or post-it notes.
Security has always been either easy and weak, or difficult and strong. That will never change, so you will always have the option of weak security if you don't want to jump through the hoops for the peace of mind.
> my friends who use typical insecure practices like password reuse or post-it notes
IMO people who do those things will never change. It's like the environment: everybody knows what they should be doing, but no one cares enough to do it.
So Bitwarden should offer 2FA for users who want the additional security, but never force users to enable it. Forcing it would be like refusing to save "password" as a password because it is insecure.
If you have literally no other option than SMS 2FA because of bad support from websites, maybe. Otherwise it's probably one of the worst options (though I suppose unlike using your main number at least it's harder to discover the number for the 2FA phone to attack it with social engineering).
Email is not a good second authentication factor anyway. I have six U2F tokens on my high-priority digital accounts, as well as printed recovery codes in several places. Only one or two tokens ever actually travel with me; the others are kept safely in different locations.
Given that most people are cracked wide open if their password manager is compromised, I do feel it's sensible for a password manager to insist on 2FA, but the email chicken-and-egg problem is a concern for those migrating; hopefully they backed up their recovery codes.
Email can be a perfectly good second authentication factor.
It depends on the asset you’re protecting and your threat model.
I have quite a few accounts whose value does not cross a threshold where I care about the risks of email… and my workflows would be enhanced dramatically if I could use it as a second factor.
The reason I can’t is not because of security or anything at all to benefit me, the user. It is because the services themselves need to throw sand in the gears of the bad actors abusing their services.
My email address can't be SIM swapped, my emails aren't transmitted using weak 90s encryption algorithms over the air (and via dubious, largely unauthenticated 80s protocols on the wire), and my mailbox is itself guarded by 2FA.
You can use the PowerShell command Add-MpPreference -ExclusionPath [0] and ship a script with your app if you want. I do the same for Terraform providers: whenever a new version comes out, for a time the process can be randomly killed, as I suppose a process that spawns a child process that starts talking to lots of endpoints looks somewhat suspicious.
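A sketch of what such a shipped script might look like, here as a Python installer helper. The cmdlet `Add-MpPreference` and its `-ExclusionPath` parameter are the real Windows Defender interface; the helper functions themselves are illustrative, and the command only takes effect from an elevated shell:

```python
import subprocess

def defender_exclusion_command(path):
    """Build the PowerShell invocation that excludes `path` from Defender scans."""
    return [
        "powershell", "-NoProfile", "-Command",
        f"Add-MpPreference -ExclusionPath '{path}'",
    ]

def add_defender_exclusion(path):
    # Raises CalledProcessError if PowerShell rejects the command
    # (e.g. when not running elevated).
    subprocess.run(defender_exclusion_command(path), check=True)
```

Splitting command construction from execution keeps the exclusion path auditable and easy to log before it is applied.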
I'd make it a pluggable middleware with a document on how to implement your own and provide a reference configuration that uses something like Vouch [0] which will redirect the user to another identity provider.
You could also provide another implementation that implements Cloudflare's zero trust authentication [1].
In other words, I don't think I'd want to actually take responsibility for authentication these days and use an authenticating proxy. The less security infrastructure you have, the less there is to go out of date.
You can always start with this approach and then implement your own built-in user directory later.
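From the app's side, the authenticating-proxy pattern can be sketched in a few lines. This assumes a Python WSGI app and vouch-proxy's default `X-Vouch-User` header (adjust for your proxy, and make sure the proxy strips that header from incoming client requests so it can't be spoofed):

```python
def require_proxy_auth(app, header="HTTP_X_VOUCH_USER"):
    """WSGI middleware: reject any request missing the proxy-set identity header."""
    def wrapped(environ, start_response):
        user = environ.get(header)
        if not user:
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"authentication required\n"]
        environ["authenticated.user"] = user  # downstream handlers read this
        return app(environ, start_response)
    return wrapped
```

The app never sees passwords or tokens; swapping identity providers is then purely a proxy-configuration change.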
One of the techniques a voice assistant uses to distinguish its own voice from background sound relies on the Fourier transform: the device knows exactly what it is playing, so it can subtract its own signal from what the microphone hears (acoustic echo cancellation). I expect that the state of the art in this area also includes other techniques and research.
If you've used one, you might know that you can easily talk to a smart speaker even when it is playing very loud music, it's the same idea.
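A toy sketch of that "subtract what you know you're playing" idea in Python. This uses a single least-squares gain for the whole signal; real devices adapt a filter per frequency band (which is where the Fourier transform comes in), but the principle is the same:

```python
def cancel_playback(mic, playback):
    """Remove the device's own (known) playback signal from the mic signal.

    Toy version: estimate one leakage gain by least squares, then subtract.
    """
    gain = sum(m * p for m, p in zip(mic, playback)) / sum(p * p for p in playback)
    return [m - gain * p for m, p in zip(mic, playback)]

playback = [1.0, -1.0, 1.0, -1.0]          # what the speaker is playing
voice = [1.0, 1.0, 1.0, 1.0]               # what the user is saying
mic = [0.3 * p + v for p, v in zip(playback, voice)]  # what the mic hears
print(cancel_playback(mic, playback))      # → recovers the voice signal
```

Because the playback signal is known exactly, even very loud music can be removed almost completely, leaving the voice command intact.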
Companies can quite happily hold two opposing viewpoints when it suits them. Apple's products usually have some kind of pleasing consistency, but that doesn't mean their corporate dealings have to be consistent too.
In a similar vein, a startup will be very happy to talk about how valuable it is, except when it comes to talking to tax authorities, whereupon suddenly their shares are borderline worthless.
Eh, this is at least a little different. Startups talk themselves up to investors where they need to convince the investors that they will be really valuable at some point in the future. This is compared to tax authorities who are only concerned about current value, which is often essentially zero when it comes to startups.
This is relevant to a discussion I am having at work right now. I am not a fan of using a text-templating language to generate configuration, especially for a whitespace-sensitive format like YAML.
I would rather use Terraform's Kubernetes or kubectl provider for this. Are there any pros or cons I should consider?
I think one of the key things I like about it is that Terraform will show me what it plans to change, whereas Helm doesn't (last time I checked).
The Kubernetes and kubectl providers work, but they're not the nicest way of making changes: slow, quite clunky, and not particularly intuitive. If you're just getting started and you already know Terraform, they're OK, and they're useful for bootstrapping GitOps tools like Argo CD or Flux.
Helm diff will show you a similar diff to Terraform's. Running Helmfile in CD isn't a bad move: it's really simple, and it's a pattern that is easy to grok by any engineer. I think this is still a valid approach in a simple setup; it's what some people call "CD OPS". It's a push model instead of pull, and there are downsides, but it's not the end of the world.
Ultimately, at scale, I think GitOps tooling like Flux and Argo CD offers some of the nicest patterns, especially Flux's support for OCI artifacts as a source of truth. However, you will then venture into the realm of Kustomize and much more complex tooling and concepts, which is not always worth doing.
Cons: the provider tries to manage its own state, as Terraform normally does. This makes it slow to diff, and the state often gets out of sync with what is really in k8s. When it does, it fails to load the manifest during the diff phase, so you can't apply, even if all you want is to overwrite.
The diffs are very noisy to read because they show every possible attribute, even those you didn't write.
The ready-made resources can sometimes be a bit behind on versions, but you also have a raw manifest resource as an escape hatch if you depend on the bleeding edge.
Pros: the templating capabilities are fantastic, because it leverages Terraform. Bye bye string templates. This also makes it easy to use values from other data sources.
YAML is just some data, like JSON or CSV. Ask yourself whether you'd use a third-party JSON templating or CSV templating tool, or whether you'd use your shop's language of choice and write a program to spit out the generated data.
You can also save yourself a step by just spitting out JSON, which is valid YAML.
Every time I hear someone suggest such a thing, I remind them that you now have two systems that each believe they own the state of the world: .tfstate and etcd. And let me assure you that no matter how much our dumbass TF friend thinks it knows what's going on, etcd wins hands down every time.
That's why I strongly suggest that anyone who is a "whole TF shop" go the operator route, because trying any lower-level management is the road to ruin.
Terraform wants to be the only thing that owns a K8s object, but in reality you have a dozen things that want to write back to this attribute or overwrite objects in other places, and you're constantly fighting with TF about this or that triviality.