It's a civil proceeding, not a criminal one, so he would not be incriminating himself.
He could argue that by answering he would be admitting crimes and opening himself to criminal liability. But there's a possibility they give him immunity, and then that route is taken away.
IANAL either, but I'm not sure anyone involved in the civil case would have the power or authority to grant criminal immunity (perhaps up to and including the judge; at least where I am, civil judges do not handle criminal cases - there is no overlap).
It sure would be nice if this standard of conduct in court were also upheld for the US federal officials who refuse to answer or straight up bald-faced lie in court. But nah, it only ever happens to normal people.
To be held in contempt indefinitely you must "hold the keys to the jail cell," meaning you can leave at any time if you simply comply with the court's order.
Having a visual builder tool in an IDE like Delphi or Visual Basic or any of the others.
They ship with an existing library of components, you drag and drop them onto a blank canvas, move them around, live preview how they’ll change at different screen sizes, etc… then switch to the code to wire up all the event handlers etc.
All the iteration on design happens before you start compiling, let alone running.
Before I drop 5 figures on a single server, I'd like to have some confidence in the performance numbers I'm likely to see. I'd expect folks experienced with on-prem to have a good intuition about this - after a decade of cloud-only work, I don't.
Also, cloud networking offers a bunch of really nice primitives that I'm not sure how I'd replicate on-prem.
I've estimated our IT workload would roughly double if we were to add physically racking machines, replacing failed disks, monitoring backups/SMART errors etc. That's... not cheap in staff time.
Moving things on-prem starts making financial sense around the point your cloud bills hit the cost of one engineer's salary.
> I've estimated our IT workload would roughly double if we were to add physically racking machines, replacing failed disks, monitoring backups/SMART errors etc.
That's why nowadays one would use a managed colocation service, not host a rack in the office basement.
IAM comes to mind, with fine-grained control over everything.
S3 has excellent legal-hold and audit settings for data, as well as automatic data-retention policies.
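If anyone's curious, the retention piece is S3 Object Lock. A minimal boto3 sketch (the bucket name is a placeholder, and Object Lock has to be switched on when the bucket is created):

    import boto3

    s3 = boto3.client("s3")

    # Placeholder bucket; Object Lock must be enabled at bucket creation.
    # COMPLIANCE mode means nobody - not even root - can delete before expiry.
    s3.put_object_lock_configuration(
        Bucket="audit-logs-example",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
        },
    )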
KMS is a very secure and well-designed service. I dare you to find an equivalent on-prem solution that offers as much security.
And then there's the whole DR story. Failing over to another AWS region is largely trivial if you set it up correctly - on-prem DR is typically custom to each organization, so you need to train new staff on your organization's workflows. Whereas in AWS, Route53 fail-over routing (for example) is the same across every organization. This reduces cost in training and hiring.
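To make the Route53 point concrete, a failover record via boto3 looks roughly like this (zone ID, IP, and health-check ID are all placeholders; a matching SECONDARY record in the other region completes the pair):

    import boto3

    r53 = boto3.client("route53")

    # Served while its health check passes; Route53 flips to the record
    # marked SECONDARY when it fails. All identifiers are placeholders.
    r53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-us-east-1",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "192.0.2.1"}],
                    "HealthCheckId": "abcdef00-0000-0000-0000-000000000000",
                },
            }],
        },
    )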
I've worked at many enterprises that have done and do these very things. Some for fixed workloads at scale, some for data creation/use locality issues, some for performance. I think there is about a 15 year knowledge gap in on-prem competence and what the newest shiniest is on prem for some people. Yes, some of the vendors and gear are VERY bad, but not all, and there's always eBPF :)
I would probably just build the infra in crossplane which standardizes a lot of features across the board and gives developers a set of APIs to use / dashboard against. Different deployments and orgs have different needs and desire different features though.
I mean not just anyone, but it's far less complicated than dealing with arcane iptables commands. And yet far more powerful, being able to just say "instances like this can talk to instances like this in these particular ways, reject everything else". You don't need subnet rules or whatever; it's all about the identity of the actual things.
Meanwhile lots of enterprise firewalls barely even have a concept of "zones". It's practically not even a comparison for most deployments. Maybe with extremely fancy firewall stacks and $MAX_INT service contracts one can do something similar. But I guess on-prem stuff is often less ephemeral, so there's slightly less need.
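In AWS terms, "app instances can talk to db instances on 5432" is a single security-group rule that references another group instead of any subnet or CIDR. A rough boto3 sketch (both group IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Allow Postgres into the "db" group from anything in the "app" group.
    # Identity-based: no CIDRs or subnets; group membership is the rule.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0db0000000example",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "UserIdGroupPairs": [{"GroupId": "sg-0app000000example"}],
        }],
    )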
I could type your arcane iptables commands for a couple hundred an hour. That stuff is easy compared to some software development tasks. I have sometimes struggled, but I've always found a solution after a few hours max.
There are standards but actually designing a sane network architecture, buying all of the correct network hardware, and configuring all of the software to properly use that hardware is hard. At my company we have a team of about 20 people whose job it is to just design, install, and run the network.
I switched to my own domain ages ago; it only took 2-3 years to stop getting relevant mail to the old one (I put a forwarding rule in place and just used the new one for everything).
Imported all my past mail on day one; forwarding meant I had only one inbox, and I only sent mail from the new domain. A few gentle “please stop using my old address” conversations with family.
If they couldn't borrow $100, or get $100 from any other investor, that just puts you in the position of being an investor, and even then the difference between bradfa's version and mine is simply when you became an investor, not that you became one.
Again, this is not a cheat code: if you sell $80 of cost for $100 of stock, the stock you now own can go up or down, and if you overvalued it then down is the more likely direction.
The primary cheat code here would actually seem to be (a) getting preferential access to Nvidia's production through these deals and (b) creating a paper story of an increasing OpenAI private valuation.
I did a similar thing with a regular backlit computer screen.
It automatically shuts off after 30 seconds of inactivity.
I added a $3 webcam and use OpenCV to detect motion. If three consecutive frames (sampled 0.5s apart) are each sufficiently different from the previous one, it attaches a virtual USB mouse, then moves it one pixel.
This wakes up the display whenever you walk past, then puts it back to sleep again when you stop moving.
The motion-detection pipeline uses less than 0.3% CPU on an Intel N100 (6 W TDP).
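The core loop is just frame differencing. A minimal sketch of the idea (the wake side is stubbed out, since the virtual-mouse trick is OS-specific - uinput or a USB gadget both work):

    import time
    import cv2

    DIFF_THRESHOLD = 25        # per-pixel intensity change that counts as "changed"
    CHANGED_FRACTION = 0.01    # fraction of pixels that must change per frame
    CONSECUTIVE_FRAMES = 3     # motion frames in a row needed to wake the display
    SAMPLE_INTERVAL = 0.5      # seconds between samples

    def wake_display():
        """Stub: attach a virtual USB mouse and nudge it one pixel."""
        pass

    cap = cv2.VideoCapture(0)
    prev = None
    hits = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (21, 21), 0)  # suppress sensor noise
        if prev is not None:
            diff = cv2.absdiff(prev, gray)
            mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)[1]
            if cv2.countNonZero(mask) > CHANGED_FRACTION * mask.size:
                hits += 1
                if hits >= CONSECUTIVE_FRAMES:
                    wake_display()
                    hits = 0
            else:
                hits = 0
        prev = gray
        time.sleep(SAMPLE_INTERVAL)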
> The MR60BHA2 is a 60GHz wave sensor that detects breathing and heartbeat patterns. Using its radar technology, it can monitor vital signs without direct contact, even through materials like clothing or bedding. You can use it for sleep monitoring, health assessments, and presence detection.
This is kind of crazy, I had no idea this was a thing. And here I have PIR sensors all over the place and hacks around those; that definitely sounds much better. Besides the higher cost and shorter range, are there any drawbacks to using it for motion sensing?
Section 230 is an obvious place to say “if you decide something is relevant to the user (based on criteria they have not explicitly expressed to you), then you are a publisher of that material and are therefore not a protected carriage service.”
SERIALIZABLE is really quite hard to retrofit to existing apps; deadlocks, livelocks, and “it’s slow” show up all over the place when you switch it on.
Definitely recommend starting new codebases with it enabled everywhere.
Do you have examples of deadlocks/livelocks you've encountered using SERIALIZABLE? My understanding was that the transaction will fail on conflict (and should then be retried by the application - wrapping existing logic in a retry loop can usually be done without _too_ much effort)...
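For example, with Postgres + psycopg2 the wrapper is only a few lines (a sketch, assuming a `txn_fn` that does all its work through the cursor it's handed):

    import time
    import psycopg2
    from psycopg2 import errors

    def run_serializable(conn, txn_fn, max_attempts=5):
        """Run txn_fn(cursor) in a SERIALIZABLE transaction, retrying on conflict."""
        for attempt in range(max_attempts):
            try:
                with conn:  # commits on success, rolls back on exception
                    with conn.cursor() as cur:
                        cur.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
                        return txn_fn(cur)
            except errors.SerializationFailure:   # SQLSTATE 40001
                time.sleep(0.01 * 2 ** attempt)   # back off before retrying
        raise RuntimeError("transaction kept conflicting after retries")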
I guess I'd say -- I think you're right that you shouldn't (ideally) be able to trigger true deadlocks/livelocks with just serializable transactions + an OLTP DBMS.
That doesn't mean it won't happen, of course. The people who write databases are just programmers, too. And you can certainly imagine a situation where you get two (or more) "ad-hoc" transactions that can't necessarily progress when serializable but can with read committed (ad-hoc in the sense of the paper here: https://cacm.acm.org/research-highlights/technical-perspecti...).
I’m not sure they were _introduced_ by switching to serialisable, but it meant some processes started taking long enough that the existing possibilities for deadlocks became frequent instead of extremely rare.
Haven’t kept history from the bug tracker back that far, but we definitely hit some pretty awful issues in prod trying to solve race issues with “serialisable”. Big older codebases end up with surprising data access patterns.
Accelerating at 1g lets you get to another galaxy within a single human lifetime (although, for a sufficiently distant galaxy, Earth will have been swallowed by the Sun by the time you arrive). Relativity is pretty counterintuitive.
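The numbers, from the standard relativistic-rocket formulas (constant 1g, flip-and-burn at the midpoint, fuel conveniently ignored):

    import math

    C = 1.0    # speed of light, in ly/yr
    G = 1.032  # 9.81 m/s^2 expressed in ly/yr^2

    def one_g_trip(distance_ly):
        """Accelerate at 1g to the midpoint, decelerate at 1g the rest of the way."""
        x = G * (distance_ly / 2) / C**2
        tau = 2 * (C / G) * math.acosh(1 + x)        # ship (proper) time
        t = 2 * (C / G) * math.sqrt((1 + x)**2 - 1)  # Earth time
        return tau, t

    for d in (2.5e6, 5e9):  # Andromeda; a galaxy ~5 billion ly away
        tau, t = one_g_trip(d)
        print(f"{d:.1e} ly: ship {tau:.1f} yr, Earth {t:.2e} yr")

    # 2.5e+06 ly: ship 28.6 yr, Earth 2.50e+06 yr
    # 5.0e+09 ly: ship 43.3 yr, Earth 5.00e+09 yr (~the Sun's remaining lifetime)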
The ongoing refusal to answer questions under oath is.
He could have agreed to talk at any time and been released shortly after.