1. As you said, it was compute-rich before and native-token-rich now. The way I see it, the difference is that your richness is now part of the protocol, which allows it to, e.g., punish bad actors (aka slashing). That isn't possible in a PoW scenario, where attacks can continue indefinitely.
2. Validators don't solely decide the rules of execution: full nodes do. If you, as a validator, try to break the rules (double spend, incorrect execution, etc.), full nodes won't accept this new world view and will discard it, as sketched below. So providers like Infura, which most users (currently) rely on, will keep working as expected and the canonical chain won't be damaged.
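To make point 2 a bit more concrete, here is a toy sketch (TypeScript, with made-up types; nothing like real client code) of the idea: a full node re-executes every block and throws away any block that breaks the rules, no matter how much stake the proposer holds.

    // Toy model of a full node's rule check: every tx must be covered by the
    // sender's balance, or the whole block is rejected.
    interface Tx { from: string; to: string; amount: number }
    interface Block { proposer: string; txs: Tx[] }
    type Balances = Map<string, number>;

    function applyBlock(state: Balances, block: Block): Balances | null {
      const next = new Map(state);
      for (const tx of block.txs) {
        const balance = next.get(tx.from) ?? 0;
        if (tx.amount <= 0 || balance < tx.amount) {
          // Rule violation (e.g. an attempted double spend): the block is
          // discarded regardless of who proposed it or how much they staked.
          return null;
        }
        next.set(tx.from, balance - tx.amount);
        next.set(tx.to, (next.get(tx.to) ?? 0) + tx.amount);
      }
      return next; // accepted: becomes part of this node's canonical view
    }

    // A malicious validator proposes a block spending more than the sender has:
    const state: Balances = new Map([["alice", 10]]);
    const badBlock: Block = {
      proposer: "validator-1",
      txs: [{ from: "alice", to: "bob", amount: 100 }],
    };
    console.log(applyBlock(state, badBlock)); // null -> full nodes drop the block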
Good work, always happy to see new tools related to SQLite and Wasm!
As far as I can see, you're currently loading the whole DB into memory, so large databases might become a problem. Have you thought of adopting https://wicg.github.io/file-system-access, so users could query their databases directly from the filesystem?
Thanks! Not really — the playground is meant for small databases. Just as one wouldn't develop a whole app in JSFiddle, they probably won't import a large database into the playground.
Several months ago I made a proof of concept of exactly what you're talking about; feel free to check it out: https://shekhirin.com/sqlite-fs/.
I recommend downloading the sample DB, writing some dummy query like "SELECT BILLINGCOUNTRY, COUNT(INVOICEID) FROM INVOICE GROUP BY 1 ORDER BY 2 DESC" and then pressing Execute.
I've been planning to write an extensive article about it and open-source the solution after cleaning up the code a little bit, but I still haven't had much time to do so.
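To give a rough idea of the approach (a simplified sketch, not the actual sqlite-fs code; the wiring into a Wasm SQLite VFS is left out): the browser asks for a handle to the local file via the File System Access API and then reads only the byte ranges SQLite needs, instead of copying the whole database into memory.

    // Simplified sketch: on-demand range reads from a local SQLite file using
    // the File System Access API. An xRead-style VFS callback would call
    // readPage() instead of having the whole file preloaded in memory.
    async function openDatabaseFile(): Promise<File> {
      // Let the user pick the .sqlite/.db file; no upload, no full copy.
      const [handle] = await (window as any).showOpenFilePicker();
      return handle.getFile();
    }

    // Read one page-sized byte range of the database file on demand.
    async function readPage(file: File, pageSize: number, pageIndex: number): Promise<Uint8Array> {
      const offset = pageIndex * pageSize;
      const blob = file.slice(offset, offset + pageSize);
      return new Uint8Array(await blob.arrayBuffer());
    }

    // Example: peek at the 16-byte header ("SQLite format 3\0") without
    // reading anything else from disk.
    async function main() {
      const file = await openDatabaseFile();
      const firstPage = await readPage(file, 4096, 0);
      console.log(new TextDecoder().decode(firstPage.slice(0, 16)));
    }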
now let's see what it takes to make absurd-fs, where we use https://github.com/guardianproject/libsqlfs to make a filesystem on top of sqlite on top of the File System Access API.
gotta keep ourselves fully looped! ⥀
(is there perchance a repo available with your work? that'd be lovely to see.)
The phones of 50,000 individuals, including human rights activists and journalists, have been targeted by surveillance tools that were used by numerous governments. These tools can hack any iOS and Android phone, and there is no way to protect your device from it. It doesn't matter which apps you use, because the system is breached on a deeper level.
According to the Snowden revelations from 2013, both Apple and Google are part of the global surveillance program that implies that these companies have to, among other things, implement backdoors into their mobile operating systems. These backdoors, usually disguised as security bugs, allow US agencies to access information on any smartphone in the world.
The problem with such backdoors is that they are never exclusive to just one party. Anybody can exploit them. So if a US security agency can hack an iOS or Android phone, any other organization that uncovers the backdoors can do the same. Unsurprisingly, this is exactly what has been taking place: an Israeli company called NSO Group has been selling access to the spying tools that allowed third parties to hack tens of thousands of phones.
Since at least 2018, I have been aware that one of my phone numbers was included in a list of potential targets of such surveillance tools (although a source from the NSO Group denies it). Personally, I wasn't worried: since 2011, when I was still living in Russia, I’ve got used to assuming that all my phones were compromised. Anyone who gains access to my private data will be utterly disappointed – they will have to go through thousands of concept designs for Telegram features and millions of messages related to our product development process. They won't find any important information there.
However, these surveillance tools are also used against people far more prominent than me. For example, they were employed to spy on 14 heads of state. The existence of backdoors in crucial infrastructure and software creates a huge challenge for humanity. That's why I have been calling upon the governments of the world to start acting against the Apple-Google duopoly in the smartphone market and to force them to open their closed ecosystems and allow for more competition.
So far, even though the current market monopolization increases costs and impedes privacy and freedom of speech of billions, government officials have been very slow to act. I hope the news that they themselves have been targeted by these surveillance tools will prompt politicians to change their minds.
>> Personally, I wasn't worried: since 2011, when I was still living in Russia, I’ve got used to assuming that all my phones were compromised.
I know it's fun to slam on Telegram (and for sure its encryption has flaws; I really don't think anyone denies this), but everyone needs to understand the mindset of Durov and, I'm guessing, the mindset of the Russian-born Telegram developers: your phone can be compromised, and easily at that.
I think this is something very important for everyone to remember when the discussion of encryption and messaging comes up.
The level of encryption in transit doesn't matter if your adversary has full access to your phone and can just screenshot and pull local messages of whatever they want.
NSO's ridiculousness has hopefully made it very clear that it doesn't matter which phone/OS you're using; full access to your phone is a salable item for basically anyone with an interest in having it, and this is only the software we know about.
Journalists, activists, or even just someone looking for a fun weekend are at risk with modern phones and messaging; tapping the communication in between doesn't matter if they can just screenshot/copy your phone on the fly.
In my previous job I worked for a company that developed enterprise-focused encrypted chat apps. When interviewing potential hires, one of the first general questions we asked was to give a high-level list of possible attack vectors against an installed app and its user data. Very few developers even considered the OS and the device themselves as a potential threat, despite these interviews taking place well after the Snowden revelations.
There's a difference between saying "bah, it's all insecure, let's just give in to the surveillance" and recognising that there is nothing on the market that actually makes strides toward foolproof security. Remember, a smartphone isn't automatically secure just because it runs the Linux kernel and is open source.
This is a typical problem with proprietary apps: they can dictate how you must run them. It's not a fault of the Pinephone. By the way, the Librem 5 is significantly faster.
Haha, you're right about that. It's the only method of communication with many people and businesses around here. Which puts it roughly on a par with the old-school phone network, except that has an oligopoly of spyware companies.
>GrapheneOS is heavily focused on security enhancements making exploitation significantly harder:
>grapheneos.org/features
>Those other operating systems [Calyx and Lineage] don't improve resistance against exploitation and won't provide more resistance against an exploit working against AOSP/stock.
>If they specifically target GrapheneOS and put work into adjusting their exploit chains and finding new bugs as necessary, then they could certainly develop an exploit working against GrapheneOS. Costs will be higher and they'll usually need to specifically take it into account.
>Firmware exposed to remote attack surface like the radios (Wi-Fi, Bluetooth, cellular, NFC) and GPU is generally a lot harder to exploit than the OS and those components are isolated. It's much rarer and generally involves using an OS exploit to bypass the component isolation.
>Nearly all of these exploits are memory corruption bugs. GrapheneOS does actually provide hardening for firmware through attack surface reduction including the LTE only mode and other features. It can't directly harden firmware, but it can avoid exposing as much attack surface.
>So, for example, with the GrapheneOS 4G only mode enabled, vulnerabilities in 2G, 3G and 5G are not usable to exploit the cellular radio, only those exposed by 4G.
>The radio firmware also does have substantial hardening and internal sandboxing, but GrapheneOS can't improve it.
>GrapheneOS also fortifies the OS against exploitation by an attacker that has gained code execution on a component like the GPU or radio.
>Main hardening we provide is for the most common path of exploiting an RCE bug in userspace and then exploiting the kernel to escape sandbox.
GrapheneOS runs only on Pixel phones, which have great hardware security.
Likely not; they might be, by chance, but the exploits are often for bugs in places like media parsing libraries (e.g. the JPEG decoder), which are not usually modified in those alternatives.
Different compile settings might render an exploit ineffective, but I'd expect any remotely popular Android derivative (e.g. Lineage) to be tested by the attacker, and even postmarketOS, which is not Android based, is likely to use some of the same media parsing libraries.
Hardware kill switches are unfortunately pretty much useless. For the camera it's okay, but a piece of tape is just as good; for the microphone, even the gyro sensors can record voice at some quality. And here is the big thing: there is hardly any threat model where blocking the camera would help when the software stack is a burning pile of C buffer overflows from top to bottom. If you can't trust the software to that degree, you might as well just not turn on your device. Seriously, what's up with the Linux userspace where the goddamn GNOME initial setup is a C program?! We were okay with Lisp code decades ago in more serious things, and nowadays we actually have memory-safe languages with very close to native performance.
But the biggest problem is the lack of sandboxing, and UNIX permissions are way too crude to be of any use. At worst the attacker can't install a video driver, but they can easily add anything to your bashrc, read the contents of your browser's cache, etc.
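As a tiny Node.js illustration of that granularity problem (the paths are just examples), any process running as your user can already touch the files that matter:

    // Any process running under your UNIX user can write your shell startup
    // file and enumerate your dotfiles/config dirs; per-user permissions
    // don't distinguish "my shell" from "a compromised app I just ran".
    import { accessSync, constants, readdirSync } from "node:fs";
    import { homedir } from "node:os";
    import { join } from "node:path";

    const home = homedir();

    // Throws if ~/.bashrc doesn't exist or isn't writable; on most desktop
    // setups it simply succeeds, i.e. a compromised app could append to it.
    accessSync(join(home, ".bashrc"), constants.W_OK);
    console.log("~/.bashrc is writable by this process");

    // Browser caches, cookies and other per-user data live under the same
    // user and are just as reachable.
    console.log(readdirSync(home).filter((name) => name.startsWith(".")));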
You are right, but Apple does try to rewrite most things in memory-safe languages and has been doing so for quite some time now. So it is not exactly GNU/Linux.
I'm no security researcher, so do correct me if I'm wrong, but I assume you use firejail, which is a suid program; a bug there could turn an escape into full root. And why would you write a sandbox in a memory-safe language…
Yes, you're right to be wary of suid, but primarily against local attacks on my laptop. The suid risk for a remote attacker seems rather less than from remote malware without the sandbox. Opinions may differ.
Of course you are correct, it is better than no sandbox. I'm just saying that compared to even the now-affected Android and iOS, GNU/Linux is seriously lacking in terms of security.
He pivoted from the NSO Group targeting to an Apple-Google discussion, without any proof that Apple had anything to do with Pegasus.
He wants the biggest American companies the world has ever had to open-source their software and lose all their edge over the rest of the world, but he himself runs closed-source proprietary server software which he wants people to use for secure communication.
Apple is known to have handed off the whole of China's iCloud to the CCP.
They also refuse to zero-knowledge (e2e) encrypt US iCloud backups[1].
In the San Bernardino shooter's case, they refused the FBI's request to develop new tools to hack an already locked iPhone.
However, I doubt they would refuse to sign and push an OTA update of a Signal.app or an "improved" iOS developed and provided by the NSA.
The mercenary who helped Carlos Ghosn recalled that, in the middle of the operation, while riding a train, his iPhone suddenly rebooted and started an iOS update[2]:
--
On the train, Taylor’s phone began an unexpected automatic software update. “The first thing I thought was, I wonder if the NSA knows,” he recalls. “I wouldn’t put anything past them.”
> without any proof that Apple had anything to do with Pegasus.
Um, bundling a messaging app written in a memory-unsafe language that parses feature-rich messages sent from anyone in the world, and abusing DRM laws intended for anti-piracy protection to *ensure that no one can uninstall it from their phone*, doesn't count as proof that Apple had something to do with Pegasus?
Yes, Durov's assertion that the bugs NSO exploited were intentionally left there by Apple at the behest of US intelligence agencies is presented without proof, and while conceivable is very unlikely [1].
But his assertion that monopoly practices by Apple had something to do with the Pegasus hacks is perfectly accurate given that Messages is insecure, forcibly bundled, and was in fact how many journalists and human rights defenders were hacked.
Durov's point that "it doesn't matter what apps you have installed on your phone" is especially depressing and a direct result of Apple's use of DRM to prevent users from uninstalling Messages. It would be nice if people could uninstall Messages from their iPhones right now. Thanks to Apple, they can't.
[1] Not because Apple wouldn't do it if pressured (we know, for instance, that they caved to such pressure on iCloud encryption) but merely because there are likely so many vulnerabilities to find that the chances NSA, Apple, and NSO were all aware of the same vulnerabilities are very low.
I'm a first-year college dropout working at an international fintech company as a Data Scientist. Now I want to shift to Backend, which is why I'm writing here. Before Data Science I was mostly doing backend work in Python, building my own side projects.