Hello! I've got experience working on censorship circumvention for a major VPN provider (in the early 2020s).
- First things first, you have to get your hands on actual VPN software and configs. Many providers who are aware of VPN censorship and cater to these locales distribute their VPNs through hard-to-block channels and in obfuscated packages. Hosting downloads on S3 is a popular option (blocking a major cloud provider wholesale causes too much collateral damage), but by no means the only one, and some VPN providers partner with local orgs who can figure out the safest and most efficient ways to distribute a VPN package in countries at risk of censorship or undergoing it.
- Once you've got the software, you should try to use it with an obfuscation layer.
Obfs4proxy is a popular tool here; it relies on a pre-shared key to make traffic look like a uniformly random byte stream, i.e., like nothing in particular. IIRC it also hides the VPN handshake. This isn't a perfectly secure model, but it's good enough to defeat most DPI setups.
Another option is Shapeshifter, from the Operator Foundation (https://github.com/OperatorFoundation). Or, in general, anything that uses pluggable transports. While it's a niche technology, it's quite useful in your case.
In both cases, the VPN provider must support these protocols on their end (see the sketch below).
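For flavor, here's what wiring up obfs4 looks like in its best-known home, a Tor torrc (a sketch with placeholder address, fingerprint, and cert; a VPN provider that supports obfs4 will ship its own equivalent config):

    # torrc sketch -- all values below are placeholders
    UseBridges 1
    ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
    Bridge obfs4 192.0.2.10:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=EXAMPLECERT iat-mode=0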
- The toughest step long term is not getting caught using a VPN. By its nature, long-term statistical analysis will often reveal a VPN connection regardless of obfuscation and masking (and for a state actor, that approach can be cheaper to run than DPI). I don't know the situation on the ground in Indonesia, so I won't speculate about the best way to avoid this long-term.
I will endorse Mullvad as a trustworthy and technically competent VPN provider in this niche (n.b., I do not work for them, nor have I worked for them; they were a competitor to my employer and we always respected their approach to the space).
Whisper is genuinely amazing - with the right nudging. It's the one AI thing that has genuinely turned my life upside-down in an unambiguously good way.
People should check out Subtitle Edit (and throw the dev some money) which is a great interface for experimenting with Whisper transcription. It's basically Aegisub 2.0, if you're old, like me.
HOWTO:
Drop a video or audio file onto the right-hand window, then go to Video > Audio to text (Whisper). I get the best results with Faster-Whisper-XXL. Use large-v2 if you can (v3 has some regressions), and you've got an easy transcription and translation workflow. The results aren't perfect, but Subtitle Edit is built for cleaning up imperfect transcripts, with features like Tools > Fix common errors.
EDIT: Oh, and if you're on the current gen of Nvidia cards, you might have to add "--compute_type float32" to make the transcription run correctly. I think the error mentions an empty file or empty output, something like that.
EDIT2: And if you get another error, possibly about whisper.exe, IIRC I had to reinstall the Torch libs from a specific index, something along these lines (depending on whether you use pip or uv):
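(The cu121 index below is my example; pick the index matching your CUDA version.)

    # pip
    pip install --force-reinstall torch --index-url https://download.pytorch.org/whl/cu121

    # uv
    uv pip install --reinstall torch --index-url https://download.pytorch.org/whl/cu121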
If you get the errors and the above fixes work, please type your error message in a reply with what worked to help those who come after. Or at least the web crawlers for those searching for help.
The smallest GIF is still useful because it is the smallest possible valid favicon. This means you can stuff it into a data: URI to prevent useless requests showing up when you are working on something:
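For example (the base64 payload here is the classic 1x1 transparent GIF; any valid GIF works):

    <link rel="icon" href="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7">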
One of my Core Memories when it comes to science, science education, and education in general was in my high school physics class, where we had to do an experiment to determine the gravitational acceleration of Earth. This was done via the following mechanism: Roll a ball off of a standard classroom table. Use a 1990s wristwatch's stopwatch mechanism to start the clock when the ball rolls off the table. Stop the stopwatch when the ball hits the floor.
Anyone who has ever had a wristwatch of similar tech knows how hard it is to get anything like precision out of those things. It's a millimeter-sized button with a millimeter of travel, and it could easily take half a second of jabbing to get it to trigger. It's for measuring your mile times in minutes, not fall times in fractions of a second.
Naturally, our data was total, utter crap. Any sensible analysis, treating the errors linearly, would have produced error bars wide enough to include zero and negative values. I dutifully crunched the numbers, determined that the gravitational acceleration was something like 6.8 m/s^2, and turned it in.
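To put rough numbers on it (my assumptions here, not our actual lab values: a ~0.75 m table and a ±0.25 s button-press error):

    t = \sqrt{2h/g} \approx \sqrt{2(0.75)/9.8} \approx 0.39\,\mathrm{s}, \qquad
    \frac{\Delta g}{g} = \frac{2\,\Delta t}{t} \approx \frac{2(0.25)}{0.39} \approx 1.3

That's roughly g = 9.8 ± 12.6 m/s^2: linear error bars that really do straddle zero.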
Naturally, I got a failing grade, because that's not particularly close, and no matter how many times you are solemnly assured otherwise, you are never graded on whether you did your best and honestly report what you observe. From grade school on, you are graded on whether or not the grading authority likes the results you got. You might hope that there comes some point in your career where that stops being the case, but as near as I can tell, it literally never does. Right on up to professorships, this is how science really works.
The lesson is taught early and often. It often sort of baffles me when other people are baffled at how often this happens in science, because it more-or-less always happens. Science proceeds despite this, not because of it.
(But jerf, my teacher... Yes, you had a wonderful teacher who didn't only give you an A for the equivalent but called you out in class for your honesty and I dunno, flunked everyone who claimed they got the supposed "correct" answer to three significant digits because that was impossible. There are a few shining lights in the field and I would never dream of denying that. Now tell me how that idealism worked for you going forward the next several years.)
We built “safe npm”, a CLI tool that transparently wraps the npm command and protects developers from malware, typosquats, install scripts, protestware, telemetry, and more.
You can set a custom security policy to block or warn on file system, network, shell, or environment variable access.
A curated sequence of logical commits assembled into an idealized history is often much easier to review than the real history. However, to rewrite history well enough for that you need the following (a sketch of a typical cycle comes after the list):
* a solid understanding of interactive rebasing, including `fixup` and `reword`.
* `git add -p` for adding partial sections of files
* `git commit --amend` for patching the last commit
* `git commit --fixup [COMMIT_ID]` for attaching patches to commits further back in history.
* `git stash` for pausing progress while you fix up an old commit.
* `git rebase -i --autosquash [COMMIT_ID]~` to apply the fixups
* topic branches that never get too big or drift too far from the mainline, because rebasing often becomes infeasible when repo snapshots are too far apart.
* A low enough error rate that you don't screw everything up when rewriting history (which is a reasonable critique and argument for why you shouldn't attempt this in the first place).
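Putting a few of those together, a typical cycle looks something like this (the hash is a placeholder):

    git add -p                          # stage only the hunks that belong in the fix
    git commit --fixup abc1234          # create a fixup! commit aimed at abc1234
    git stash                           # park any unrelated work in progress
    git rebase -i --autosquash abc1234~ # fold the fixup into its target commit
    git stash pop                       # pick the parked work back up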
I can usually manage this, and efficiently enough that it's worthwhile — and my colleagues appreciate that my PRs are easy to follow. But as a reviewer, I don't insist on other people putting in the same investment.
The Philippines (under Duterte) said to Chinese criminals: Feel free to base your operations here and rip off your fellow Chinese, but don't rip off us Filipinos.
Then these POGO centres started scamming locals and Chinese alike, and staffed themselves with scam victims, using blackmail and torture to force them to work there and commit fraud.
The “X” line was not trying to be a MacBook; only the X1 was, and it was a class of its own that confused the product lines tremendously.
X- was always the ultraportable business laptop: essentially a smaller version of the T-, with lots of connectivity, a conservative design, etc.
T- was the standard-sized laptop built to business standards: usually the best built and longest lasting, with a conservative design and port selection.
W- was the desktop-replacement class, a super-amped variant of the T- that traded portability for power.
Everything else was confused, experimental, or sub-tier, and the X1 muddied the branding of the rest of the X series.
I wrote about my experience of working as a black software developer in the industry, and I was lucky to have it published on the BBC [1].
What immediately followed: every large company reached out to have me work as a consultant for their diversity program. I found it fascinating that they already had teams of DEI experts in place. Like, what makes one an expert?
In addition to my job, I spent nights developing programs to try to help these companies. Some folks right here on HN shared their successful experiences, and I presented them to several companies. I was met with resistance every step of the way.
Over the course of a year and hundreds of candidates presented, I managed to place just one developer in a company.
However, most of these companies were happy to change their social media profiles to a solid black image or a Black Lives Matter banner. They sent memos, they organized lunches, they even sold merch and donated. But hiring? That was too much to ask. A lot of graduates told me they never even got to do a technical interview.
Those DEI programs like to put on a show. Something visible that gives the impression that important work is being done. Like Microsoft reading out who owned the land the campus was built on [2] at the beginning of every program. It eerily reminds me of "the loyalty oath crusade" in Catch-22.
I wonder how many people realize you can use the whole 127.0.0.0/8 address space, not just 127.0.0.1. I usually use a random address in that space for all of a specific project's services that need to be exposed, like 127.1.2.3:3000 for web and 127.1.2.3:5432 for postgres.
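On Linux the whole 127.0.0.0/8 block routes to loopback out of the box (on macOS you have to alias each extra address first), so something like this works as-is:

    # macOS only: sudo ifconfig lo0 alias 127.1.2.3
    python3 -m http.server 3000 --bind 127.1.2.3
    curl http://127.1.2.3:3000/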
For those who haven't jumped ship to Kagi, there's a uBlacklist feed which strips out most big sites dedicated to AI images, with an optional extra "nuclear" feed which also knocks out sites that aren't strictly dedicated to AI images but do have a very large proportion of them.
Hi andrei-akopian. I am the main author of SiteOne Crawler and thank you for this post!
For about 20 years I have been leading development and infrastructure at SiteOne, a Czech webdev company, so I wouldn't say I'm not a professional ;) I have around 50,000 hours of practice. But I definitely know a lot of programmers better than me, and many of them are at SiteOne :)
However, after the extremely premature birth of my son I had to change my role at SiteOne. All the more, I wanted to help my colleagues with a useful tool that we generally lacked, and at the same time give myself some joy in difficult times.
I want to implement a number of other useful improvements in the crawler (some of them are described in the documentation and roadmap) and work on promoting it in parallel. The more people use it, the more it will help optimize websites, the more it will help developers and testers with their various needs, and the more joy it will give me :)
What an unimaginable horror! You can't change a single line of code in the product without breaking 1000s of existing tests. Generations of programmers have worked on that code under difficult deadlines and filled the code with all kinds of crap.
Very complex pieces of logic, memory management, context switching, etc. are all held together with thousands of flags. The whole code is riddled with mysterious macros that one cannot decipher without picking up a notebook and expanding relevant parts of the macros by hand. It can take a day or two to really understand what a macro does.
Sometimes one needs to understand the values and the effects of 20 different flags to predict how the code would behave in different situations. Sometimes hundreds! I am not exaggerating.
The only reason why this product is still surviving and still works is due to literally millions of tests!
Here is what the life of an Oracle Database developer looks like:
- Start working on a new bug.
- Spend two weeks trying to understand the 20 different flags that interact in mysterious ways to cause this bug.
- Add one more flag to handle the new special scenario. Add a few more lines of code that check this flag, work around the problematic situation, and avoid the bug.
- Submit the changes to a test farm consisting of about 100 to 200 servers that would compile the code, build a new Oracle DB, and run the millions of tests in a distributed fashion.
- Go home. Come back the next day and work on something else. The tests can take 20 to 30 hours to complete.
- Go home. Come back the next day and check the test results from the farm. On a good day there would be about 100 failing tests; on a bad day, about 1000. Pick some of these tests at random and try to understand what went wrong with your assumptions. Maybe there are some 10 more flags to consider to truly understand the nature of the bug.
- Add a few more flags in an attempt to fix the issue. Submit the changes again for testing. Wait another 20 to 30 hours.
- Rinse and repeat for another two weeks until you get the mysterious incantation of flags right.
- Finally one fine day you would succeed with 0 tests failing.
- Add a hundred more tests for your new change to ensure that the next developer who has the misfortune of touching this new piece of code never ends up breaking your fix.
- Submit the work for one final round of testing. Then submit it for review. The review itself may take another 2 weeks to 2 months. So now move on to the next bug to work on.
- After 2 weeks to 2 months, when everything is complete, the code would be finally merged into the main branch.
The above is an unexaggerated description of the life of a programmer at Oracle fixing a bug. Now imagine what a horror it is going to be to develop a new feature. It takes six months to a year (sometimes two years!) to develop a single small feature (say, adding a new authentication mode, like support for AD authentication).
The fact that this product even works is nothing short of a miracle!
I don't work for Oracle anymore. Will never work for Oracle again!
This article mentions changing demographics, aging population etc.
While the numbers are out there, I've always found that the story below (retold to the best of my recollection) illustrates what life was like for someone born in the early 1950s:
- "I went to brand new schools built during the start of the baby boom. All of the teachers were excellent b/c this was before women started entering the workforce"
- "I got into an Ivy League school b/c acceptance rates were around 50% due to the Eisenhower funding getting cut so the schools needed more students to cover the lost funds"
- "I got a job from looking in the paper and started a few weeks after graduating from college"
- "Six months after I started, my boss retired at 55. So there I was at age 22 with a management role and a secretary"
I think about this story often whenever someone mentions demographics and/or the economy and why young people are spending more time living at home after college.
E-ink, the company, holds the patents on the core pigment tech that makes "paper-like" displays possible, and strongarms display manufacturers and the users of their displays into absolute silence. Any research project or startup that comes up with a better alternative technology gets bought out or buried by their lawyers ASAP.
E-ink doesn't make the displays themselves; they make the e-ink film, filled with their patented pigment particles, and sell it to display manufacturers who package the film with glass and a TFT layer and add a driver interface chip. All of this is proprietary AF, and unless you're the size of Amazon, forget about getting any detailed datasheets on how to correctly drive their displays to get sharp images.
In my previous company we had to reverse engineer their waveforms in order to build usable products even though we were buying quite a lot of displays.
With so much control over the IP and the entire supply chain, and thanks to the broken nature of the patent system, they're an absolute monopoly: no incentive to lower prices or bring innovations to market, and a textbook example of what happens to technology when there is zero competition.
So, when you see the high prices of e-paper gadgets, don't blame the manufacturers, as they're not price gouging, blame E-ink, as their displays make up the bulk of the BOM.
Though, some of their tech is pretty dope. One day E-ink sent over a 32" 1440p prototype panel with 32 shades of B&W to show off. My God, was the picture gorgeous and sharp. I would have loved to have it as a PC monitor, so I tried building an HDMI interface controller for it with an FPGA, but failed due to a lack of time and documentation. A shame, though not a big loss: the estimated cost was in the five-figure ballpark, and the current draw was astronomical, sometimes tripping the power supply's protection on certain images.
Take a look at Coroot [0], which stores logs in ClickHouse with a configurable TTL. Its agent can discover container logs and extract repeated patterns from them [1].
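For context, the retention knob is ClickHouse's native table-level TTL; here's a generic sketch of the feature (my example, not Coroot's actual schema):

    CREATE TABLE logs (
        ts      DateTime,
        message String
    ) ENGINE = MergeTree
    ORDER BY ts
    TTL ts + INTERVAL 30 DAY;  -- rows are dropped ~30 days after ts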
A lot of our core packaging development is now happening in uv [1]. Rye uses uv under the hood, so as we improve uv, Rye gets better too.
E.g., we recently added support for "universal" resolution in uv, so you can generate a locked requirements.txt file with a single resolution that works on all platforms and operating systems (as opposed to _just_ the system you're running on). And Rye supports it too in the latest release.
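If you want to try universal resolution directly, the invocation is along these lines (check uv pip compile --help for the current flags):

    # one resolution that works on all platforms, not just the current one
    uv pip compile requirements.in --universal -o requirements.txt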
One of the most interesting discussions I've had around this was at a small company that mass-hired a bunch of people from a big company. We went round and round in circles for a while because of issues similar to what you're describing. I ran a small 4-6 person hardware team (softly blurry at the edges), and the new people wanted significant amounts of design and documentation review as well as financial oversight. We needed $40 worth of solder and connectors and had to wait a week for PO approval, as well as demonstrate that we had gotten multiple quotes... even though there was a store right down the block that sold exactly what we needed.
Anyway, after a month or two of this I started catching significant flak because my team was nowhere near as productive as it used to be. Complaining about slow POs was met with "maybe you should plan better". Complaining about design reviews for one-off boards that used to go straight from idea to problem-solved was met with "you're engineers, you need to document your work". It was painful, and I came very close to resigning.
What ultimately worked, though, was figuring out a catch phrase that spoke the language of the new people: "accountability without authority". This ruffled some feathers but once I repeated "you are trying to hold me accountable for delivering a $500,000 project on time but are not giving me authority to buy $40 worth of stuff to execute on that" enough times it finally got through and the system started to change. But man did it suck for a while.
I'm constantly amazed that people still don't understand why Red Hat makes money, and this article won't enlighten you: it doesn't understand it either, and it's full of other incidental mistakes (its description of CentOS, for example, is way off the mark).
Red Hat takes Linux and certifies it against all kinds of government, safety, privacy, etc. standards, such as PCI and FIPS and numerous local ones. As a result, if you are a government or one of many large companies, you simply cannot download Linux and YOLO it. You are legally required to use the certified software, so you take the path of least resistance and buy RHEL. That's it. It's why Red Hat makes piles of money and Canonical or your SaaS does not. Understand what your big customers are required to have and provide it to them, even though it's boring and expensive to do.