
They specialize in domain management for businesses that consider their domain to be _very_ important. Think Google, Amazon, Microsoft, Wikipedia... (all of those are listed as clients on the wiki page)

As in "pay a lot of money", and we'll dedicate someone to your domain who makes sure that "giving a domain to a stranger without any documents" will _never_ happen.


A number of the largest companies that used to be "clients" of MarkMonitor have now basically become their own domain registrars and have a direct relationship with ICANN. Amazon, for instance. It's curious that Google was one and has offloaded it to Squarespace.

I'm pretty sure Google never used them for their own domains; the whole MarkMonitor/Squarespace thing was their "Google Domains" product, where they sold registrar services to others. Besides that, they are also a registry for .app/.dev and others, but don't sell those via their own registrar anymore.

This is the best approach IMHO if you're a large, extremely valuable company registering a lot of domains.

What are you doing for DB backups? Do you have a replica/standby? Or is it just hourly or something like that?

Because with a single-server setup like this, I'd imagine that hardware (e.g. SSD) failure brings down your app, and in the case of SSD failure, you then have hours or days downtime while you set everything up again.


Hetzner normally advertises their hardware servers as 2x 1 TB SSD because it's strongly recommended to run them in software RAID 1 for a net 1 TB. (Their image installer will default to that.)

Once the first SSD fails after some years and your monitoring catches it, you can either migrate to a new box, find an intermediate solution/replica, or let them hot-swap the drive while the other one carries the load.

Of course, going to physical servers loses the redundancy of the cloud, but that's something you need to price in when looking at the savings and deciding your risk model.

And yes, running this without at least daily snapshotting/backup to remote storage is insane - that applies to the cloud as well, albeit it's easier to set up there.


For over a decade I ran a small scale dedicated and virtual hosting business (hundreds of machines) and the sort of setup you describe works very well. Software RAID across 2 devices, redundant power supplies, backups. We never had a significant data loss event that I recall (significant = beyond user accidentally removing files).

For quite a while we ran single power supplies because they were pretty high quality, but then Supermicro went through a ~6 month period where basically every power supply in machines we got during that time failed within a year, and replacements were hard to come by (because of high demand, because of failures), and we switched to redundant. This was all cost savings trade-offs. When running single power supplies, we had in-rack Auto Transfer Switches, so that the single power supplies could survive A or B side power failure.

But, and this is important, we were monitoring the systems for drive failures and replacing them within 24 hours. Ditto for power supplies. If you don't monitor your hardware for failure, redundancy doesn't mean anything.


> But, and this is important, we were monitoring the systems for drive failures and replacing them within 24 hours. Ditto for power supplies. If you don't monitor your hardware for failure, redundancy doesn't mean anything.

It does still mean something.

If you have a 5% annual chance of failure and no redundancy, your five year failure chance is 23%.

If you have redundancy and literally never check for five years, your five year failure chance is 5%. That's already a huge improvement. If you do an inventory of broken parts twice a year, still with no proper monitoring, it goes down to 0.6%.

For 2%, the numbers are 10%, 1%, and 0.1%.

For 10%, the numbers are 41%, 17%, and 2.6%.

(The approximations for small percents are x*5, x²*25, and x²*2.5)
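Those figures can be reproduced with a quick sketch, assuming independent drive failures and that a mirrored-but-only-periodically-checked pair loses data only when both drives die within the same check interval:

```python
def p_single(annual, years=5):
    # Chance that one unmirrored drive fails at some point in the window.
    return 1 - (1 - annual) ** years

def p_mirror_unchecked(annual, years=5):
    # Mirrored pair, never inspected: data is lost only if both drives
    # fail at some point during the whole window.
    return p_single(annual, years) ** 2

def p_mirror_checked(annual, years=5, checks_per_year=2):
    # Mirrored pair with a broken-parts inventory every half year:
    # data is lost only if both drives die within one check interval.
    p_interval = 1 - (1 - annual) ** (1 / checks_per_year)
    intervals = years * checks_per_year
    return 1 - (1 - p_interval ** 2) ** intervals

for annual in (0.02, 0.05, 0.10):
    print(f"{annual:.0%}: {p_single(annual):.1%} "
          f"{p_mirror_unchecked(annual):.1%} {p_mirror_checked(annual):.2%}")
```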


Do not rely on RAID alone.

Have at least 2 servers, then invest in proper monitoring.

Servers can fail without disk failures.


If that's the tradeoff they're willing to make, who are you to say that they're doing it wrong?

Not every app needs 24/7 availability. The vast majority of websites out there will not suffer any serious consequences from a few hours of downtime (scheduled or otherwise) every now and then. If the cost savings outweigh the risk, it can be a perfectly reasonable business decision.

A more interesting question would be what kind of backup and recovery strategy they have, and which aspects of it (if any) they had to change when they moved to Hetzner.


It's possible no one will care much if it's down even for that long. I couldn't care less if my HOA mobile app was down even for a week for example. We don't need constant uptime for everything.

Don’t forget that integrity matters as much as availability in many applications. You might not mind if your HOA takes time to bring a server back up but you’d care a lot more if they lost the financial records or weren’t able to recover from a ransomware attack.

Hetzner provides backups for VPS and machines across all tiers, which are very easy to set up.

I agree with the overall sentiment, but having an HOA app go down around the time when dues need to be paid could be a serious issue.

> Because with a single-server setup like this, I'd imagine that hardware ...

Yeah. This blog post reads like it was written by someone who didn't think things through and just focused on hyper-aggressive cost-cutting.

I bet their DigitalOcean vm did live migrations and supported snapshots.

You can get that at Hetzner but only in their cloud product.

You absolutely will not get that in Hetzner bare-metal. If your HD or other component dies, it dies. Hetzner will replace the HD, but it's up to you to restore from scratch. Hetzner are very clear about this in multiple places.


For the price, they could buy an exact replica bare metal server and still save money.

> they could

They could, but they didn't, and instead they wrote that blog post which, even being generous, is still kinda hard to avoid describing as misleading.

I would not have written the post I did if they had presented a multi-node bare-metal cluster or whatever more realistic config.


> They could, but they didn't, and instead they wrote that blog post which, even being generous, is still kinda hard to avoid describing as misleading.

What do you feel was misleading?


That they get the exact same level of service for $1,199 less per month.

They don't.

And reading the article, they don't seem to understand that.


> What do you feel was misleading?

Erm. I already spelled it out in my original post?

I'm not going to re-write it; the TL;DR is they are making an apples-and-oranges comparison.

Yes, they "saved money", but in no way, shape, or form are the two comparable.

The polite way to put it is... they saved as much money as they did because they made very heavy-handed "architectural decisions". "Decisions" that they appear to be unaware of having made.


They could, but then that exchanges cost savings for complexity. You now need to keep them in sync, and it's double the cost.

I agree with the other poster: this is fine for toy sites, but low-quality manual DR isn't good for production.


Surely you must've noticed that pretty much all of their bare metal offerings ("dedicated" and the stuff on "auction") have multiple disks, allowing for various RAID configurations?

> Surely you must've noticed that pretty much all of their bare metal offerings ("dedicated" and the stuff on "auction") have multiple disks, allowing for various RAID configurations?

I don't know where to start with this comment. Do I really need to spell out the difference between cloud and bare metal?

A few examples...

    - Live migration? Cloud only.
    - Snapshots? Cloud only.
    - Want to increase disk space? Tick a box in cloud vs. replace disks (or move to a different machine) and re-install/restore on bare metal...
    - Want to increase RAM? Tick a box in cloud vs. shut down, pull out of the rack, install new chips (or move to a different machine and re-install/restore)...
    - Want to upgrade to a beefier processor? Tick a box in cloud vs. move to a completely different machine and re-install/restore

You can get snapshots and live migrations working on-prem. The cloud isn't magic, it's just servers with hypervisors and software running on top of them. You can run that same software.

Also, with something like Hetzner you would not be going in and physically doing anything. You also just tick a box for a RAM upgrade, and then migrate over or do active/passive switch.

The cloud does have advantages, mostly in how "easy" it is to do some specific workflows, but per-compute it's at least 10x the cost. Some will argue it's less than that, but they forget to factor in just how slow virtual disks and CPU are. Cloud only makes sense for very small businesses, in which the operational cost of colocation or on-prem hosting is too expensive.


Cloud vs. bare metal is:

are you a capable engineer, or do you believe in magic?

The savings of a cheap engineer disappear on the cloud bill. Get a badass well paid engineer who can do both and doesn't talk his way out of this financial madness.


> get a badass well paid engineer who can do both

Well, fine, but it's abundantly clear that this blog post was not written by a "badass well paid engineer".

The person who wrote that blog post was clearly unaware of the trade-offs of the decisions he was making.


Well, you did say your data is lost when a disk fails, which is not true. The parent pointed that out for you.

Yeah you pay for and get additional stuff with cloud. Nobody disputed that.


> Well you did say your data is lost when a disk fails, which is not true.

Well, technically it's still a possibility.

I am old enough to have seen issues with RAID1 setups not being able to restore redundancy, as well as RAID controller failures and software RAID failures.

Also, frankly, you are being somewhat pedantic. My broader point was regarding cloud. I gave HD failure as one example, randomly selected by my brain... I could have equally randomly chosen any of the other items, but this time my brain chose HD.


You can just run 3 dedicated servers and design your app so that it never fails.

Can you elaborate? I've been coming up with similar designs recently (static site plus redundant servers), but my designs so far assume no database and ephemeral interactions. (Realtime multiplayer arcade games.)

Curious what the delta to pain-in-the-ass would be if I want to deal with storing data. (And not just backups/migrations, but also GDPR, age verification, etc.)


A database isn't hard to run HA; it's actually very easy to do any of this.

I already design with Auto Scaling Groups in mind; we run on spot instances, which tend to be much cheaper. Spot instances can be reclaimed anytime, so you need to keep that in mind.

I also have data blobs which are memory-mapped files, which are swapped with no downtime by pulling a manifest from a GCS bucket each hour and swapping out the mmapped data.
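A minimal sketch of that swap pattern (the hourly manifest pull from GCS is omitted, and the class name and file handling are my own illustration, not the poster's code):

```python
import mmap

class HotSwappableBlob:
    """Read-only memory-mapped data file that can be swapped for a
    newer version without taking the service down."""

    def __init__(self, path):
        self._map(path)

    def _map(self, path):
        with open(path, "rb") as f:
            # mmap duplicates the descriptor, so the mapping stays
            # valid after the file object is closed.
            self.mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        self.path = path

    def swap(self, new_path):
        old = self.mm
        self._map(new_path)  # map the new file first...
        old.close()          # ...then release the old mapping

    def read(self):
        return self.mm[:]
```

One caveat this sketch ignores: `old.close()` raises `BufferError` if any live memoryviews still reference the old mapping, so a real server would defer the close until in-flight readers are done.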

I use replicas, with automatic voting-based failover.

I've used Mongo with replication and automatic failover for a decade in production with no downtime and no data loss.

Recently, I got into Postgres; so far so good. Before that I always used RDS or another managed solution like Datastore, but they cost so much compared to running your own stuff.

Healthchecks start a new server in no time; even if my Hetzner server goes out, or the whole of Hetzner goes out, my system will launch DigitalOcean nodes which will start soaking up all requests.


The easiest I've done is MongoDB: replication, sharding, failover, and all of that is super easy.

Recently, I did it in PostgreSQL using pg_auto_failover. I have 1 monitor node, 1 primary, and 1 replica.

Surprisingly, once you get the hang of PostgreSQL configuration and its gotchas, it’s also very easy to replicate.

I’m guessing MySQL is even easier than PostgreSQL for this.

I also achieved zero downtime migration.


Replication is not a backup. It helps for migrations or clean single node failures but not human error, corruption, or an attack.

Changing the project framerate is apparently quite a hard problem; even DaVinci Resolve, when you change it, warns you that you cannot change it again for that project.

Probably internally everything in a project is referenced to specific frame numbers, which would break if you changed the project framerate.


And I would rather have the _choice_ whether to prove my age to Apple or not. I think if it were optional, with the additional option of "share my age with websites & apps", nobody would have an issue with it.


The issue is that if you don’t prove your age, access is blocked.

So it’s not optional. At least in Australia.


It's entirely optional within Australia - I don't use Apple, nor do my kids or their kids.


If a website/app requests you to prove your age, you can’t optionally avoid it and continue to use the website/app.


Hasn't happened in any meaningful way yet, so I'll deal with that if and when it happens.


It is optional, you can skip past it. Presumably you will lose access to some websites and apps though.


This is where the lack of installing software (sideloading) becomes problematic.


Looks like it's using leaflet + map tiles from https://carto.com/

I think Mapbox also provides a similar looking basemap style.


There are even eSIMs specifically marketed as "backup" eSIMs, with coverage on _all_ UK networks.

At least on my Android, you could set the second eSIM as a "backup" that it would switch to for data if the main one lost connection (it took a few seconds, so it wasn't an "always connected" experience, probably because the phone wants to save power).

Lots of options if you search for "esim UK all networks".


I used to be a first responder with a Firstnet setup (not just the plan discount, but the actual black SIM card) that could roam AT&T to Verizon to TMO as needed, so was as close to universal connectivity as feasible. Though (probably relatedly) it was always 1-2 generations behind (many areas were still ATT LTE, maybe 5GE, when they were rolling out 5G).

And the clusterfuck when I tried to transition my account back to normal: an $8 balance that wasn't reconciled triggered the suspension of my whole AT&T family account, but when I tried to pay, no one in FirstNet support or AT&T could tell me how much to pay, or where, or my account number (and this was in the store), until a poor store CSR and a poor phone CSR spent THREE HOURS getting it resolved. "I am literally trying to give you the money to take care of this." "We don't know where to have you pay that money to fix this."

I was an early adopter, but FML.


And that one is actually a direct, no-change trip on a single bus. 5 days of non-stop bus.

YouTuber Noel Philips also covered it, if you want to see it in that form.


You can do this all in fly.io, no cloudflare container needed.

The whole selling point of fly is lightweight and fast VMs that can be "off" when not needed and start on-request. For this, I would:

Set up a "performance" instance, with auto-start on and auto-restart-on-exit _off_, which runs a simple web service that accepts an incoming request, does the processing and upload, and then exits. All you need is the fly config, Dockerfile, and service code (e.g. Python). A simple API app like that, which only exists to ffmpeg-process something, can start very fast (milliseconds). Something that needs to load e.g. a bigger model such as Whisper will be a bit slower, but still works. fly takes care of automatically starting stopped instances on an incoming request for you.

(In my use case: app where people upload audio, to have it transcribed with whisper. I would send a ping from the frontend to the "whisper" service even before the file finished uploading, saying "hey wake up, there's audio coming soon", and it was started by the time the audio was actually available. Worked great.)
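That early ping can be as simple as a fire-and-forget request kicked off before the slow upload starts. A sketch of the idea; the URL and helper names here are hypothetical glue, not fly.io's API (any route on the app works, since Fly's proxy auto-starts a stopped machine when a request arrives for it):

```python
import threading
import urllib.request

# Hypothetical wake-up endpoint on the Fly app.
WAKE_URL = "https://transcriber.example.fly.dev/healthz"

def prewarm():
    try:
        urllib.request.urlopen(WAKE_URL, timeout=10)
    except OSError:
        pass  # best effort: waking early is an optimization, not a requirement

def upload_audio(data, submit):
    # Fire the wake-up ping in the background, then start the slow
    # upload; the machine is likely booted by the time the upload lands.
    threading.Thread(target=prewarm, daemon=True).start()
    return submit(data)
```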


It may be even easier to not even leave a VM sitting in an "off" state. Using either the fly command or their API, you can kick off a one-off machine that runs an arbitrary script on boot and dies when that script ends.

yanked from my script:

    cmd = [
      "fly", "machine", "run", latest_image,
      "--app", APP_NAME,
      "--region", options[:region],
      "--vm-size", "performance-1x",
      "--memory", options[:memory] || "2048m",
      "--entrypoint", "/rails/bin/docker-entrypoint bundle exec rake #{rake_task}",
      "--rm"
    ]

    # Kernel#system needs the array splatted into separate arguments
    system(*cmd)
Or a 1:1 transliteration to their API. You can of course run many of these at once.


That's a good trick (the "get ready" ping). It reminds me of how early Instagram was considered fast because they did the photo upload in the background while you were typing your caption so that by the time you hit "upload" it was already there and appeared instantly.


For at least some codebases, I'm not sure this is a useful metric, because you don't usually put the whole codebase in your context at the same time.

For example in my current case, there are lots of files with CSS, SVG icons in separate files, old database migration scripts, etc. Those don't go in the LLM context 99% of the time.

Maybe a more useful metric would be "what percentage of files that have been edited in the last {n} days fit in the context"?
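That suggested metric is easy to sketch. The file sizes could come from something like `git log --since="30 days ago" --name-only`; the 4-bytes-per-token ratio is just an assumed rule of thumb, not a real tokenizer:

```python
def context_fit_percentage(recent_file_sizes, context_tokens, bytes_per_token=4):
    """Rough share of the recently edited working set (file sizes in
    bytes) that fits into a model context of `context_tokens` tokens."""
    budget_bytes = context_tokens * bytes_per_token
    total_bytes = sum(recent_file_sizes)
    if total_bytes == 0:
        return 100.0  # nothing edited recently; everything trivially fits
    return min(100.0, 100.0 * budget_bytes / total_bytes)
```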


Another option for anonymous mobile service: https://silent.link/

eSIM, global, variable per-country pricing with per-GB billing, anonymous crypto payments, and no KYC. It does seem to lack some of the additional security features of the OP, though.

