Hacker News

That's the sad truth. But it's about convenience, like anything else.

In my business I can put a hardware site online with

6x: 2 × Intel Xeon Gold 5115 (10-core) + 64 GB RAM, 1 NVMe @ 512 GB + software RAID1 @ 4 TB magnetic, 1× 10G + 2× 1G Ethernet ≈ $42K

2x: storage/NAS with 60 TB @ RAID5 + 2 × quad-core low-end Xeon + 32 GB RAM = $16K

2x: 1G edge/core managed switches + 10G SAN/LAN managed switches = $5K

2x: Endian firewalls + threat appliances = $5K

1x: colo with 2-year lease, 25 A @ 208 V, 1G port speed, and committed throughput > 100 Mbps = $16K yearly

$68K one-time cost for depreciating assets we maintain, provision, and secure, plus $16K yearly recurring cost.

Or I can go AWS, modify my processing model, security expectations, and service infrastructure, and spend $25K a year plus a $15K one-time migration cost.



So... you save $9K a year in recurring costs, and it will be more than five years before you break even due to your $68K up-front equipment cost.
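A quick sanity check on that break-even, using only the figures from the parent comment (a sketch; it assumes costs stay flat and ignores hardware refresh cycles, depreciation write-offs, and AWS price changes):

```python
# Break-even sketch for the colo-vs-AWS numbers above. All figures
# are taken from the parent comment and assumed constant over time.

COLO_UPFRONT = 68_000   # servers, storage, switches, firewalls
COLO_YEARLY = 16_000    # colo lease, power, bandwidth
AWS_UPFRONT = 15_000    # one-time migration cost
AWS_YEARLY = 25_000     # estimated annual AWS spend

def cumulative_cost(upfront, yearly, years):
    return upfront + yearly * years

# First whole year in which the colo option is cheaper overall.
breakeven = next(
    y for y in range(1, 50)
    if cumulative_cost(COLO_UPFRONT, COLO_YEARLY, y)
    < cumulative_cost(AWS_UPFRONT, AWS_YEARLY, y)
)
print(breakeven)
```

With a $53K up-front gap closing at $9K a year, the crossover lands in year six, consistent with "more than 5 years."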

And that's assuming you don't need to quickly scale up or down, and that you're fine being limited to one colo instead of being able to expand to multiple regions as with AWS.

And that's not even taking into account the cost of the brain power to make sure your hardware stays up and running.

Rolling your own stuff in a colo doesn't sound like a very good idea in this case. But that's job security if you're the sysadmin, I guess.


> And that's not even taking into account the cost of the brain power to make sure your hardware stays up and running.

Although, as I said upthread, I agree that AWS is very likely ideal for this particular deployment size, let me try to dispel this oft-repeated myth.

Modern server hardware takes almost no "brain power" (or effort of any kind) to keep up and running.

We aren't living in the days of the early dot-com boom where Linux-on-Intel in the datacenter could mean flimsy cases, barely rack-mountable, with nary a redundant part to be seen.

Applying some up-front "brain power", one can even choose and configure hardware to provide things like server-level redundancy, if that's important and/or preferable to intra-server redundancy (think Hadoop), or the ability to abandon failed mechanical disks in place instead of ever having to replace one.


This is the main "sweet spot" for AWS (or "cloud" infrastructure in general): small scale.

I am generally a strong proponent of using one's own hardware in a colo or on-premises, instead of or in addition to the cloud (primarily for "base" workload).

However, if the entirety of your needs fits into a single rack, even I will advocate for AWS, since "convenience" is, perhaps, not a strong enough word.

I do think your server and storage prices are around $25K too high, but that's easy to do when buying brand name and/or not negotiating with multiple vendors on price (which is particularly tough at low volume unless you're a startup with a credible growth story). That's assuming such an expensive CPU (paired with so little RAM) isn't foolishly profligate, along with the other hardware choices. Of course, this underscores the point (on which we agree) that, as a rule, it's just not worth that much time and effort for so little.

I'll take your word on the AWS pricing, as it's fairly predictable, if very tedious to predict. The main "gotchas" I've found people run into are forgetting to add EBS costs for EC2 instance types without (or without comparable) local storage, and underestimating data transfer costs.
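Those two gotchas are easy to illustrate with back-of-the-envelope numbers. Every unit price and quantity below is an illustrative assumption, not a quote; check the current EC2, EBS, and data-transfer pricing pages for real figures:

```python
# Rough monthly AWS estimate showing the two commonly forgotten line
# items: EBS volumes (for instance types without local storage) and
# data transfer out. All unit prices and quantities here are
# illustrative placeholders, not current AWS list prices.

instances_monthly = 6 * 250.0           # 6 instances @ ~$250/mo (assumed)
ebs_gb, ebs_price_per_gb = 4_000, 0.10  # gp-style volumes (assumed rate)
egress_gb, egress_per_gb = 2_000, 0.09  # transfer out (assumed rate)

ebs_monthly = ebs_gb * ebs_price_per_gb
egress_monthly = egress_gb * egress_per_gb
total_monthly = instances_monthly + ebs_monthly + egress_monthly

print(round(ebs_monthly + egress_monthly))  # the "forgotten" portion
print(round(total_monthly * 12))            # annualized total
```

Even with made-up rates, the pattern holds: storage and egress can add a quarter or more on top of the instance bill, which is exactly where underestimates come from.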


You'll have to trust me that this example's hardware spec and requirements are for a basic/base site. You can thin the profile and increase the # of chassis, compromise on redundancy, etc., but experience has shown that this arrangement is the most cost-effective. Kinetic event impact modeling system with RT data delivery -- that should answer your conjectures.

No large vendors used in this example - Thinkmate or Aberdeen Supermicro re-brands, for due diligence and warranty.


> You can thin the profile and increase the # of chassis, compromise on redundancy, etc

No, I wouldn't suggest more chassis, as that's almost always more expensive (it's tough to break even on that $1K minimum buy-in per server).

I believe your workload needs the resources you say. It just happens to be a remarkably rare ratio, hence my remark.

> No large vendors used in this example - thinkmate or aberdeen supermicro re-brands for due diligence and warranty.

The vendor doesn't have to be large to jack up the price. Any re-brand is super suspicious. To me, a large part of the point of a commodity server product is that its reliability is predictable (and therefore easy enough to engineer for/around). Paying extra for "diligence", warranty, or hardware support is just flushing money down the toilet.

A fee for custom assembly and/or a basic smoke test is fine, but it had better be a flat rate per server and on the order of $100. Technician labor isn't that expensive.

Larger or "enterprise" vendors are merely the extreme version of this, with upwards of a 10x premium on something like storage arrays, especially if one includes


You seem to be an absolute type of planner. I used to approach IT management and provisioning that way some years ago, before being confronted with the realities of small and large business. One size obviously does not fit all, and sometimes you take shortcuts... usually you pay for them later.

I agree with your cautions around Supermicro resale, but the warranty support and build diligence are absolutely necessary for a small business. Having a good business relationship with a trusted provider of hardware that always performs the first time is priceless.


I don't know what an "absolute type of planner" is, but I consider myself an engineer and a pragmatist. I'm well versed with realities. In reality, with business, there's no such thing as "priceless", only risk, and risk is, generally, quantifiable. With enough data, it's easily quantifiable.

I admit that, having an affinity for startups rather than more traditional small businesses, I have a greater affinity for risk. Ironically, perhaps, I'm usually the voice of risk-aversion with respect to IT infrastructure, so I don't believe it affects my overall understanding.

I recently pointed out to an interviewer, who was trying to convince me it was worth spending half a megabuck on a petabyte from NetApp because it was "business critical" instead of a tenth that amount on DIY, that, just like the DIY solution, NetApp does not indemnify the business against loss. One isn't buying insurance, only a bunch of technology.

Sure, "works the first time" is worth something. Is it worth the cost of a whole, complete, extra server on an order of qty 6? If the infant-mortality rate on servers is anywhere approaching 1-in-6, and they're being shipped somewhere that the replacement time and/or cost would be prohibitive, I'd still probably rather just order 7 servers instead.

That's my main problem with paying a vendor for "reliability": it's a very fuzzy, hand-wavy assurance. Paying for reliability with more hardware has data and statistics behind it, which is an engineering solution.
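The "order a spare instead" approach can be made concrete with a binomial model (a sketch; the 1-in-6 rate is taken from the comment above purely as a deliberately pessimistic worst case, and it assumes server failures are independent):

```python
# Probability that at least k of n servers survive burn-in, given a
# per-server infant-mortality (DOA) rate p_fail, modeled as a
# binomial distribution with independent failures.

from math import comb

def p_at_least(k, n, p_fail):
    """P(at least k of n servers work)."""
    p_ok = 1 - p_fail
    return sum(
        comb(n, i) * p_ok**i * p_fail**(n - i)
        for i in range(k, n + 1)
    )

p = 1 / 6  # worst-case DOA rate from the comment above

print(f"{p_at_least(6, 6, p):.3f}")  # buy exactly 6, need all 6
print(f"{p_at_least(6, 7, p):.3f}")  # buy 7: one cold spare
```

At that pessimistic rate, a single spare roughly doubles the odds of having six working servers on day one, and the same arithmetic tells you how many spares a real (much lower) DOA rate justifies.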


Risk is not quantifiable based on your insight into most businesses. You provision and despair.


I can't understand either of these two sentences, not even who the "you" is supposed to be.

I'd hope for a more substantive reply, if anything.


That is some serious setup. I can't see how you get close to this while spending only $25K a year on AWS - maybe the price I was quoted for my needs was some sort of sucker's price.


This is a ballpark average and only for one VPC-based site. You'd need multiple sites... just as you do with hardware.



