Hacker News

Assuming we're talking about VMs (2021 etc.), for an SME is there any downside to giving 2TB of space to your disks and letting dynamic allocation do the work?

Perhaps consolidate/defrag once a year. Even monitoring total usage more often than that is probably not worth the effort - just buy ample cheap storage.

Also, there was a tradition of splitting drives into OS, DB, and DB logs. That was mostly a spinning-rust performance thing and these days is probably just voluntary management overhead.

RAM is another story.



If you are using less space than the underlying datastore, there's no benefit to dynamic allocation; you may as well give the servers larger fixed disks. If you are thinking that one server might need more than the fixed size for sudden growth, then you need to be monitoring to deal with that, because that growth will eat your free space. If you are overprovisioning the datastore, you have the same problem one level lower, and need to be monitoring and alerting on that instead (or as well).
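To make the "monitor one level lower" point concrete, here is a hypothetical sketch of such a check. The function name, inputs, and the 20% free-space threshold are all made up for illustration; a real check would pull these numbers from the hypervisor's API rather than take them as arguments.

```python
# Hypothetical check for a thin-provisioned datastore: is the sum of the
# disks' maximum sizes bigger than the datastore (overcommitted), and is
# the actual free space already below an alert threshold?

def overcommit_report(datastore_capacity_gb, max_disk_sizes_gb, used_gb,
                      alert_free_fraction=0.2):
    """Return (overcommitted, low_free) for a thin-provisioned datastore."""
    provisioned = sum(max_disk_sizes_gb)  # every disk's max size, added up
    overcommitted = provisioned > datastore_capacity_gb
    free = datastore_capacity_gb - used_gb
    low_free = free < alert_free_fraction * datastore_capacity_gb
    return overcommitted, low_free

# A 1000 GB datastore holding three 500 GB thin disks, 850 GB in use:
print(overcommit_report(1000, [500, 500, 500], 850))  # (True, True)
```

The point of the `overcommitted` flag is exactly the comment's argument: if it's False, thin disks buy you nothing over fixed ones; if it's True, the `low_free` alert is what stands between you and a full datastore.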

> "just buy ample cheap storage"; "That was mostly a rust performance thing and these days is probably just"

In the UK a 6TB enterprise rust disk is £150 and a 2TB enterprise SSD is £300, so it's 6x the price per TB to SSD everything, and the SSDs take 3x the drive bays, so add more for that. And you can never "just" buy more storage than you will ever need. The obvious objection: when you bought it, you thought you were buying enough, because if you'd thought you needed more you would have bought more. So "just buy ample storage" amounts to saying "just know the future better". And it can't happen anyway, because Parkinson's Law ("work expands so as to fill the time available for its completion") applies to storage: the more there is available, the more things appear to fill it up.
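The per-TB arithmetic behind the "6x" and "3x" claims, using the prices as quoted in the comment:

```python
# Price per terabyte at the quoted UK prices:
rust_per_tb = 150 / 6   # GBP 25/TB for a 6TB rust disk at GBP 150
ssd_per_tb = 300 / 2    # GBP 150/TB for a 2TB SSD at GBP 300
print(ssd_per_tb / rust_per_tb)  # 6.0 - SSD is 6x the price per TB

# Capacity per drive bay: 6TB rust vs 2TB SSD, so all-SSD needs
# 3x the bays for the same total capacity.
print(6 / 2)  # 3.0
```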

Room for a test restore of the backups in that space. Room for a clone of the database to do some testing. Room for a trial of a new product. Room for a copy of all the installers and packages for convenience. Room for a massive central logging server. What do you mean it's full?


One VM using far more disk space than it's supposed to can potentially cause data corruption in all the other VMs on that system. If you're just spinning VMs up and down for testing, you probably won't run into that issue, but on a production system it could cause massive downtime.


Virtual machine disk space (e.g. Xen, Linode, AWS EC2, or similar) does not work this way. Each VM gets a dedicated amount of disk space allocated to it, they don't all share a pool of free space.


Yes they do, with the "dynamic allocation" the parent comment mentions: if a VMware datastore has 1TB total and you put VMs in it with dynamically expanding disks, they are sharing the same 1TB of free space, and they will fill it if they all want their max space at the same time and you've overprovisioned their max sizes.

And if you haven't overprovisioned their max space, you may as well not be using dynamic allocation and use fixed size disks.

Even then, snapshots will grow forever and fill the space, and then you'd better hope you have a "spacer.img" file you can delete from the datastore, because you can't remove snapshots when the disk is full and you're stuck. It's the same problem, at a lower level.


I see, a VMware feature, thanks for clarifying. I suppose it's a nice idea in theory, but you'd have to be crazy to use that in production, or for any workload that you care about. It would just be a ticking time bomb.


Hyper-V can do that too, and so can you under Linux: it's called thin-provisioned disks, sparse files, or the dm-thin device-mapper target. Professional SANs also let you overallocate the total size of the iSCSI volumes they offer.
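A minimal demo of the sparse-file version of this, which is the same trick thin-provisioned disk images rely on. This assumes a filesystem that supports sparse files (any mainstream Linux filesystem does) and uses the Unix convention that `st_blocks` counts 512-byte units:

```python
# Create a file with a 10 GiB apparent size that consumes almost no
# actual disk space - storage is only allocated as data is written.
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.truncate(10 * 1024**3)  # "allocate" a 10 GiB disk image

st = os.stat(path)
print(st.st_size)          # apparent size: 10 GiB
print(st.st_blocks * 512)  # actual space consumed: close to zero
os.remove(path)
```

Sum the apparent sizes of every image on a volume and you can promise far more than the volume holds, which is exactly the time bomb being discussed: it only detonates once the guests actually write the data.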

Yes, I've seen that time bomb go off on multiple occasions. Never on my watch though.



