Evil is not able to create anything new, it can only distort and destroy what has been invented or made by the forces of good.
Did they fix the issue from 2005 where unpacking an 800 GB archive extracts first to %temp% on C: (the OS partition, which only has 20 GB) rather than to the folder set in the config file? Or do I still have to use RAR? ROTFL, open source, LOL
Are you trying to drag and drop the files to the destination in a file explorer window? Because when I use "extract" and point it at the destination folder, I don't think it does what you describe.
I use 7zip a decent amount. My only nit with it is its tendency to create badly fragmented files on compress; for large files you can be looking at 1000+ fragments. I've been meaning to go through the code, find where it writes out the compressed data, and add a bit of buffering before the write, which should let most filesystems allocate better. It may also be opening/closing the output file too often, but I haven't dug into it.
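If you want to observe the symptom yourself, on Linux the `filefrag` tool (from e2fsprogs) reports how many extents a file occupies; the filename below is just a placeholder:

```shell
# Report the number of extents (fragments) a freshly compressed archive uses.
filefrag big-archive.7z

# Verbose mode lists each extent's logical/physical offset and length:
filefrag -v big-archive.7z
```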
Thanks for pointing this out. Applying this buys us time until we can properly patch all our systems. In our case it was easy to roll out in a jiffy.
I do wonder though, can anyone guess what kind of impact one might see with TCP SACK disabled? We don't have large amounts of traffic and serve mostly websites. Maybe mobile phone traffic might be a bit worse off if the connection is bad?
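For reference, the stopgap being discussed here is the kernel's SACK knob (as opposed to the MSS firewall rule); a sketch of applying it on a typical Linux box, root required:

```shell
# Disable TCP selective acknowledgements at runtime.
sysctl -w net.ipv4.tcp_sack=0

# Persist the setting across reboots (path is the conventional drop-in dir):
echo 'net.ipv4.tcp_sack = 0' > /etc/sysctl.d/99-disable-sack.conf
```

Note this disables SACK for all connections, which is exactly why the retransmit-cost question above matters.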
Disclaimer: I worked on the initial Red Hat article linked above.
On my personal AWS instance, less than half a percent of the traffic over the last few days has hit the firewall rule that logs the error.
Most of that traffic seemed to come from China; it was possibly port probing/portscans, or really old hardware accessing the server.
I would say the iptables rule is a 'better' solution than dropping SACK, as you may find you use significantly more CPU/bandwidth dealing with retransmits when not using selective acknowledgements.
I was involved with this one for another cloud provider.
I have a personal Digital Ocean (not my employer) instance that is frequently being probed for stuff (primarily Russian and Chinese IPs). Same old, same old.
I've been running the rule for around a week, just logging & dropping small-MSS packets out of curiosity, but I've hardly seen anything worth writing home about. I was somewhat surprised. I'm curious how long it takes for that rule to go nuts (my shellshock rule still triggers from time to time, and that one had a definite curve of action).
Small MSS is often IoT devices which only have a kilobyte or so of RAM, so often have an MSS of below 256 bytes. They won't be rendering a webpage, but are totally capable of doing REST API requests.
More and more are moving away from $0.25 microcontrollers and up to $5 SoCs running Linux, so the problem is going away gradually...
You are probably seeing scanners. Most of them probably use the same source port. There are some really poorly coded scanners that set minimal TCP options so they can scan super fast; it seems they don't care about the RFCs when writing those tools. I bet if you set the logging options in iptables to log IP options, you'll see very similar options used across most of them. My theory is that they are compensating for the transcontinental latency.
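The option-logging idea above can be sketched like this, assuming you already match small-MSS SYNs with the `tcpmss` module (the prefix string is illustrative; root required):

```shell
# Log inbound small-MSS SYNs including their IP and TCP options, so
# scanner fingerprints can be compared across hits in the kernel log.
iptables -A INPUT -p tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1:500 \
    -j LOG --log-prefix "small-mss: " --log-ip-options --log-tcp-options
```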
So far, on the 3 VMs where I've checked (all public facing; one is a fairly high-traffic MX, another is a webhost), netstat -s tells me that SACK is barely used.
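For anyone wanting to repeat this check, the kernel's SACK-related counters can be pulled out like so (nstat, from iproute2, reads the same counters if net-tools isn't installed):

```shell
# Count SACK-related events in the kernel's TCP statistics.
netstat -s | grep -i sack

# Equivalent data via iproute2:
nstat -az | grep -i -i sack
```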
I'm guessing an MX sees mostly server to server traffic, so I kind of expect that; however for services used by consumers around the world it might be a very bad idea to disable SACK.
The bigger impact will be for users far away, with increased risk of packet loss and higher latency.
It's easy enough to drop packets with a very low MSS and, unless you have specific needs (someone mentioned IoT), there's no reason not to drop packets with MSS < 536 or so. I believe Windows' smallest MTU (MSS plus IP and TCP headers) is 576 bytes, for example.
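The 536 figure follows from that arithmetic: a 576-byte minimum MTU minus a 20-byte IP header and a 20-byte TCP header leaves 536 bytes of payload. A drop rule at that threshold might look like the following (a sketch, assuming you have no legitimate small-MSS clients; root required):

```shell
# 576 (minimum MTU) - 20 (IP header) - 20 (TCP header) = 536 (MSS)
# Drop inbound SYNs advertising an MSS below that.
iptables -A INPUT -p tcp --tcp-flags SYN,RST SYN \
    -m tcpmss --mss 1:535 -j DROP
```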