Sure, a heads-up is certainly nice, but I don't think that running a (reasonable) set of benchmarks is all that out of the ordinary, or any different from just taxing the service at 100% with some periodic batch job or the like. Paying for it is even stranger IMO.
And for what it's worth, I did actually work for a few small SaaS businesses, but a few reasonable benchmarks wouldn't have been a problem.
Of course, if your benchmarks are going to take 50 hours it's a different story.
Also: I suspect a lot of these database SaaS services are a lot smaller than you might think. I know at least one of them is, anyway, because I worked there (and there's no DeWitt Clause).
These specific customers were not reasonable, though. They specifically wanted to know when and how the software breaks, because they had been burned by previous vendors. Eventually sales reacted with a rather frustrated "then pay us 10 engineering days so we can set up a dedicated system for this and you can run your tests", and instead of cancelling the deal, they were like "OK. Here you go, let's go."
And naturally, they were able to break it eventually, but they could have had every employee worldwide trigger requests 10 times per second and the system would've held, and it recovered as soon as the load was gone. It was a silly deal, but that customer is now a great business partner.
An occasional spike probably won't even be noticed. Redlining it non-stop when you usually hover much lower -should- trigger alarms and get engineers going "WTF is going on?!"
Give the vendor a heads up so the engineers can sleep.