Compilation used to be slower, and compiling one 5K-line file would be noticeably faster than compiling ten 500-line files, to say nothing of having to write extra header files to connect them together. That would encourage larger files.
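Roughly the kind of extra plumbing I mean (file names made up just for illustration): splitting code out of one big file means writing and maintaining a header that both sides agree on.

    /* math_utils.h -- the extra "connective" file you now have to maintain */
    #ifndef MATH_UTILS_H
    #define MATH_UTILS_H
    int add(int a, int b);
    #endif

    /* math_utils.c -- code that used to just live in the one big file */
    #include "math_utils.h"
    int add(int a, int b) { return a + b; }

    /* main.c -- only sees the declarations; the linker stitches it together */
    #include <stdio.h>
    #include "math_utils.h"
    int main(void) {
        printf("%d\n", add(2, 3));
        return 0;
    }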
I'm not sure that's true. Computers used to be smaller and had a hard time with very large files -- swapping out of limited RAM and so forth. Not fast.
I think long files are solely caused by somebody incapable of software design at any level. They just keep typing and never think about structure or separation of duties or whatever.
I recall the Windows CE DHCP service was one large file: an enormous, busted-ass, straight-line pile of garbage code that didn't handle most errors. Written by some intern. I rewrote it for our platform and fixed all the issues.
Microsoft of course didn't want my code, because arrogance.
As a n00b, I enjoyed libs with everything in one file, because I didn't know how to drop a lib into my codebase and build it otherwise. Like, how was I supposed to merge their makefile into mine? I dunno. And my own code was in one file because I was too lazy to mess with .h files.
I could buy a similar argument for directories: you will almost never see a C project with sources in subdirectories of the top-level source directory -- this is because of recursive Makefiles, which have earned a fair amount of somewhat justifiable hate.
But I don't think compilation times explain the size of source files. Compilation speed hasn't been a problem for so long that I can't even remember when it could have possibly been one.
I've seen the reverse problem, though not with C... rather with Python source files. The old parser was quite bad and would start using too much memory once a source file ran into the thousands of LOC. I witnessed this firsthand with SWIG-generated Python bindings. I don't remember this kind of problem with C compilers or other utilities, though.