Thirty years ago you couldn't scratch past the surface of more than a few dozen topics, limited by what your local library stocked, and that's if you were lucky enough to have a decent library to begin with.
Between libgen, scihub, Google Books, Project Gutenberg, torrent sites, and the rest of the internet, we've got access to almost all of the world's knowledge for free, plus plenty of channels for interacting with experts about knowledge that hasn't been put to paper yet.
Proportionally, it might not be much better today. Maybe 1 in 100,000 books is exactly the piece of information you need. The rest of the books aren't total crap either; they at least cleared the bar of getting published and making some sense. I wouldn't be surprised if the signal-to-noise ratio on the internet today were 1 in a billion web pages, given all the automated SEO crap and misinformation out there. The sites and resources you listed are popular in tech circles, but are not mainstream at all. Then again, if they were mainstream, they probably wouldn't exist as we know them, if at all.
Making piracy easy has been super-helpful. Libgen doesn't have (anywhere near) everything but it's great for surveying a field and picking up some portion of the must-have resources.
> access to almost all of the world's knowledge for free
I'd guess we've got access to ~10% of it (but it's a good 10%!) for free, ~30% paid (but usually piratable!), and the remainder unavailable as a paid electronic resource at all. That remainder is maybe (often not) available as a paid paper resource, and maybe (often not) available pirated. Perhaps you can at least locate it or learn of its existence via the Web, but possibly it's totally unknown to the Web outside of a reference in some book that happens to be digitized. This does actually happen, though you usually have to get a little obscure before it does, and sometimes not as obscure as one might think.
[EDIT] To be clear, I think the Web is an excellent research tool, but an awful lot of its value in that role comes from piracy (saving me, say, from having to inter-library-loan or buy a book just to find the books it name-drops in its preface, or to read one relevant chapter, or to check its index to find out whether I need it in the first place). The claim that it contains anything like "all the world's knowledge", even for liberal values of "all", is far from true. There's a risk of assuming that if something exists, it's probably available as a digital resource delivered over the Web (far from true); that if it isn't, surely you can at least find out about its existence on the Web (also not true); and that if neither holds, it must be of no value to anyone or wildly obscure (not true).
[EDIT EDIT] Even when something is on the web, it can be really hard to find if it isn't on one of a few major sites, more so than it used to be. It can take so long that pirating and reading a book on the topic from Libgen can be faster than finding the same information on the open web, even when it's there, which is pretty damning of the state of web indexing. As I wrote in another comment in this thread, it was once possible to be fairly sure when you'd reached the edge of Google's knowledge and the thing you were looking for wasn't in its index. Such certainty is now almost impossible, mostly because of how Google search (and seemingly every other web search tool) has changed, not because of growth in total content.