1. Only one byte to specify the filetype in the protocol, with only a few filetypes defined in the protocol and the rest unofficial and implementation-dependent. Even most of the defined filetypes are ambiguous.
2. No support for Unicode in the protocol, or even for any character set but ISO-8859-1. Non-europeans can eat shit and die.
3. No support for encryption nor compression in the protocol.
4. Any implementation supporting an above-mentioned feature is operating out of specification.
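To make point 1 concrete: the single type byte is the first character of every line in a Gopher menu, followed by the tab-separated display string, selector, host and port (per RFC 1436). A minimal parsing sketch, with a function name made up for illustration:

```python
# Sketch: parse one RFC 1436 menu line. The leading byte is the item
# type ("0" = text file, "1" = submenu, "9" = binary, ...); everything
# after it is tab-separated: display string, selector, host, port.
def parse_menu_line(line: str) -> dict:
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.rstrip("\r\n").split("\t")
    return {"type": item_type, "display": display,
            "selector": selector, "host": host, "port": int(port)}

item = parse_menu_line("0About this server\t/about.txt\texample.org\t70\r\n")
# item["type"] is "0", i.e. "plain text file"
```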
> No support for encryption nor compression in the protocol.
I consider it a feature.
Compression adds complexity. Not only do you have to implement compression itself, but also a way to negotiate which type of compression to use. It also makes the protocol less readable through packet dumps and makes using simple tools like netcat harder. Simplicity is a big reason for using Gopher.
Encryption adds a lot of complexity, same argument, but ten times worse.
In addition to complexity, I don't really like the idea of HTTPS-style encryption for a niche like this, for the following reasons:
- HTTPS gives only limited privacy protection, an eavesdropper can still see your IP address and the server you are trying to reach. It is not TOR.
- The signature aspect only ensures you are connecting to the right server, not that the content hasn't been tampered with. It is not PGP.
- Encryption is useful for sending secrets, like passwords or credit card numbers. Why would you want to use passwords? Isn't the big idea to make information publicly available? Why hide things behind passwords? And why would you want to send a credit card number; do you want e-commerce on your platform? Not having encryption is a way to ensure all information on your small network is freely accessible and stays that way.
And if you disagree with regard to encryption, maybe take a look at Gemini.
> HTTPS gives only limited privacy protection, an eavesdropper can still see your IP address and the server you are trying to reach. It is not TOR.
This is what I've dubbed the "perfect is the enemy of good" fallacy, and it's frustratingly common around here. Technology Connections calls it "but sometimes".
A surprising number of otherwise educated and rational people are entirely prepared to argue that because a solution is partial that means it is useless and we'd be better off without anything at all. It feels weird that I even have to say this, but: sometimes a partial solution covers most of the solution space that actually matters in the real world and it's okay that it's incomplete. I'd rather an attacker see my IP address and the server I'm trying to reach than that they be able to see all of my communications with a server.
You must consider the use cases, though. With Gopher, if an attacker knows who connected to which server, they can normally deduce the transmitted content too, since (IME) gopherholes tend to be structured as static blogs. Gopher is not the Web, you aren't going to buy stuff or log into your bank account, you will mainly be downloading & reading plain text.
And there is an easily overlooked benefit of no encryption for a niche protocol: it's much easier to implement in resource constrained environments, e.g. very old computers. The Web doesn't compete in this area, so it's perfect for a smaller alternative like Gopher.
> The Web doesn't compete in this area, so it's perfect for a smaller alternative like Gopher.
This is just the goofiest assertion. Old computers have been on the Web since both the Web and those computers were new. You can transfer the same plain-text content over HTTP as you can over Gopher. It takes very little in the way of resources to handle HTTP/1.1 on the client side.
Have you tried browsing the web recently on a vintage computer running a 90’s browser? It works on only a minority of sites. The vast majority are broken due to either https or JavaScript.
So yes, in principle you can use the web on those old devices but it really doesn’t work in practice. The strong desire for gopher is for a protocol where every server you might want to connect to actually works on your ancient client.
If you're going to be constrained to a tiny subset of servers running gopher, you can constrain yourself to web servers catering to old clients. Gopher adds nothing there, it's just a less capable protocol than HTTP.
I'm not opposed to someone running a gopher server for fun. I don't hate the protocol. There's just a really strange meme that gopher somehow facilitates something the Web does not. If you want a hierarchy of directories with documents, the web can do that and be used by the software everyone already has in their pocket.
I think the draw of gopher is that one hundred percent of gopher servers work on old clients. Whereas with http you just don’t know. A website that works fine on an old browser can link to a website that doesn’t. That makes the whole thing feel broken. So the desire for gopher is the desire for something reliable, consistent, self-contained, and separate from the web.
> The signature aspect only ensures you are connecting to the right server, not that the content hasn't been tampered with. It is not PGP.
The “T” in TLS stands for “transport.” It absolutely does guarantee that the content has not been tampered with, unless you don’t trust the website itself (in which case you would trust the PGP key no more or less than you would the website’s certificate).
But regardless, this kind of argument against encryption is just plain bad: there are all kinds of things you might want to do on the Internet that don’t involve passwords or “secret” information that a passive adversary (like a government or ISP) shouldn’t be able to observe. The contents of the political websites you visit, for example. This is especially true on our modern Internet, where observing individual endpoint IPs doesn’t really tell the passive adversary much (thanks to CGNAT and SNI/multiple sites hosted on the same IP).
Encryption: Well, not encryption exactly, but encryption is just a means to an end, which is privacy of communication. Libraries offer private reading rooms and generally refuse to release lending history to any random person who asks.
Yeah, I mean you can come up with some nonsense to make anything “true” but books generally don’t have encryption or compression and are some of the best communication technology ever. Among a whole bunch of others. The original assertion that communication technologies without both(!) those things are “toys” is just weird nonsense.
[edit] downvoting will not make this:
> Communication technologies without both [encryption and compression] are no more than toys.
Anything but a stupendously and obviously incorrect claim.
How about street signs? Letters? Newspapers? PA systems FFS? All toys I guess.
Wikipedia is roughly analogous to a university library and notice that Wikipedia connects with https by default now. Why should we care that our connection to Wikipedia is encrypted (assuming we’re not going to log in)? Because encryption protects our privacy when browsing the encyclopedia. In a physical library your privacy is protected by logistics: if someone wants to spy on what you’re reading they need to actually travel to the library or otherwise set up some kind of physical surveillance there.
Unencrypted communications over the internet do not present any logistical challenge whatsoever for would-be snoops. All unencrypted traffic can be scooped up and catalogued automatically to build a detailed dossier on all users. This sort of dragnet surveillance was never feasible at physical libraries.
Only one byte for the filetype is comically shortsighted; even HFS from 1985 had four-byte codes. The lack of Unicode is understandable given the era, but the lack of support for any other encoding is another "this is a toy system" red flag. Encryption and compression are the kind of thing that gets layered on later as the protocol matures; leaving them out of the first implementation is not an issue.
I'm sure if Gopher had stuck around there would have been a version 2.0 of the spec that addressed these issues, although that one byte filetype would make for some fun backwards compatibility hacks.
The bigger issue is "what compelling reason would you have to use Gopher instead of HTTP"? What does it do better? Small reductions in overhead aren't generally worth the tradeoff.
On a side note I remember one of the earliest search engines for Gopher was named VERONICA, for Very Easy Rodent Oriented Net-wide Index to Computer Archives, a glorious backronym that came about because they wanted to riff on the FTP search engine being named ARCHIE.
1. With libmagic these days this is not all that useful anyway. Type "9" means "item is a binary file; client must decide what to do with it." Just use type 9 and then determine the filetype from the first few bytes.
2. Screams for an upgrade. Could assume if encryption is used, UTF-8 is the new default charset.
3. Most TCP services are wrappable, e.g. over SSL or ssh with compression.
4. Fortunately the once-feared gopher police, famed as both jaw-droppingly vicious and most morally dubious in method, have long since retired to the Bahamas.
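Point 1's "type 9 plus sniffing" idea is easy to sketch. The signatures below are standard magic numbers, but the helper itself is hypothetical:

```python
# Sketch: guess a MIME type from a file's leading "magic" bytes, as a
# client might do after fetching a generic type-9 (binary) item.
MAGIC_SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"GIF87a", "image/gif"),
    (b"GIF89a", "image/gif"),
    (b"%PDF-", "application/pdf"),
    (b"\x1f\x8b", "application/gzip"),
]

def sniff_mime(data: bytes) -> str:
    for magic, mime in MAGIC_SIGNATURES:
        if data.startswith(magic):
            return mime
    return "application/octet-stream"  # opaque binary fallback
```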
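And point 3's wrapping approach, sketched with Python's ssl module. The port is hypothetical, since plain Gopher speaks cleartext on port 70 and any TLS port would be a server-by-server convention:

```python
# Sketch: a Gopher request tunnelled through TLS. Assumes a server
# that accepts TLS on some agreed port; stock Gopher does not.
import socket
import ssl

def fetch_over_tls(host: str, selector: str, port: int) -> bytes:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as raw:
        # Wrap the plain TCP socket; the Gopher exchange is unchanged.
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(selector.encode("utf-8") + b"\r\n")
            chunks = []
            while chunk := tls.recv(4096):
                chunks.append(chunk)
            return b"".join(chunks)
```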
2. This would be Gemini. Most people here don't like it either. Also, most gopher sites I visit use UTF-8, and my client (which I wrote---it's not hard) hasn't had an issue with mojibake.
3. It's not as simple as wrapping TLS around the TCP connection. [1]
3. Break backward compatibility. It's fine. Nobody needs to run an old client these days. I'd worry more about ensuring curl and wget have support than old gopher clients.
3. Hmm. I think the biggest issue then might be the inability to extend the protocol, not the lack of TLS.
HTTP(S) at least had some back and forth that allowed clients/servers to do protocol negotiation; not perfect, but workable enough that crashes normally do not occur...
Although speaking of crashes, if your app is so poorly written that it crashes easily, perhaps it should be rewritten. If, e.g., gopher were to come back in a meaningful way, that sounds like rich pickings for security folk.
Gemini is, explicitly, an art project. It's not, nor is it meant to be, a serious communication technology.
This both explains some of the bizarre (and anti-intellectual) design decisions, and also makes it crystal clear that it shouldn't be used for anything serious (e.g. serving the course website for a class you're teaching).
Libmagic is a heuristic, and I wouldn't trust it in this role. Setting a magic value that tells you to look elsewhere for the mimetype would be a reasonable solution.
The lack of encryption is a problem, and the filetypes issue seems to be one too.
> Non-europeans can eat shit and die.
A highly inaccurate and emotive way to put it. It appears it provides all characters used by English, Spanish and Portuguese which, while European in origin, are the main languages across the Americas, Australia and New Zealand. On top of that, English is widely spoken (often as a first language) in much of Africa and Asia. It also provides full support for Bahasa Indonesia and Afrikaans.
On the other hand, it does not provide any characters for Russian (the language with the most speakers in Europe) or other languages written in Cyrillic letters, or full support for French or German, the next most widely spoken languages in Europe.
Most of the population of Russia lives in Europe; only a minority of Russians can be counted as Asian going strictly by geography, and I think Russian would still be the most widely spoken language counting only its speakers in Europe.
They were talking about finally adding MIME types to Gopher+ in 2002, which I think was too late to compete with the Web. Sniffing for UCS-2 or UTF-8 or Latin-1 might have worked or might have given us more https://en.wikipedia.org/wiki/Mojibake.
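A sniffing heuristic along those lines is short, because ISO-8859-1 assigns a meaning to every byte while UTF-8 and UCS-2 are easy to rule in or out. A sketch (the function name is mine):

```python
# Sketch: best-effort charset detection for fetched text.
# UCS-2/UTF-16 usually announces itself with a byte-order mark;
# strict UTF-8 decoding rejects most Latin-1 text, so try it first
# and fall back to ISO-8859-1, which accepts any byte sequence.
def decode_text(data: bytes) -> str:
    if data[:2] in (b"\xff\xfe", b"\xfe\xff"):
        return data.decode("utf-16")
    try:
        return data.decode("utf-8")
    except UnicodeDecodeError:
        return data.decode("iso-8859-1")
```

The fallback order matters: decoding UTF-8 bytes as Latin-1 is exactly the mojibake failure mode ("café" becomes "cafÃ©"), so Latin-1 must be the last resort, never the first guess.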
ISO-8859-1 only supports Italian, Spanish, Portuguese, French, German, Dutch, English, Danish, Swedish, Norwegian, and Icelandic. So, about half of Europe.
Lack of support for "non-Europeans" is a very generous way to describe ISO-8859-1. Apart from most European and non-European languages, it doesn't support emojis, mathematical symbols, combining marks, and many other characters that are normally taken for granted.
What makes gopher still relevant in 2024, for information-centred activities (like research [0], as opposed to shopping, gossiping and showing-off) is that it's structured.

What's becoming irrelevant about the Web is search. The decline of search - and the whole spider/retrieval model - is a real problem in the post AI-SEO-optimised spam epoch.

The Web is really a giant heap. People stopped usefully linking over a decade ago. By contrast Gopher offers a different way of discovering content because it tends toward structure.
You're pretending you can't do on the web what you can do on Gopher. This is just the most absurd argument.
Anyone building a gopherhole can build the exact same structure on a website. Better yet, anyone can access it, because most of the planet carries a web browser in their pocket. If you want to put together structured content on the web, no one is stopping you.
Fetishizing gopher is the weirdest thing. You're making arguments about SEO and then talking about a protocol that offers zero discovery unless someone fires up some old Veronica servers. If you're unconcerned about SEO/Google there's no reason to include anything SEO or Google related on a website.
It's fine to dislike how Google has ruined the web. That doesn't make the web itself bad or useless. Today you can use the web the exact same way we did thirty years ago. It has way more capability (even ignoring JavaScript bullshit) than gopher and you can reliably send a link to content to someone that isn't obsessed with a dead protocol.
That's like saying you can do anything by writing a book that you can do with haiku.

It's trivially true. But it misses the value of strongly imposed constraints. I know that goes against a prevailing (well, USA) culture of "bigger and more is always better".

Sometimes less is more.
> It's fine to dislike how Google has ruined the web. That doesn't make the web itself bad or useless.

It sort of does, though. It's like saying: yeah, it's bad that loads of drug dealers and thieves moved into the neighbourhood, but that doesn't make the neighbourhood itself bad. For practical purposes it totally does. It may not make the buildings and streets bad, as if a technology in isolation could be "bad", but it makes nobody want to go there.
Your neighborhood analogy is nonsensical. You can't ignore drug dealers in a neighborhood. You can ignore Google. You can build a website and not give two shits about Google. There's zero requirement for a website to run ads, host trackers, or even use JavaScript. You can just type some HTML into a text editor and put it on a server.
Websites are not physical neighborhoods. They're not affected by things in physical proximity, and neither are their visitors. A website, like a gopher site, only contains the content the owner put up. You can put up a bunch of plain text on a website to show how much of a hipster you are. No one is stopping you.
True but incidental, possibly anathema, to the point...
Which is that physical neighborhoods make up the user-base of the web, whether they be large server farms or small form-factor computing devices.
And because those users live in real neighborhoods, connected by real wires (or radio waves connected to towers that are connected by wires), the following statement simply isn't true:
> Your neighborhood analogy is nonsensical. You can't ignore drug dealers in a neighborhood.
You can't ignore cyber-criminals or SEO farms that are trying to sell you stuff. There are real neighborhoods made by real people and real devices, just because you can't draw a nice circle around it on a single landmass on a map doesn't make them less real.
> You can build a website and not give two shits about Google.
You can, but you building a website is not you using a website. This is no different than glorifying Gopher: yes, the protocol is there, but if no one is using it the way you intend to use it, then it hardly matters.
I don't think gopher is necessarily the best protocol, but there is probably a real "neighborhood effect", at least for a while (until the metaphorical drug dealers move in again).
Neither gopher nor http+www does much for solving that issue, though.
To be fair, you can write Hungarian without accents. It's fairly readable, it's not a huge deal. The bigger problems happen with languages written with non-Latin alphabets.
The web took over, but as discussed on HN before, the University of Minnesota did their best to help kill Gopher when they tried to license it for commercial use.
> the University of Minnesota did their best to help kill Gopher when they tried to license it for commercial use.
They didn't just "help" kill gopher. They assassinated and buried it. As soon as they announced this, most people dropped gopher like it was radioactive waste. It was widely perceived as greedy overreach by almost everyone else.
With a readily available (and some would argue superior) protocol just sitting right there it was the most boneheaded of decisions. It was worse than when Unisys started getting all litigious with GIF and forced the creation of PNG, in this case the obvious alternative was already in widespread use which made the switch extremely easy.
While its focus is Gemini, the LaGrange Browser [1] supports Gopher too (as well as Finger, and possibly FTP in the future) on almost all modern platforms. Bombadillo [2] is a great TUI gopher client as well.
Setting up a Gopherhole is relatively easy, as most pubnix/tilderverse communities offer Gopher hosting for free [3].
I haven't tried that one but VF1 is a nice gopher client. It's what I generally use for browsing gopherspace. And I think both lynx and elinks have gopher support as well.
This is good to know. I used to enjoy gopher browsing. Compared with today's saturated web mediascape, publishing over gopher might actually make sense.
>The misconception that the modern renaissance of Gopherspace is simply a reaction to "Web overload" is unfortunately..
It isn't a misconception. Gopher is small underground counter-culture to the WWW. Lack of corporations, lack of ads, and lack of complex tracking is what keeps Gopher organic and therefore relevant. These other things, low computing power and fixed hierarchy, are nice-to-haves. It would collapse into darkness if Google were to begin indexing it and injecting ads onto it.
First, it's a simple protocol. Between learning the basics of networking and building your own Gopher client, Gopher offers a great opportunity to dip your toe into the pond.
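It really is tiny; the entire client side amounts to "connect, send the selector plus CRLF, read until the server closes". A sketch:

```python
# Sketch: a complete (if bare-bones) Gopher client request.
import socket

def gopher_fetch(host: str, selector: str = "", port: int = 70) -> bytes:
    with socket.create_connection((host, port), timeout=10) as sock:
        # The whole request is the selector terminated by CRLF.
        sock.sendall(selector.encode("latin-1") + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):  # read until EOF
            chunks.append(chunk)
        return b"".join(chunks)

# e.g. gopher_fetch("gopher.floodgap.com") would fetch that server's root menu
```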
Second, I have the DiggieDog browser installed on my phone. During my commute I pass through stretches with limited connectivity, and reading some of the phlogs I follow is a great pastime.
I don't think Gopher needs to be the "next big thing". It already was, a while ago. The limitations are why it got superseded by protocols that cater to the needs of the many. That said, it still has its place on the Internet as a retreat for a community that values Gopher's original simplicity. And I think that's absolutely valid.
I agree! My little ol' gopher server gets hundreds of hits a day, thankfully not from robots. It's a repository for the unusual and hard-to-find, so (real) people come looking for that. Last year I made a token attempt to modernise it, and opened up a number of categories to my web server. It's all good fun, simple to manage and by and large, used by real humans. What's not to love?
Shameless plug, but I am working on a Swift Client/Server Gopher implementation[0] used in a SwiftUI app for macOS/iOS/visionOS[1]. I will soon be making a Gtk implementation for Linux!
Why? It is a pretty simple protocol to implement, and I love reading people's daily phlogs
The arguments feel quite weak. Gopher doesn’t bring much to the table IMO, the simplicity is overshadowed by lack of content and interactivity (same problem applies to Gemini).
The simplicity, especially the constrained specification, is part of the appeal for some people and part of the reason for the (mild) resurgence of interest in Gopher, Gemini, and related protocols: https://www.linux-magazine.com/Issues/2021/245/The-Rise-of-t...
Is it possible to browse a forum and post/reply to comments on gopher/gemini? I saw a debate where one party insisted it was possible and supported, but no one was willing to provide evidence like documentation or code samples.
Same people around me who talk about Gopher are also promoting Gemini.
I admire their efforts to keep simple text based systems simple but I just don't see the point. HTTP has its flaws but at least we can all work together on it.