If something terrible is posted online, and it is true but taking it down hurts nobody, it seems like common courtesy not to repeat it. You have to weigh the public interest versus peace of mind for the individual. It's not cut and dried.
There are things you choose not to say in real life, even if they are true. Just because something is on the internet does not excuse you from that courtesy and respect. Technical realities (such as caches) are no excuse for failing to act respectfully. You do not go around telling colleagues at work about what other colleagues have done. What would you have to gain?
Imagine if you had relations with someone and it was recorded online. The allegations might be true, but who has more of a right to decide whether that information stays up there for all to find? Should you be allowed to post all you like about others, disregarding the potential effects on that person? Or should you be thoughtful about what is in the public interest?
Now imagine a site that records who had relations with whom. There is no maliciousness in this one: it's more of a public record or encyclopedia. Do you have the right to opt out of it?
To go further with my example: imagine having a sexually transmitted infectious disease. Is the answer the same?
I think placing the onus on google for this is the wrong precedent and I think it speaks to the public and the court's misunderstanding of what a search engine does.
If this man had a problem with his debts being online, he should have contacted or sued the sites which displayed this data. This is like suing TV Guide because there's a show on TV that is offensive to you.
> I think placing the onus on google for this is the wrong precedent and I think it speaks to the public and the court's misunderstanding of what a search engine does.
I also think it's wrong that they didn't sanction the original publisher, which makes far more sense as a solution.
However, Google should be subject to the laws of Data Protection - a search engine is, in some sense, a data repository that may handle personal data. It is unlike TV Guide in this respect.
Do you think search engines should be proactively or reactively responsible for not displaying personal information?
I mean, this stuff is literally in news articles about the guy, there's really no way to proactively not index articles which "affect someone's honor or dignity".
Absolutely not so. That's not how free speech works in the EU, and that's not what the court ruling says. EU citizens have a general right to information privacy.
The original publisher was "...legally justified as [the publication] took place upon order of the Ministry of Labour and Social Affairs and was intended to give maximum publicity to the auction..."
My understanding: La Vanguardia (exclusively) gained the legal right to process private information for (only) this specific reason - but Google processed the private information without that legal right.
Censorship is never a good thing. Just because you don't like information being made public, doesn't mean that you should be able to get the government to force other people to hide that information (assuming the information itself isn't illegal).
If I steal money from your house, and I'm arrested and sent to jail, should I be allowed to have Google remove any information related to that from search results?
You have to weigh the public interest versus peace of mind for the individual.
Exactly, and in this case you've put the desire of any given individual over the needs of the public interest.
This really is not the same as censorship, equating it does not make an argument.
Your example uses a robbery - a crime which is very much in everyone's interest to be public. It's useful to know where the crime was committed and what kind of crime it was.
My personal life does not concern you. You gain nothing from having access to it and lose nothing by losing that access, for you should never have had it to begin with. The negative consequences of it being public fall on me, not on you. This information creates an asymmetry of power and social obligation.
If you really do not care about your information being public, post something damaging about yourself. Someone would probably mirror it publicly so it couldn't be removed. If you aren't willing to post it, why not?
Social conduct is a real thing.
Forcing someone else to hide information is an interesting part of this dilemma. That someone else should have no interest in the data either - it does not cost them to remove it.
Suppose a landlord owns a wall, and someone graffitis it with some hurtful truth about a third person. The graffiti can be considered unnecessary and should never have appeared in the first place. The landlord would have no qualms about removing it.
This really is not the same as censorship, equating it does not make an argument.
Explain how this is not censorship. The government in this case is forcing Google to remove (correct and public) information from their search engine because "they say so."
That someone else should have no interest in the data either - it does not cost them to remove it.
Of all places, it's strange to see someone on HN refer to development resources as "no cost". How do you expect Google to handle these personally-filtered search results without spending money building the framework necessary to handle them?
The person who wants the information is the one causing the removal of the content - not the government. It is not about the government trying to suppress information. The government (or court system) is acting on behalf of the individual.
Like anything involving people, it is merely a cost of living in a society, one that makes us better off if we accept it as the cost of doing business. There are many costs like this. For example: your neighbours are outrageously noisy and unruly and are making it difficult for you to sleep, and you have tried asking them to keep the noise down. Our society has a slow and expensive process for coming to some sort of resolution. It feels rather wasteful, since your local government and police have better things to do, but it's not really their fault; you just happen to live in their catchment area.
Rather than just accepting that people can do what they want without regard for others, a pleasant society provides solutions to social issues like this. Without them, life would be less smooth.
Google has many engineers and big datacentres, I am sure they will manage.
Except in this case there was no defamation. The original news source is allowed to stay just as it is, Google (or anyone else) just isn't allowed to link to it.
To answer your question, yes I should have the right to post a statement that you don't like. I should be able to express myself, make an observation, re-iterate a fact, without censure.
If you type the current year into your calculator, and then hit subtract, and then you type the year I was born in to your calculator, it shows you how old I am.
But that's protected information! You could use that to discriminate against me! Texas Instruments must modify their SUBTRACTION ALGORITHM to not show how old I am!!!
A number on a calculator is data, not information.
Do you want a perfect search algorithm that knows no bounds? Would you be okay with it finding a webcam stream into your home? Or your phone, or your GPS? Would you be okay with being able to search everything you have ever visited online or said to someone? Would you be okay with this search algorithm allowing anyone to find your political allegiances?
No? There has to be some limit then. That limit comes from people and is enshrined by the actions of people, constituting common courtesy.
A search algorithm can only search what it is allowed to search.
If you don't want a search engine to find a webcam stream of your home, then don't allow a search algorithm to index it.
Don't put it online in the first place, or if you do restrict access to people with a log-in.
> Would you be okay with being able to search everything you have ever visited online or said to someone?
If I want to allow a search engine to access that information, then yes. GMail does search everything I have GMail'ed to or from people. That's incredibly useful. Chrome History Search does let me search sites I have visited.
> Would you be okay with this search algorithm allowing anyone to find your political allegiances?
If I post, "I, Viking Coder, am a tea party Conservative," then yes. (I'm not, by the way.)
> No?
I didn't answer "no" to any of your questions.
> There has to be some limit then.
Yes, people need to limit what information they, themselves, SHARE.
And if someone else shares something ABOUT YOU with everyone else on the internet, then you should be mad at them.
The fact that the search engine is a magnifying glass / microscope / index tool that allows you to search the data on the internet doesn't give me any right to be mad at that tool.
The article and discussion is not about what you yourself put online, it is what others put online about you which is then indexed. The cost of removing from an index is insignificant compared to the personal cost to the person.
Only that Google is responsible for the effects of their service.
To a limit though, right? I mean, you wouldn't expect Google to be sued because someone bombed a location they got directions to via Google Maps. So, why in this case are they any more responsible?
Certainly there are limits. That's why it is an issue for courts and legislatures to decide what they should be. My argument is simply that it's entirely appropriate for courts to decide based on the social harms and that 'it's just an algorithm' is utterly irrelevant.
This is a dangerous precedent. Google should (and probably will) appeal, and I really hope they win. Who knows, the "right to be forgotten" could turn into a "fight to be remembered" à la Trotsky.
I agree, as it seems like it allows people to basically curate their google results so that only positive things appear about them.
I agree there should be circumstances where content should be removed from indexes but if the content publisher is not required to remove it, and it is legal content, why should google or other search engine providers have to stop indexing it?
What I believe would be a functional system is one where a court order for the content publisher to take down the content also requires Google to stop indexing it, so cached versions can't be found even if the publisher doesn't comply. But a ruling that Google has to act as a gatekeeper to legal, already public content about an individual, just because they feel it is no longer relevant, does seem like a dangerous precedent.
Really, this is an age-old problem. Journalists have had to deal with it for years. The general consensus was that if it's true, tough crap. The difference between journalism and Google is that anybody can google someone. Journalists generally had to spend a lot of time digging up someone's past. The "problem" is just how efficient Google (and other web crawlers) have become at aggregating data. And I'm kind of on the fence about that.
Exactly, it's actually quite similar to those infamous porn DNS filters. If there is a plausible reason, why not remove the source? The ruling, as I understand it, just alters the map while the city is still there.
I disagree. I have a friend that had some crazy person badmouthing him on various usenet groups. It really bummed him out because when you googled his name, you'd see all those things first. Replace "usenet" with "blog" and you could have the same problem today.
It almost seems backwards until you stop and think about it, but the existence of these kinds of laws actually makes this problem worse for most people.
There is no way for anybody without significant means to continuously monitor the internet for negative search results and then continuously sic lawyers on publishers, search engines, blogs, websites, and forums to get this stuff removed. And even with unlimited means, there is really no way to put facts back in the bottle once they're out.
The existence of this kind of law makes it much more likely that most people will automatically believe anything negative they read on the internet about you, because otherwise they'd assume that you'd have been able to get lawyers to remove it.
Right now, today, there's a hint of doubt about the veracity of anything on the internet, and that's actually a good thing for people who have been unfairly attacked.
But if this law becomes widespread and sets a very strong precedent, then people will automatically tend to believe anything that they read about online.
This is a net loss not only for freedom, but for most people who have negative search results.
> But if this law becomes widespread and sets a very strong precedent, then people will automatically tend to believe anything that they read about online.
What if your friend had actually done something wrong, though - for example, scammed multiple clients out of money? Giving him the ability to remove that information means future clients can't get a fair warning. I'm not knowledgeable enough about the concept as a whole to have a strong opinion yet, but it seems like there are plenty of cases where I think something should remain public even if a person doesn't want it to be. If someone posts something false, we have legal systems to handle that complaint. It's not perfect, but this seems more dangerous.
replace 'usenet' with 'search engine' and .... your point makes no sense. Your friend should have the slanderous comments removed from Usenet's servers (and mirrors). Then, when Google Search crawls them again, they won't show up in _search_ results. If, on the other hand, you just stop search engines from showing the results _that are on Usenet_, you haven't fixed anything, you've just buried your head in the sand.
First, it's not a precedent, Google filters search results to comply with the law in pretty much every jurisdiction, in Europe they are already filtering Maps to comply with local individual privacy law.
In the US these are frequently DMCA requests, and in Europe frequently anti-hate requests. In Europe they simply have found a more expansive view of privacy than that of the US.
If there's a precedent here it's a pretty minor one.
Your argument about this being a precedent goes counter to what the article itself says. I honestly have no idea whether you are correct in your judgement about whether this is a precedent but it seems like this is a complaint about the article more than the post you replied to.
"According to The Guardian, the case will be used as a precedent in more than 200 cases pending in Spain's court system; in many of them, plaintiffs are asking for links to be deleted."
The case was about google displaying search results of data that others had posted online. It was not about taking down data that google was hosting (eg DMCA).
This is a case of shooting the messenger and leaving the person sending the messages alone.
"Gonzalez had his home repossessed 16 years ago. If you Google his name, you can still see newspaper stories about his debts. 'It hurts my reputation,' he says in Spanish. 'My debts are long paid, but those links were the first thing you'd see.'"
Congratulations, Mr. Gonzalez; now when people Google your name, your reputation won't only be defined by your foreclosure and past financial troubles--which is now discussed in 10,000+ articles (good luck with those takedown requests!)--but also for being the jerk who killed the internet for Europe.
Killed the internet in Europe? Come on.
Free speech in the US and in Europe are two different things. But neither is better, it's a matter of culture. This decision is not backwards and it's not killing free speech as I know it in France (and most of the EU).
Now, on the technicalities of the decision, there's probably something to be said.
I intended my comment hyperbolically, consistent with my overall satirical tone.
To be clear, I don't believe this will kill the internet, and I don't think it is necessarily undermining European free speech protections. My concern is not as much with freedom of speech and the speaker's right to disseminate information about others; my concern is with the role of the government in curating the universe of "relevant" knowledge that we consume.
As I mentioned in a lengthier discussion of this issue yesterday (when I assumed a more serious tone), I am concerned that the EU's opinion is effectively denuding the internet of its power for disseminating knowledge quickly and cheaply, and thereby democratizing the processes of determining truth. The court has approved a pernicious form of content restrictions that will be based on the utterly toothless (not to mention absurdly subjective) standard of "relevance," and driven by individuals whose interests are contrary to the public interest in information.
It may be more cost effective for Google to stop serving search results in Spain.
Before you call me ridiculous, how much do you think Google makes off of those 47 million Spaniards each year, and how much do you think it will cost Google if those 47 million Spaniards or a large portion of them start demanding the company remove information about them?
If the cost to comply is greater than the profit, what would you do?
I believe the European Court of Justice's ruling applies to all of the EU countries, not just to Spain. I think it's unimaginable that Google would stop serving the entire EU market on principle.
But you raise a neat question: assuming it was just Spain (or the next similarly sized country to reach this decision), what would happen if Google threatened to entirely pull out of the country--i.e., stop serving search results or supporting any Google services there? Are they powerful enough to influence national policy? It seems unlikely, but it's a neat hypothetical.
Pulling out of China is a good PR move (although that's not the main reason).
Pulling out of Spain, not so much. The shitstorm that would ensue would be very detrimental.
And tbh, you won't see many lawsuits where people try to enforce their "right to be forgotten". We may have stronger libel laws here but we don't have the lawsuit culture of the US. There's no way it'll be cost-effective for Google to pull out from Spain or the EU.
A precedent for which the case becomes invisible to the public because all articles about it had to be hidden because of the precedent itself....
(I realize the precedent wouldn't allow for that because an important legal case like this would remain relevant, and therefore not fall into the category which it allows for removal of, but its funnier to me to imagine that it could apply to itself.)
No, he should sue the sites which host the original article IF they're causing a harm. Since the articles are factually correct and dated appropriately, I don't think any harm is being done to him merely by the presence of those articles.
Google searches discriminate based on the kind of information that is available about a person. Someone who has a strong online presence will look better than someone who does not because then these public records will be more prominent. That is a harmful form of discrimination.
What do you think should happen when someone searches "Tom Cruise"?
Should it somehow show you all 14,724 people who happen to have that name, somehow cleverly giving equal visual weight to all of them? Not accidentally listing one of them first, etc?
No, it's not harmful discrimination. This is a flawed line of thinking.
The Spanish fellow had a problem with a government agency that would not remove his records. So instead of dealing with that government abuse he sues a private company that reports using government data. Then the government says the private company has to stop displaying the information. So the government creates the problem and then hides it. Worse, the main beneficiaries of this law will not be ordinary people, it'll be criminals, politicians and powerful people who now have a way to hide their wrongdoings.
There are probably hundreds of reasons why archive.org would be illegal in Europe. It's wonderful that they are based in a country that respects freedom of speech.
That's funny; most of HN seems pretty disappointed that it's AT&T's free speech right to hand over your (technically AT&T's) phone records to the gov't (or anyone else they feel like).
AT&T does not have a free speech right to expose information that was not publicly available. Whereas if AT&T were providing information that you had agreed to make available, or that was already publicly available, then they would arguably have the right to provide that information again.
So - how long does it take to crawl the entire web yourself with a modern PC and broadband connection? Is it feasible to have your own uncensored search engine (even if it isn't frequently updated)?
I can't reply to the other comment, but the court case in general is not about "creating profiles" of people. It only deals with things found and presented from web-spidering. I dare say that duckduckgo would have the same results.
Are you opposed to spidering the web and presenting the data based on keyword search? Why should Google be held responsible for indexing publicly available webpages?
I am opposed to the idea that because something is a machine process, the people who operate that machine should not be held responsible for any harm it does.
Who gets to decide what is harmful? Do you simply defer to the courts?
Do you think Google holds a larger, equal, or smaller responsibility than the content provider who is actually hosting this "harmful" information?
In this case, I don't see anyone at fault and don't think a crime has been committed. The data presented was true and of public record.
I would see a case if Google was somehow treating this man as a special case and only indexing negative things about him, or if google treated his publicly available data differently than any other person's
The responsibility could be measured in how much exposure each site gave to the information.
Who gets to decide what is harmful is indeed generally decided by courts.
I agree that it shouldn't be a special case. Google does discriminate based on how much other information about a person is posted online: someone with a lot of professional info online will look better than someone who doesn't use the internet much, so that only these public records appear against their name in search. That is a dangerous form of discrimination.
Because every time you list something that a search engine does, you conclude that it's harmful.
Yes, you are opposed to search engines. You don't think they should echo information (an intrinsic function of a search engine), and you don't think they should prioritize results (an intrinsic function of a search engine).
Show me a search engine which DOESN'T echo information posted online, or prioritize results.
SHOW ME JUST ONE.
Otherwise, admit that you are opposed to the operation of search engines in general.
Because your original comment hints that you don't know how a search engine works.
"Mario Costeja Gonzalez" is a sequence of three words to Google, when you search for that.
It's not like Google found a unique human profile record corresponding to Mr. Gonzalez, and then showed you all of the web pages corresponding to that unique person.
No. It's just words to Google.
Now, "George Clooney" is much more likely to be a person with an actual record someplace inside Google...
I know exactly how a search engine works, so your explanation is irrelevant.
I also know that Google is a system operated by humans who are responsible for its effects. If you operate a machine and it harms people, you are responsible. The fact that the harm was done by an automatic process does not change this.
I also know that Google is a system operated by humans who are responsible for its effects. If you operate a machine and it harms people, you are responsible. The fact that the harm was done by an automatic process does not change this.
How far down the line does this logic go? Are the people who created the programming language that Google used to write their search algorithm responsible as well? What about the construction company that dug the ditch for the fiber cables through which Google's information runs?
I know exactly how a search engine works, so your explanation is irrelevant.
I know how a search engine works. Saying I don't know simply proves that you are willing to make statements that you provably cannot support, which weakens everything else you say.
I simply disagree with your views on the responsibility of corporations for what they publish.
The comments about construction companies and programming languages are a straw man. The issue is the information that Google the company provides to searchers about people.
Google the company is providing a service for which they as a company are responsible. The methods they use to provide the service don't change that.
> I know exactly how a search engine works, so your explanation is irrelevant.
No, you don't.
> Google spiders the web and creates profiles about people
It does not create profiles of people. It creates indexes of words.
When you look at those words, YOU conclude that it's a profile of a person.
CLEARLY, if Google is designed to operate on profiles of people, then it should handle that data carefully, etc.
But CLEARLY if Google is designed to operate on indexes of words, then it's nowhere near as clear what should happen if you happen to search for the name of a person.
You're now demanding that Google LEARN the difference between words and the names of people, and that it behave differently for the two.
So, yes, EvanKelly asked an insightful question:
Are you opposed to the idea of indexes of words in general?
To which you must honestly answer YES. Because you apparently do not believe that it can be considered JUST an index of words. You want this extra layer of understanding.
The telephone company is not responsible for you getting death threats. The post office is not responsible for stopping you from getting a pipe bomb. Making information like this available on the web IS THE PROBLEM.
Indexing words should be innocent.
It's like you want to sue the owners of a building because a bullet ricocheted off of it and killed your dog. Go after the jerk with the gun, okay?
What have these straw men got to do with anything other than that you don't like the idea of Google having any responsibility for the negative consequences of their service?
You keep asking that and I've already answered. No. The harm is in the information you publish about people. Using a word index algorithm in the process is irrelevant to that.
Google IS a word index algorithm, which ECHOES the information.
If you don't want something to show up in the index, then stop it from being published.
That's why we keep asking:
I assert, "Google IS a word index algorithm on the internet's data. Is it harmful to run a word index algorithm on the world's data?" To which you say "NO."
Then, somehow, you're still asserting that Google did harm.
Google Search is JUST a word index algorithm. That is the entirety of what it is. So, by your argument, it is NOT harmful to run Google.
"Is it harmful to run a word index algorithm on the world's data?"
That question COULD NOT BE MORE CLEAR.
It doesn't matter if Larry Page and Sergey Brin form a corporation named Google to do it, or not.
And yet, you say "No, it's not harmful" when we ask you, but then you say, "Yes, it is harmful, if Larry and Sergey make a company called Google to do it."
Any time you publish information on the web you need to consider whether you are defaming someone. This applies whether a word index is part of your publishing process or not.
These laws aren't written about 'word indexes' they are written about the rights people have to publish information about other people. That is what is at issue.
Please just admit that you can't answer my question.
You assert that A) is harmless, and B) is harmful, but you can't explain the difference between A and B.
YOU'RE the one who said that A is harmless, and B is harmful. NOT ME.
If you conclude from all of this discussion, that it is inherently HARMFUL to operate a word index algorithm on the internet's data, that's fine. I disagree, but that's at least a coherent position.
But for some bizarre reason, you claim that operating a word index algorithm on the internet's data is not harmful... but that what Google does is harmful.
I'm sorry you're incapable of defending your own thought process.
The harm is done by what Google publishes. I've told you this repeatedly. The fact that they use a word index as part of the mechanism is irrelevant.
I have repeatedly defended my thought processes. You have simply ignored what I've said, and focused obsessively on whether 'word indexes' are inherently harmful. This has no more relevance than asking whether 'cpu's' are inherently harmful.
Many companies use word indexes in what they do. Not all companies are Google. Therefore Google is not just a word index.
Suppose a page at a.com/b contains the text "gress is dumb". A word index of that one site would look like these key / list-of-values pairs:
gress: [a.com/b]
is: [a.com/b]
dumb: [a.com/b]
If I search that word index for the word "gress," I might get as a result:
a.com/b "gress is dumb"
So, now you're pissed, because the word index says you're dumb.
I say you can be pissed at the operators of a.com, or the specific author of the web page b. Maybe even take them to court.
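To make the analogy concrete, here is the whole mechanism as a few lines of Python (a toy inverted index; a.com/b and its text are the made-up example above, not real data):

```python
# Toy inverted index: map each word to the list of pages containing it.
# The single "crawled" page is the made-up example from this thread.
pages = {"a.com/b": "gress is dumb"}

index = {}
for url, text in pages.items():
    for word in text.lower().split():
        index.setdefault(word, []).append(url)

# A search is just a dictionary lookup. There is no notion of a "person"
# anywhere in this structure, only the string "gress".
print(index.get("gress"))  # ['a.com/b']
```

Everything else a real search engine adds (ranking, scale, snippets) is refinement on that lookup.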
YOU'RE the one who said that operating the word index was harmless.
This legal case is EXACTLY ANALOGOUS to what I described, and the only important function of Google Search Engine, as far as this debate goes, is that the index under the word "gress" links to the site a.com/b.
It's not that Google USES a word index. It's that Google IS a word index. And you said that operating a word index was harmless.
And here's how stupid your last paragraph is:
All men are mortal. Socrates is a man. Therefore, all men are Socrates.