Google vs. Belgian Court, and the winner is…

Google’s European Director of Communications and Public Affairs, Rachel Whetstone, has posted some of Google’s side of the Belgian case against the company.

“Whilst we aren’t allowed to comment on the judgment itself, we thought you may want to know the facts of the case — what actually happened, and when — and the issues it raises,” Whetstone wrote.

The lawsuit, filed in Belgium by the copyright management firm Copiepresse in August 2006, accused Google of infringing the copyrights of publications the firm represents. This month, the Court of First Instance in Brussels ordered Google to remove the publications’ content from its services.

So far, no problem, and Google complied. The second part of the judgment, which required Google to post the details of the decision on its Belgian home page for five days, drew a strong response, and Whetstone addressed it at length:

Last week we asked the court to reconsider its decision and requested that the requirement to post the ruling on our home pages be suspended. The court on Friday 22nd September agreed to reconsider its ruling in November this year, but maintained the requirement that we must post the initial judgment to our home pages for five days or face a fine of 500,000 Euros a day.

As the case will be heard in November, we can only offer general comments on the larger issues it raises at the moment. Any legal discussion must be pursued in court. Nevertheless we do feel that this case raises important and complex issues. It goes to the heart of how search engines work: showing snippets of text and linking users to the websites where the information resides is what makes them so useful. And after all, it’s not just users that benefit from these links but publishers do too — because we drive huge amounts of web traffic to their sites.

Google evidently believes it can prevail at the November hearing. Posting the initial judgment merely to halt the mounting fines could be read as an admission of guilt, something the company strenuously denies.

Whetstone also pointed to the Robots Exclusion Standard, which webmasters around the world know and implement through robots.txt files.

“If publishers don’t want their websites to appear in search results (most do) the robots.txt standard (something that webmasters understand) enables them to prevent automatically the indexing of their content,” Whetstone wrote. “It’s nearly universally accepted and honored by all reputable search engines.”

Robots.txt has been a de facto standard for nearly as long as spiders have crawled and indexed websites. It seems Copiepresse and its affiliated publications could easily have placed robots.txt files on their sites and avoided the need for any legal action.
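
To illustrate, here is a minimal sketch of such a file. Served from a site's root (the address here is hypothetical, e.g. http://www.example.be/robots.txt), these two lines ask every compliant crawler to stay away from the entire site:

    # Ask all well-behaved crawlers to skip the whole site
    User-agent: *
    Disallow: /

Swapping the asterisk for Googlebot would exclude only Google's crawler, and the Disallow line can name individual paths (say, a hypothetical /archives/ directory) instead of the site root. Well-behaved spiders fetch this file before requesting any other page, which is why the standard works without any enforcement mechanism.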

