My cell phone rings at 6:30 on a Friday evening. It's the CEO of a client company and she is panicked. Hysterical. It turns out she was doing some "old-fashioned" investigating by performing Google searches on herself and the company. The results are bad. Very bad. And the client is asking for my immediate assistance with damage control.
A number of websites today host user-generated content, giving users the opportunity to expose negative information about third-party individuals or businesses. In some cases, this can be helpful. For example, websites that allow users to post restaurant or product reviews (or complaints), such as Yelp and Wired, provide valuable information to consumers and potential consumers. They might also benefit the subject restaurant or product manufacturer by identifying areas of weakness.
On the other hand, some websites encourage users to post content that appears to be purely vengeful. For example, www.jerk.com touts itself as "an online Web application created to help keep consumers informed." It gathers information from social and other networking sites (for example, Facebook and LinkedIn) to create profiles for individuals. Users can search the profiles and identify "jerks." Users can also post details supporting their basis for labeling someone a "jerk." Similarly, sites like www.cheaterville.com, www.reportyourex.com and www.datingpsychos.com allow users to post names, stories and other potentially damaging information about third parties.
One thing most of these user-generated sites have in common is a policy or agreement between the site operator and the users. This may be styled "Terms and Conditions," "Terms of Service" or "Terms of Use," but whatever it's called, the user agreement typically states that users agree not to post any content that violates the rights of others (including intellectual property rights) or that is defamatory, false, abusive, obscene, profane or otherwise objectionable. The user agreement will almost always state that any violation of its terms may result in termination of the user's account or removal of the user's posts. Of course, the user agreement also requires users to indemnify the site against liability resulting from their activities, whether or not those activities violate the user agreement or any laws.
It is important to note, however, that while substantially all of these user-generated sites have user agreements in place, they typically have no initial vetting process or filter between users and posts. Users can post whatever they want, instantly. Whether users comply with the user agreement is on their honor, and it is largely up to third parties to police users' compliance and report failures to comply. Further, once something is posted, it can be nearly impossible to take down, even for the poster. In some cases, even if the post can be taken down, the damage is already done. In my client's case, the potential for irreversible damage, to both the company and its executives, is great.
Please understand, I am not an online reputation management company (in our user-content-driven, social-networking day and age, people have created businesses around managing the Internet reputations of others—see, for example, www.removenames.com). I am an attorney. And now I must find a legal basis (or bases) for removing from cyberspace content my client does not like.
My first stop is the user agreement and its specific terms. One of the most certain ways to demonstrate a violation of a user agreement is to show there is an infringement of a party's intellectual property rights. The reason this is more certain than other bases for takedown is that there is—at least on the surface—less ambiguity. The site operators will typically refrain from playing jury. Indeed, a number of user-generated sites have online forms that can be completed to complain about an intellectual property violation. A copyright violation may come in the form of text or photos taken from the client company's own website, or from sites like LinkedIn. Under the Digital Millennium Copyright Act of 1998, a complaining party can send a DMCA notice to the site, or complete an online DMCA complaint form where the relevant site provides one, complaining of copyright infringement. In most circumstances, the disputed content will be removed immediately and the burden will shift to the poster to demonstrate that the content was removed improperly and should remain visible.
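As a point of reference, a compliant notice under Section 512(c)(3) of the DMCA generally must include: identification of the copyrighted work claimed to be infringed; identification of the allegedly infringing material, with enough detail for the site to locate it; the complaining party's contact information; a statement of a good-faith belief that the use is not authorized by the copyright owner, its agent or the law; a statement, under penalty of perjury, that the notice is accurate and that the sender is authorized to act on the owner's behalf; and a physical or electronic signature.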
A trademark violation may come in the form of user posts that show the client company's name or logo. Some sites do not have a specific mechanism in place for reporting trademark infringement, so this may have to be done through a cease-and-desist letter or by populating a copyright complaint form with a trademark infringement complaint.
Unfortunately, an intellectual property violation—and removal of the infringing content—may not afford the client total relief. For example, the site operator, upon receipt of a complaint, may remove or hide only those items allegedly violating the client's intellectual property rights. Content the client considers more offensive could remain, and that content may be more challenging to take down.
Even where a site's user agreement prohibits defamatory, false, abusive, obscene, profane or "otherwise objectionable" posts, it can be an uphill battle to have such offending posts taken down. This is especially true of statements that are allegedly defamatory. First, without some official order or decision declaring certain content defamatory, a site may be reluctant to remove the content on that basis. Second, clients should remember that truth is a strong defense to defamation: the mere fact that posted content is negative does not make it defamatory. Requesting the takedown of material that is not squarely defamatory, or that is true but negative and damaging, may require some creativity. There is inherent ambiguity in requesting the removal of content that is allegedly "otherwise objectionable," and site operators refer to these issues as "he said, she said" disputes. Fortunately, site operators tend to be receptive to well-reasoned, supported (and sometimes persistent) requests to take down posts that violate the user agreement.
It is apparent from a review of user-generated sites' user agreements that the site operators are extraordinarily protective of their users' freedom of speech. At the same time, most of these user agreements also contemplate the Communications Decency Act (codified in Title 47 of the U.S. Code) and, more specifically, its Section 230. The act shields from liability providers and users of an "interactive computer service" (e.g., a website with user-generated content) who publish information provided by others. Specifically, Section 230(c)(1) of the act states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Interestingly, site operators (or their lawyers) will often cite this section of the act in response to a takedown request based on a violation of the user agreement. The site may refuse to take down the disputed content and cite Section 230 of the act as a basis. In effect, they are saying, "We are not the poster of the content and, therefore, we are not liable for the content and do not have to take it down if we don't want to." This is essentially correct, but invoking the act this way reflects a fundamental misunderstanding of the act and its purpose.
Section 230(c) bears the heading "Protection for 'Good Samaritan' blocking and screening of offensive material." While Section 230(c)(1) protects a site from treatment as a publisher or speaker, Section 230(c)(2) facilitates (and shields from liability for) the good-faith removal of user-generated content that may be considered, inter alia, harassing or otherwise objectionable, whether or not such material is constitutionally protected. Under the act, a user-generated site also would not be civilly liable for taking action to enable, or make available to information content providers or others, the technical means to restrict access to the objectionable material. Thus, while many sites are using the act as a basis to refuse takedown of objectionable material, they are overlooking (or misunderstanding) the fact that the act equally protects them when they do agree to remove content.
Negative information and reviews have never before been so accessible. Further, social networking has created an unprecedented ability for such negative information to spread like wildfire. This often leaves companies feeling exposed and vulnerable.
The client looked to me for help because the perception is that an intellectual property attorney has the tools to remove unfavorable content from cyberspace. This is true, but not by virtue of some magical IP wand I can wave at the Internet to make things disappear. The tools are the ability to analyze and dissect the content of the objectionable posts and to determine the strongest basis for removal of their component parts. Another often-overlooked but valuable tool is an accurate understanding of the Good Samaritan provisions of the act, and how they can be used to persuade site operators or their counsel that certain content violates their own user agreement and should be removed.
Samuel W. Apicelli is a partner in the Philadelphia office of Duane Morris who concentrates his practice in all aspects of patent, trademark and copyright procurement and enforcement. He provides strategic guidance to executive management on the development and commercialization of products and services and the implementation of patent strategies.

Christiane Schuman Campbell is an associate in the Philadelphia office of the firm who practices in the area of intellectual property law. She has extensive experience with trademark maintenance and enforcement, including enforcement over the Internet.
Reprinted with permission from The Legal Intelligencer, © ALM Media Properties LLC. All rights reserved.