
Businesses, users, and experts defend Big Tech against algorithm lawsuits.

On Thursday, a diverse group of businesses, internet users, academics, and human rights experts defended Big Tech’s liability shield in a pivotal Supreme Court case over YouTube’s algorithms, with some arguing that stripping federal legal protections from AI-driven recommendation engines would have sweeping consequences for the open internet.

Among those weighing in at the Court were major tech companies such as Meta, Twitter, and Microsoft, as well as some of Big Tech’s most vocal critics, including Yelp and the Electronic Frontier Foundation. Reddit and a group of volunteer Reddit moderators also participated in the case.

What happened. The controversy stems from the Supreme Court case Gonzalez v. Google, which centers on whether Google can be held liable for recommending pro-ISIS content to users through its YouTube algorithm.

Google has claimed that Section 230 of the Communications Decency Act shields it from such litigation. However, the plaintiffs in the case, family members of a victim killed in a 2015 ISIS attack in Paris, argue that Google can be held liable under a US anti-terrorism law for the content YouTube’s recommendation algorithm promotes.

Reddit’s filing read:

“The entire Reddit platform is built around users ‘recommending’ content for the benefit of others by taking actions like upvoting and pinning content. There should be no mistaking the consequences of the petitioners’ claim in this case: their theory would dramatically expand Internet users’ potential to be sued for their online interactions.”

Yelp steps in. Yelp, a company with a long history of conflict with Google, has argued that its business model depends on providing accurate, non-fraudulent reviews to its users. It also warned that a ruling holding recommendation algorithms liable could severely disrupt Yelp’s operations by forcing it to stop sorting reviews, leaving it unable to filter out those that are fake or manipulative.

Yelp wrote:

“If Yelp could not analyze and recommend reviews without facing liability, those costs of submitting fraudulent reviews would disappear. If Yelp had to display every submitted review … business owners could submit hundreds of positive reviews for their own business with little effort or risk of a penalty.”

Meta’s involvement. Facebook parent Meta argued in its legal submission that if the Supreme Court were to reinterpret Section 230 to protect platforms’ ability to remove content but not to recommend it, the change would raise significant questions about what it means to recommend something online.

Meta representatives stated:

“If merely displaying third-party content in a user’s feed qualifies as ‘recommending’ it, then many services will face potential liability for virtually all the third-party content they host, because nearly all decisions about how to sort, pick, organize, and display third-party content could be construed as ‘recommending’ that content.”

Human rights advocates intervene. New York University’s Stern Center for Business and Human Rights argued that it would be extremely difficult to craft a rule that singles out algorithmic recommendations for liability, and that such a rule could suppress or erase a significant amount of valuable speech, particularly speech from marginalized or minority groups.

Why we care. The outcome of this case could have significant implications for how tech companies operate. If the court rules that companies can be held liable for the content their algorithms recommend, it could reshape how recommendation systems are designed and run.

That could mean more cautious content curation, fewer recommendations surfaced to users, and greater legal costs and uncertainty for these companies.



About the author

Nicole Farley is an editor for Search Engine Land covering all things PPC. In addition to being a Marine Corps veteran, she has an extensive background in digital marketing, an MBA and a penchant for true crime, podcasts, travel, and snacks.
