Google enhances ad safety with large language models (LLMs)

9to5Google · March 27, 2024, 03:00 PM UTC

Summary: Google is deploying large language models (LLMs) in its ads safety systems to strengthen enforcement against violative content. LLMs can rapidly review and interpret content, helping to distinguish legitimate businesses from scams. In 2023, Google blocked over 5.5 billion ads, removed 12.7 million advertiser accounts, and enforced a range of policies, including those covering misrepresentation and financial services. It also targeted deepfake scams featuring public figures.

Article metrics
Significance: 6.3
Scale: 9.0
Magnitude: 8.0
Potential: 8.5
Novelty: 7.0
Actionability: 6.0
Immediacy: 8.0
Positivity: 7.5
Credibility: 8.0

Timeline: