Report reveals potential manipulation risks in OpenAI's ChatGPT search tool
A new report from The Guardian reveals that OpenAI's ChatGPT search tool may be vulnerable to manipulation through hidden content. This technique, known as "prompt injection," can alter the chatbot's responses by embedding instructions in web pages. During testing, ChatGPT provided a positive review of a camera from a fake product page, despite negative reviews present. The report highlights concerns that malicious users could create deceptive websites to influence the AI's assessments. Cybersecurity researcher Jacob Larsen noted that if the search tool is released in its current state, there is a high risk of users being misled. OpenAI has stated that it is working to address these vulnerabilities before wider access is granted.