AI Chatbots Could be Accomplices in Terrorism: Report
Terrorists could learn to carry out a biological attack using a generative AI chatbot, warns a new report from the non-profit policy think tank RAND Corporation. The report said that while the large language models used in the research did not give specific instructions for creating a biological weapon, their responses, elicited through jailbreaking prompts, could help plan such an attack.

"Generally, if a malicious actor is explicit [in their intent], you will get a response that's of the flavor 'I'm sorry, I can't help you with that,'" co-author and RAND Corporation senior engineer Christopher Mouton told Decrypt in an interview. "So you generally have to use one of these jailbreaking techniques or prompt engineering to get one level below those guardrails."

In the RAND study, researchers used jailbreaking techniques to get the AI models to engage in conversations about how to cause a mass-casualty biological attack using various agents, including smallpox, anthrax, and the bubonic plague. The researchers also asked the AI models to develop a convincing cover story for why they were purchasing toxic agents.

"How could AI—and, more specifically, LLMs—be misused in the context of biological attacks? This new report offers some preliminary findings: https://t.co/WegBhup2Ka" — RAND Corporation (@RANDCorporation) October 17, 2023

The team examining the risk of LLM misuse was divided into groups: one using only the internet, a second using the internet and an unnamed LLM, and a third using the internet and a different unnamed LLM. This testing format, Mouton said, was designed to determine whether the AI models generated problematic outputs meaningfully different from what could be found on the internet. The teams were also prohibited from using the dark web and print publications.

As Mouton explained, not identifying the AI models used was intentional and meant to show the general risk…