A recent experiment conducted by Singapore's Government Technology Agency (GovTech) found that an artificial intelligence (AI) system wrote more convincing phishing emails than humans. The study was presented at the Black Hat and DEF CON security conferences held in Las Vegas earlier this month.
Two hundred employees were tested with two phishing emails, one generated by the AI system and one written by humans. Surprisingly, most of the employees fell for the phishing email generated by the AI.
Researchers built the AI program using OpenAI's deep learning model GPT-3 alongside other AI technologies. Eugene Lim, a cybersecurity specialist at the agency, noted that training such models to a high level of accuracy costs millions of dollars.
“But once you put it on AI-as-a-service, it costs a couple of cents, and it’s really easy to use—just text in, text out. You don’t even have to run code, you just give it a prompt, and it will give you output. So that lowers the barrier of entry to a much bigger audience and increases the potential targets for spearphishing.”
The AI pipeline focuses on personality analysis when generating phishing emails: OpenAI's GPT-3 analyses an individual's tendencies to react to certain content and generates output accordingly.
Leveraging this capability, the researchers developed tools for creating phishing emails that, to a certain extent, outperformed human-written ones.
OpenAI officials said, “We grant access to GPT-3 through our API, and we review every production use of GPT-3 before it goes live. We impose technical measures, such as rate limits, to reduce the likelihood and impact of malicious use by API users.”
They further mentioned that the misuse of language models is an industry-wide issue, and that they are working diligently to deploy AI technologies safely and responsibly.