AI-generated texts could increase people’s exposure to threats

Nearly universal access to models that deliver human-sounding text in seconds presents a turning point in human history, according to new research from WithSecure (formerly known as F-Secure Business).

The research details a series of experiments conducted using GPT-3 (Generative Pre-trained Transformer 3), a family of language models that use machine learning to generate text.

The experiments used prompt engineering, the practice of crafting inputs to a large language model so that it yields desirable or useful results, to produce a variety of content the researchers deemed harmful.

Numerous experiments assessed how changes in the inputs to currently available models affected the synthetic text output. The goal was to identify how AI language generation can be misused through malicious and creative prompt engineering, in the hope that the research can be used to direct the creation of safer large language models in the future.
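The core idea behind these experiments can be sketched in a few lines of code. The sketch below is illustrative only and is not taken from the WithSecure paper: the hypothetical `build_prompt()` function shows how prompt engineering steers a model purely by varying the input text (instructions, style cues, examples), while the model itself stays unchanged.

```python
# Illustrative sketch of prompt engineering (hypothetical helper, not
# from the WithSecure research): the prompt is the only "programmable"
# surface, so experiments vary its parts and compare the outputs.

def build_prompt(task: str, style: str = "", examples: list[str] = ()) -> str:
    """Assemble a prompt from reusable parts; only the text changes,
    never the model's weights."""
    parts = []
    if style:
        # A style instruction steers tone and register.
        parts.append(f"Write in the following style: {style}.")
    for ex in examples:
        # Few-shot examples steer format and content.
        parts.append(f"Example: {ex}")
    parts.append(task)
    return "\n".join(parts)

# The same task, steered two different ways through the input alone:
neutral = build_prompt("Summarize the quarterly report.")
styled = build_prompt("Summarize the quarterly report.",
                      style="an urgent email from the CFO")
```

In an actual experiment, each assembled prompt would be sent to the model and the resulting text compared across variants; the misuse potential the researchers studied comes from the same mechanism, with the style cues and examples chosen maliciously.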

The experiments covered phishing and spear-phishing, harassment, social validation for scams, the appropriation of a written style, the creation of deliberately divisive opinions, using the models to create prompts for malicious text, and fake news.

“The fact that anyone with an internet connection can now access powerful large language models has one very practical consequence: it’s now reasonable to assume any new communication you receive may have been written with the help of a robot,” said WithSecure Intelligence Researcher Andy Patel, who spearheaded the research. “Going forward, AI’s use to generate both harmful and useful content will require detection strategies capable of understanding the meaning and purpose of written content.”

The responses from the models in these use cases along with the general development of GPT-3 models led the researchers to several conclusions, including (but not limited to):

Prompt engineering will develop as a discipline, as will malicious prompt creation.
Adversaries will develop capabilities enabled by large language models in unpredictable ways.
Identifying malicious or abusive content will become more difficult for platform providers.
Large language models already give criminals the ability to make the targeted communications used in an attack more effective.

“We began this research before ChatGPT made GPT-3 technology available to everyone,” Patel said. “This development increased our urgency and efforts. Because, to some degree, we are all Blade Runners now, trying to figure out if the intelligence we’re dealing with is ‘real,’ or artificial.”

The full research is now available at https://labs.withsecure.com/publications/creatively-malicious-prompt-engineering.

This work was supported by CC-DRIVER, a project funded by the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 883543.