The AI That Can Write A Fake News Story From A Handful Of Words

OpenAI, an artificial intelligence
research group co-founded by billionaire Elon Musk, has demonstrated a piece of software that can produce authentic-looking fake news
articles after being given just a few pieces of information.

In an example published Thursday by OpenAI, the system was given some sample text: "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown." From this, the software was able to generate a convincing seven-paragraph news story, including quotes from government officials, with the only caveat being that it was entirely untrue.

"The texts that they are able to generate from prompts are fairly stunning," said Sam Bowman, a computer scientist at New York University who specializes in natural language processing and who was not involved in the OpenAI project but was briefed on it.
"It's able to do things that are qualitatively much more sophisticated than anything we've seen before."OpenAI is aware of the concerns
around fake news, said Jack Clark, the organization's policy director
"One of the not so good purposes would be disinformation because it can produce things that sound coherent but which are not accurate," he
said.As a precaution, OpenAI decided not to publish or release the most sophisticated versions of its software
It has, however, created a tool that lets policymakers, journalists, writers and artists experiment with the algorithm to see what kind of
text it can generate and what other sorts of tasks it can perform.The potential for software to be able to be able to near-instantly create
fake news articles comes during global concerns over technology's role in the spread of disinformation
European regulators have threatened action if tech firms don't do more to prevent their products helping sway voters, and Facebook has been
working since the 2016 U.S
election to try and contain disinformation on its platform.Clark and Bowman both said that, for now, the system's abilities are not
consistent enough to pose an immediate threat
"This is not a shovel-ready technology today, and that's a good thing," Clark said.Unveiled in a paper and a blog post Thursday, OpenAI's
creation is trained for a task known as language modeling, which involves predicting the next word of a piece of text based on knowledge of
all previous words, similar to how auto-complete works when typing an email on a mobile phone
It can also be used for translation, and open-ended question answering.One potential use is helping creative writers generate ideas or
dialog, Jeff Wu, a researcher at OpenAI who worked on the project, said
Others include checking for grammatical errors in texts, or hunting for bugs in software code
The system could be fine-tuned to summarize text for corporate or government decision makers further in the future, he said.In the past
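To make the language-modeling task concrete, here is a minimal sketch. It is not OpenAI's system, which is a large neural network trained on vast amounts of web text; it is a deliberately tiny bigram frequency model in Python, and the toy corpus and the `predict_next` helper are both hypothetical. The point is only to illustrate the task the article describes: given the words so far, predict the most likely next word, auto-complete style.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; real systems train on enormous text collections.
corpus = (
    "the train was stolen today . "
    "the train was found today . "
    "officials said the train was safe . "
).split()

# Count how often each word follows each preceding word.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the toy corpus."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate text one predicted word at a time, like auto-complete.
text = ["the"]
for _ in range(5):
    nxt = predict_next(text[-1])
    if nxt is None:
        break
    text.append(nxt)

print(" ".join(text))  # e.g. "the train was stolen today ."
```

Replacing the frequency table with a neural network that scores every candidate next word, and the one-word context with hundreds of preceding words, gives the family of models the article is describing.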
In the past year, researchers have made a number of sudden leaps in language processing.
In November, Alphabet Inc.'s Google unveiled a similarly multi-talented algorithm called BERT that can understand and answer questions. Earlier, the Allen Institute for Artificial Intelligence, a research lab in Seattle, achieved landmark results in natural language processing with an algorithm called Elmo. Bowman said BERT and Elmo were "the most impactful development" in the field in the past five years. By contrast, he said OpenAI's new algorithm was "significant" but not as revolutionary as BERT.

Although Musk co-founded OpenAI, he stepped down from its board last year.
He'd helped kickstart the non-profit research organization in 2016 along with Sam Altman and Jessica Livingston, the Silicon Valley entrepreneurs behind startup incubator Y Combinator.
Other early backers of OpenAI include Peter Thiel and Reid Hoffman.