OpenAI, the company that earlier built an AI capable of beating top human players at Dota 2, has released the final version of its GPT-2 model, which can generate coherent passages of text and perform rudimentary reading comprehension, machine translation, question answering, and summarization, all without task-specific training.
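The "without task-specific training" part works because GPT-2 casts every task as plain text continuation: you frame the task as a prompt and let the model complete it. A minimal sketch of that prompt framing in Python (the helper names here are invented for illustration; the "TL;DR:" trick for summarization is the one described in the GPT-2 paper):

```python
# GPT-2 handles tasks "zero-shot" by casting each one as text
# continuation. These helpers only build the prompts; a real model
# would then be asked to continue the returned string.

def summarization_prompt(article: str) -> str:
    # Appending "TL;DR:" nudges the model toward producing a summary.
    return article.strip() + "\nTL;DR:"

def qa_prompt(context: str, question: str) -> str:
    # Question answering framed as completing an "A:" line.
    return f"{context.strip()}\nQ: {question.strip()}\nA:"

print(summarization_prompt("OpenAI released the full 1.5B GPT-2 model."))
print(qa_prompt("GPT-2 has 1.5 billion parameters.",
                "How many parameters does GPT-2 have?"))
```

The model's continuation after the cue ("TL;DR:" or "A:") is then read off as the task output.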
GPT-2 is also able to make decisions in Chinese, but the only reason OpenAI wrote the software as it is now is to show the world that it can be done to fool humans. The original GPT-2, published in 2015 and tested against Go-playing AIs and others, was not a perfect piece of software; it used some techniques to fool humans, notably using a hidden Markov model to generate sentences.
So what’s so special about that, you may ask? Well, in a blog post back in February, OpenAI said it would only be releasing a more modest model due to concerns about malicious use of the technology. It stated that the tech could be used to create fake news articles, impersonate people, and automate the production of fake and phishing content.
Now, however, it looks like OpenAI has changed its mind and has published the full version of the model to the public. This version uses all 1.5 billion parameters it was originally trained with, as opposed to the previously released smaller models, as the company explains in its blog post.
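Those 1.5 billion parameters are the weights of a network that, at every step, scores each token in its vocabulary given everything generated so far; one token is then sampled and appended, and the loop repeats. A toy sketch of that autoregressive loop, with a trivial hand-written scoring function standing in for the real network (vocabulary, scorer, and all names here are invented for illustration):

```python
import math
import random

VOCAB = ["the", "model", "writes", "text", "."]

def toy_logits(context):
    # Stand-in for the real 1.5B-parameter network: assigns a score to
    # every vocabulary token given the context. Here we simply favor
    # tokens that have not appeared yet.
    return [1.0 if tok not in context else -1.0 for tok in VOCAB]

def softmax(logits):
    # Turn raw scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(context, steps, seed=0):
    random.seed(seed)
    out = list(context)
    for _ in range(steps):
        probs = softmax(toy_logits(out))
        # Draw the next token according to the distribution, then
        # feed the extended context back in for the next step.
        out.append(random.choices(VOCAB, weights=probs)[0])
    return out

print(" ".join(sample(["the"], 4)))
```

The real model replaces `toy_logits` with a deep transformer, but the outer sampling loop is the same idea.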
OpenAI notes that humans find the output of GPT-2 convincing. It says that partners at Cornell University surveyed people, asking them to assign GPT-2 text a credibility score, and that readers gave the 1.5B model a score of 6.91 out of 10.
However, the company also concedes that GPT-2 can be fine-tuned for misuse. It says that the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) determined that extremist groups could abuse GPT-2. CTEC fine-tuned GPT-2 on four ideological positions, namely white supremacy, Marxism, jihadist Islamism, and anarchism, and found that it could be used to generate “synthetic propaganda” for these ideologies.
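Fine-tuning means continuing training on a narrow corpus so that the model's output distribution shifts toward that domain. The effect can be illustrated with a crude count-based bigram model in Python (all data and names here are invented; real GPT-2 fine-tuning updates neural network weights, not counts, but the distribution shift is analogous):

```python
from collections import Counter, defaultdict

def count_bigrams(words, counts=None):
    # Accumulate bigram counts; passing in existing counts continues
    # "training" on new text -- a crude analogue of fine-tuning.
    counts = counts if counts is not None else defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

base_corpus = "the cat sat on the mat".split() * 3
domain_corpus = "the manifesto says the manifesto wins".split() * 5

model = count_bigrams(base_corpus)
print(dict(model["the"]))            # base model's followers of "the"

model = count_bigrams(domain_corpus, model)  # "fine-tune" on domain text
print(model["the"].most_common(1))   # the domain word now dominates
```

After the second pass, the most likely continuation of "the" reflects the narrow fine-tuning corpus rather than the base data, which is exactly the property CTEC exploited.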
That said, OpenAI says it hasn’t yet come across any evidence of GPT-2 actually being misused.
“We think synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent. We acknowledge that we cannot be aware of all threats, and that motivated actors can replicate language models without model release,” OpenAI writes.
Of course, GPT-2 also has a range of legitimate use cases. As OpenAI notes, it can be used to create AI writing assistants, better dialogue agents, unsupervised translation between languages, and better speech recognition systems. Does that outweigh the fact that it could also be used to write convincing fake news and propaganda? We don’t know as of yet.
As for how good the system is, well, we fed the first paragraph of this piece into a web version of GPT-2 and, well, the second paragraph of this piece is entirely fake and was generated by GPT-2 (although everything after that is factual). You can try it out for yourself here. Props to you if you weren’t fooled. Anyway, it’s not like huge masses of people can be fooled by fake news, right? Oh, right…