2019-10-15 17:18
Bad actors are increasingly using advanced methods to generate fake news and pass it off as legitimate. AI-based text generators, such as OpenAI’s GPT-2 model, which imitate human writing, play a big part in this.
To counter this, researchers have developed tools to detect artificially generated text. However, new research from MIT suggests there may be a fundamental flaw in how these detectors work. Traditionally, these tools analyze a text’s writing style to determine whether it was written by a human or a bot. They assume that text written by humans is always legitimate and that the text…
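The flaw described above can be sketched as a toy decision rule. Everything here is illustrative, not how any real detector is implemented: a style-based detector answers only "human or machine?", and the flawed step is treating that answer as a verdict on truthfulness.

```python
# Toy sketch of the flawed assumption: provenance (human vs. machine)
# is conflated with veracity (legitimate vs. fake).
# The function name and the probability input are hypothetical.

def flag_as_fake(machine_probability: float) -> bool:
    """Label text as fake purely by provenance: machine-written == fake."""
    # Step 1: the detector decides whether the text looks machine-generated.
    is_machine_written = machine_probability > 0.5
    # Step 2 (the flaw): machine-written is equated with fake,
    # and human-written is equated with legitimate.
    return is_machine_written

# A truthful statement produced by a model gets flagged as fake:
print(flag_as_fake(0.9))   # True
# A false claim typed by a human sails through as legitimate:
print(flag_as_fake(0.1))   # False
```

The sketch makes the failure mode concrete: neither branch ever inspects what the text actually claims, so accuracy at spotting bots says nothing about accuracy at spotting misinformation.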
This story continues at The Next Web.