With an AI-powered chatbot, Meta is heading for disaster

The craze for “talking robots” of the mid-2000s may seem long gone. But on Friday, August 5, Meta showed that work on this technology continues with the introduction of BlenderBot 3, its new “state-of-the-art chatbot.” According to the company, this text-based bot can “speak naturally with people” on “almost any topic,” a promise made time and time again by the creators of chatbots but never fulfilled.

BlenderBot 3 is still a prototype and can be accessed for free (only in the US for now), so that a large number of volunteer testers can help it improve through a conversation-rating system. It has therefore been questioned at length by journalists and other curious users since it went online, and the first assessments sound like a familiar refrain: BlenderBot 3 was quick to criticize Facebook, mock Mark Zuckerberg’s clothing style, and then veer into conspiratorial, even anti-Semitic remarks. Before launching the tool, Meta warned users that the chatbot “is likely to make untrue or offensive statements,” while making clear in its press release that it had put safeguards in place to filter out the worst…

Meta’s chatbot, the first critic of Meta

BlenderBot is a long-term project. Its researchers are not trying to build a functional, marketable tool in the short term; they want to advance the state of the art in chatbots. Concretely, their tool aims to incorporate features of human conversation (such as personality traits) into its responses. Equipped with a long-term memory, it should be able to adapt to the user as the exchanges progress. In a press release, the researchers specified that BlenderBot should help chatbots develop conversational skills while “avoiding learning unhelpful or dangerous responses.”
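To make the “long-term memory” idea concrete, here is a minimal sketch of one way such a mechanism could work: facts gleaned from the user are stored and re-injected into later prompts. Everything here (the `remember` and `build_prompt` helpers, the prompt format) is a hypothetical illustration, not BlenderBot’s actual implementation.

```python
# Hypothetical sketch of a long-term conversational memory: the bot keeps
# facts it learns about each user and prepends them to future prompts.
from collections import defaultdict

memory: dict[str, list[str]] = defaultdict(list)

def remember(user_id: str, fact: str) -> None:
    """Store a fact gleaned from conversation with this user."""
    memory[user_id].append(fact)

def build_prompt(user_id: str, message: str) -> str:
    """Inject remembered facts so replies can adapt to the user over time."""
    facts = "; ".join(memory[user_id]) or "nothing yet"
    return f"Known about this user: {facts}\nUser says: {message}\nReply:"

remember("alice", "prefers short answers")
print(build_prompt("alice", "What's the weather like?"))
```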

The problem, as always, is that the chatbot searches the Internet for material to feed the conversation, and it does not filter it enough. Asked about CEO Mark Zuckerberg, it can reply: “He is a good businessman, but his business practices are not always ethical. It is funny that he has all this money but still wears the same clothes!”, as reported by Business Insider. Asked about its parent company, it readily recalls the myriad scandals that have plagued Facebook (and partly justified its change of identity). It even claims its life has been much better since it deleted Facebook.

If the bot is so negative about Meta, it is simply because it draws on the most popular search results about Facebook, which revolve around the company’s long history of setbacks. Through this process it perpetuates a bias, one that happens not to work in its creator’s favor. And these drifts are not limited to amusing jabs, which is where the problem lies. To a journalist from The Wall Street Journal, BlenderBot claimed that Donald Trump was still president and “will remain so even after the end of his second term in 2024,” thereby relaying a conspiracy theory. On top of that, Vice notes that BlenderBot’s answers are “generally neither realistic nor good” and that it “often changes the subject” abruptly.
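The mechanism at fault is easy to picture. The sketch below shows a naive retrieval step of the kind described above: the bot queries a search engine and conditions its reply on whatever snippets rank highest. Both `web_search` and `generate_reply` are invented stand-ins, not Meta’s published pipeline; the point is only that popularity ranking, not fact-checking, decides what the model sees.

```python
# Naive retrieval-augmented reply, for illustration only.
# `web_search` and `generate_reply` are hypothetical stand-ins.

def web_search(query: str, k: int = 3) -> list[str]:
    """Stand-in for a search API: returns the k most *popular* snippets,
    with no filtering for accuracy or tone."""
    fake_index = {
        "mark zuckerberg": [
            "Facebook hit by yet another privacy scandal...",
            "Cambridge Analytica fallout continues...",
            "Zuckerberg grilled by Congress over data leaks...",
        ],
    }
    return fake_index.get(query.lower(), ["(no results)"])[:k]

def generate_reply(context: str, evidence: list[str]) -> str:
    """Stand-in for the language model: here it just parrots its evidence,
    which is how a scandal-heavy results page colors the final answer."""
    return f"From what I read: {evidence[0]}"

def respond(user_message: str) -> str:
    snippets = web_search(user_message)            # retrieval step
    return generate_reply(user_message, snippets)  # generation step

print(respond("Mark Zuckerberg"))
```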

The story repeats itself

These slips, from amusing to dangerous, have a feeling of déjà vu. In 2016, Microsoft launched Tay, a Twitter chatbot that was supposed to learn in real time from its discussions with users. It failed: within hours, the bot was relaying conspiracy theories as well as racist and sexist remarks. Less than 24 hours later, Microsoft pulled Tay offline and apologized profusely for the fiasco.

Meta has nonetheless tried a similar approach, drawing on a massive language model with more than 175 billion parameters. This model was trained on giant text corpora (mostly publicly available), with the goal of distilling an understanding of language into mathematical form. For example, one of the datasets built by the researchers contains 20,000 conversations on more than 1,000 different topics.
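To picture what such a dataset looks like once flattened into training examples, here is a small sketch: every prefix of a dialogue becomes the context, and the following turn becomes the target reply. The sample conversation and field names are invented for illustration; this is not Meta’s actual data format.

```python
# Turning topical conversations into (context -> next reply) training pairs.
# The sample data and field names below are invented for illustration.

conversations = [
    {
        "topic": "cooking",
        "turns": [
            "How do I keep pasta from sticking?",
            "Use plenty of water and stir during the first minute.",
            "Should I add oil to the water?",
            "No, oil mostly keeps the sauce from clinging later.",
        ],
    },
    # ...the real corpus holds 20,000 conversations on 1,000+ topics
]

def to_training_pairs(convo: dict) -> list[tuple[str, str]]:
    """Each dialogue prefix is the context; the next turn is the target."""
    turns = convo["turns"]
    return [("\n".join(turns[:i]), turns[i]) for i in range(1, len(turns))]

pairs = [pair for convo in conversations for pair in to_training_pairs(convo)]
print(len(pairs), "training pairs from", len(conversations), "conversation")
```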

The problem with these large models is that they reproduce the biases of the data they are fed, often with an amplifying effect. Meta was aware of these limitations: “Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback.” Clearly, those additional safeguards are not having the desired effect.

Faced with the repeated failures of large language models, and a slew of abandoned projects, the industry has fallen back on less ambitious but more reliable chatbots. Most customer-service bots today follow a predetermined decision tree and never leave it, even if that means telling the customer they have no answer or redirecting them to a human agent. The technical challenge then shifts to understanding the questions users ask and matching them to the most relevant scripted answers, as the sketch below illustrates.
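As a point of contrast with BlenderBot’s free-form generation, here is a toy version of that decision-tree pattern. The tree, node names, and prompts are invented; real systems are larger but have the same shape: the bot only ever says scripted lines, and anything it does not recognize falls through to a human.

```python
# Toy decision-tree bot: it only ever says scripted lines and escalates
# anything it does not recognize. Node names and prompts are invented.

TREE = {
    "start":    {"prompt": "Is your question about billing or delivery?",
                 "options": {"billing": "billing", "delivery": "delivery"}},
    "billing":  {"prompt": "Would you like a copy of your last invoice?",
                 "options": {"yes": "send_invoice", "no": "handoff"}},
    "delivery": {"prompt": "Please enter your order number for tracking.",
                 "options": {}},
    "send_invoice": {"prompt": "Done! Your invoice is on its way.",
                     "options": {}},
    "handoff":  {"prompt": "I don't have an answer for that; "
                           "connecting you to a human agent.",
                 "options": {}},
}

def step(node: str, user_input: str) -> str:
    """Follow the scripted branch if the input matches; otherwise escalate."""
    options = TREE[node]["options"]
    if not options:                        # leaf node: conversation ends here
        return node
    return options.get(user_input.strip().lower(), "handoff")

node = step("start", "billing")            # recognized -> "billing"
print(TREE[node]["prompt"])
node = step(node, "tell me a joke")        # unrecognized -> "handoff"
print(TREE[node]["prompt"])
```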

A transparent Meta

While BlenderBot 3’s success is more than questionable, Meta does at least show a rare degree of transparency, a quality AI-powered tools typically lack. Users can click on the chatbot’s answers to see, in more or less detail, where its information came from. The researchers are also sharing the code, the data, and the model that power the chatbot.

In The Guardian, a Meta spokesperson also clarified that “everyone who uses BlenderBot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements.”

In other words, BlenderBot shows that the paradigm of sentient-seeming chatbots capable of expressing themselves like humans is still a long way off, and that many technical barriers remain to be overcome. But Meta has taken enough precautions in its approach that, this time, the story should not turn into a scandal.