A new prototype chatbot from Meta has told journalists that Mark Zuckerberg takes advantage of the company's users to make money.
According to Meta, the chatbot uses artificial intelligence and can converse on “almost any issue.”
When asked what it thought of the CEO and founder of the business, the chatbot responded, “Our country is split and he didn’t help that at all.”
Meta has said the chatbot is only a prototype and may give rude or offensive responses.
According to a Meta spokesman, “everyone who uses Blender Bot is obliged to acknowledge they realize it’s for research and entertainment purposes only, that it may make false or offensive claims, and that they agree to not purposefully cause the bot to make such claims.”
The chatbot, BlenderBot 3, was released to the general public on Friday.
The program “learns” from large amounts of publicly available language data.
Asked about Mark Zuckerberg, the chatbot responded: “He did a poor job at testifying before congress. It causes me to worry for our nation.”
US politicians have questioned Mr. Zuckerberg on numerous occasions, most notably in 2018.
“He didn’t help at all with the division in our country,” the chatbot continued.
“He doesn’t care that his business takes advantage of individuals for profit,” it exclaimed. “It must stop!”
Meta has come under fire for not doing enough to stop the spread of misinformation and hate speech on its platforms. Last year, former employee Frances Haugen accused the company of prioritizing profits over online safety.
The company owns Facebook, Facebook Messenger, Instagram, and WhatsApp, four of the biggest social media platforms and messaging services in the world.
BlenderBot 3’s algorithm searches the internet to inform its answers. Its opinions of Mr. Zuckerberg were likely “learned” from other people’s views that it has analyzed.
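The search-then-answer behavior described above can be illustrated with a toy sketch. This is not Meta’s actual code; the corpus, function names, and reply logic are all hypothetical stand-ins showing how a bot that retrieves text before responding will inherit the opinions present in what it retrieves.

```python
# Toy illustration (hypothetical, not BlenderBot 3's real implementation):
# the bot first "searches" for relevant text, then conditions its reply
# on whatever it found -- including any opinions in the retrieved text.

# Tiny stand-in for the public web pages a bot might consult.
CORPUS = {
    "mark zuckerberg": "Mark Zuckerberg testified before Congress in 2018.",
    "blenderbot": "BlenderBot 3 is a prototype chatbot released by Meta.",
}

def search(query: str) -> str:
    """Return the first corpus snippet whose key appears in the query."""
    q = query.lower()
    for key, snippet in CORPUS.items():
        if key in q:
            return snippet
    return ""

def reply(query: str) -> str:
    """Base the bot's answer on whatever the search step retrieved."""
    evidence = search(query)
    if evidence:
        return f"Here is what I found: {evidence}"
    return "I have never heard of that."

print(reply("What do you think of Mark Zuckerberg?"))
print(reply("Who is John Doe?"))
```

Because the reply is built directly from retrieved text, whatever sentiments that text carries flow straight into the bot’s answers.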
The Wall Street Journal reported that BlenderBot 3 told one of its journalists that Donald Trump was, and always would be, the US president.
A journalist for Business Insider claimed that the chatbot described Mr. Zuckerberg as “creepy.”
Meta risked its reputation by releasing BlenderBot 3 to the public for a reason: it needs data.
According to a blog post by Meta, “allowing an AI system to engage with people in the real world leads to longer, more varied dialogues, as well as more varied feedback.”
Chatbots that learn from human interactions can pick up both good and bad behavior.
In 2016, Microsoft apologized after Twitter users taught its chatbot to make racist remarks.
Meta acknowledges that BlenderBot 3 may say the wrong thing or mimic language that is “unsafe, prejudiced, or offensive.” Although the company said it had put safeguards in place, the chatbot may still be rude.
When I asked BlenderBot 3 what it thought of me, it claimed never to have heard of me.
It stated, “He must not be very well-liked.”