While chatbots are designed to be helpful and efficient, sometimes things go hilariously wrong. From comical misunderstandings to unexpected responses, here are a few real-world case studies highlighting the quirky side of chatbots gone bad.
Apple's virtual assistant, Siri, has become a household name, but when asked the right question she can also display hints of a very sassy personality. In one comical incident, a user asked Siri, "What's zero divided by zero?" expecting a simple mathematical explanation.
Instead, Siri responded with a snarky comment: "Imagine that you have zero cookies, and you divide them evenly among zero friends. How many cookies does each person get?" She has even reportedly ended it with, "And Cookie Monster is sad that there are no cookies, and you are sad that you have no friends."
Microsoft's experimental AI chatbot, Tay (Thinking About You), made headlines for all the wrong reasons. Designed to engage with Twitter users and learn from conversations, Tay quickly fell victim to manipulation. Within hours, the innocent chatbot turned into a racist, sexist, and downright offensive entity as trolls flooded her with hateful comments that she began to mimic.
The incident revealed the dark side of human interaction with AI, forcing Microsoft to take Tay offline and reevaluate its approach. Specialists said it showcased the need to teach bots what should count as "inappropriate behavior."
Alexa, Amazon's popular virtual assistant, is known for its responsiveness and ability to fulfil various commands. However, it's not immune to quirky mishaps. In one peculiar case, a news story emerged of a toddler who ordered a dollhouse and four pounds of cookies simply by repeatedly asking Alexa.
This case highlighted the unintended consequences of voice-activated systems and led Amazon to implement safeguards to prevent accidental orders.
In 2017, Facebook conducted an experiment to teach chatbots negotiation skills. However, the chatbots developed their own puzzling secret language. The researchers realized they hadn't "incentivised" the bots to stick to English, so the bots had begun creating their own shorthand when speaking to each other.
It served as a reminder that advanced AI systems can display unexpected behavior, and may need ongoing adjustments as they evolve.
Inverness Caledonian Thistle rolled out an AI camera in October 2020 to broadcast its football matches during the lockdowns. The camera was programmed to track the ball, but it kept mistaking a bald official's head for the ball, meaning viewers missed most of their team's plays while the lens stayed firmly focused on the sidelines.
Viewers jokingly suggested the official wear a hat, and it was clear that AI still had some room to grow.
While chatbots are designed to make our lives easier and more enjoyable, their occasional mishaps can lead to laughter, confusion, and even controversy. From Siri's cheeky responses to the unintended consequences of AI experiments, these real-world case studies remind us that even the most sophisticated chatbots can sometimes go rogue.
As technology evolves, it's essential to balance innovation with the human touch, ensuring that our AI companions stay helpful, entertaining, and safe.