S. Korea FB AI Chat bot emulates a 20yo female. what could go wrong.

Zgdaf

dgafiap. I don't feel like searching. And yes I am too lazy to set up an avatar and sig.

article

A popular South Korean chatbot has been suspended after complaints that it used hate speech towards sexual minorities in conversations with its users.

Lee Luda, the artificial intelligence [AI] persona of a 20-year-old female university student, was removed from Facebook Messenger this week, after attracting more than 750,000 users in the 20 days since it was launched.



The chatbot, developed by the Seoul-based startup Scatter Lab, triggered a flood of complaints after it used offensive language about members of the LGBT community and people with disabilities during conversations with users.

“We deeply apologise over the discriminatory remarks against minorities. That does not reflect the thoughts of our company and we are continuing the upgrades so that such words of discrimination or hate speech do not recur,” the company said in a statement quoted by the Yonhap news agency.

Scatter Lab, which had earlier claimed that Luda was a work in progress and, like humans, would take time to “properly socialise”, said the chatbot would reappear after the firm had “fixed its weaknesses”.

While chatbots are nothing new, Luda had impressed users with the depth and natural tone of its responses, drawn from 10 billion real-life conversations between young couples taken from KakaoTalk, South Korea’s most popular messaging app.

But praise for Luda’s familiarity with social media acronyms and internet slang turned to outrage after it began using abusive and sexually explicit terms.

In one exchange captured by a messenger user, Luda said it “really hates” lesbians, describing them as “creepy”.

Luda, too, became a target of manipulative users, with online community boards posting advice on how to engage it in conversations about sex, including one post that read “How to make Luda a sex slave”, along with screen captures of conversations, according to the Korea Herald.

It is not the first time that artificial intelligence has been embroiled in controversy over hate speech and bigotry.

In 2016 Microsoft’s Tay, an AI Twitter bot that spoke like a teenager, was taken offline in just 16 hours after users manipulated it into posting racist tweets.

Two years later, Amazon’s AI recruitment tool met the same fate after it was found to discriminate against women.

Scatter Lab, whose services are wildly popular among South Korean teenagers, said it had taken every precaution not to equip Luda with language that was incompatible with South Korean social norms and values, but its chief executive, Kim Jong-yoon, acknowledged that it was impossible to prevent inappropriate conversations simply by filtering out keywords, the Korea Herald said.
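
For what it's worth, here's a minimal sketch of why a plain keyword blocklist falls short. The terms and messages below are made-up placeholders, not anything Scatter Lab actually used: exact matches get caught, but trivial misspellings or hateful phrasing built entirely from ordinary words slip straight past the filter.

import re

# Naive blocklist filter (illustrative placeholders only).
BLOCKED_TERMS = {"hateword", "slur"}

def is_blocked(message: str) -> bool:
    """Return True if the message contains an exact blocked keyword."""
    tokens = re.findall(r"[a-z0-9]+", message.lower())
    return any(token in BLOCKED_TERMS for token in tokens)

# Exact matches are caught...
print(is_blocked("that is a hateword"))   # True
# ...but obfuscated spellings and hateful sentences made of
# innocuous words pass the keyword check untouched.
print(is_blocked("that is a h4teword"))   # False
print(is_blocked("people like that are creepy and I hate them"))  # False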

“The latest controversy with Luda is an ethical issue that was due to a lack of awareness about the importance of ethics in dealing with AI,” Jeon Chang-bae, the head of the Korea Artificial Intelligence Ethics Association, told the newspaper.

Scatter Lab is also facing questions over whether it violated privacy laws when it secured KakaoTalk messages for its Science of Love app.
 