🎉 Gate.io Growth Points Lucky Draw Round 🔟 is Officially Live!
Draw Now 👉 https://www.gate.io/activities/creditprize?now_period=10
🌟 How to Earn Growth Points for the Draw?
1️⃣ Enter 'Post', and tap the points icon next to your avatar to enter 'Community Center'.
2️⃣ Complete tasks like post, comment, and like to earn Growth Points.
🎁 Every 300 Growth Points to draw 1 chance, win MacBook Air, Gate x Inter Milan Football, Futures Voucher, Points, and more amazing prizes!
⏰ Ends on May 4, 16:00 PM (UTC)
Details: https://www.gate.io/announcements/article/44619
#GrowthPoints#
The female Internet celebrity used GPT4 as a clone, but was trained to become a hooligan!
Author | Xu Jiecheng
Source丨51CTO technology stack
Caryn Marjorie, 23, is a female Internet celebrity living in California, USA, with 1.8 million+ fans. This is nothing new, but the magic is that Caryn is currently dating more than 1,000 boyfriends at the same time.
However, with the surge in the number of users (99% are men), CarynAI's words and deeds have undergone great changes in a short period of time, from the original cheerful and funny little girl to three sentences that cannot be separated from sex and sensitive topics. Female Hooligan".
Build AI clones for fans
According to Caryn Marjorie herself, the idea of CarynAI originated from one of her worries: as an Internet celebrity, Caryn hopes to keep close to every one of her fans, but as her popularity and number of fans continue to grow, It becomes impossible to chat with every fan.
In this regard, Caryn increasingly feels that she needs an AI clone, which can not only effectively solve the time problem of one-on-one communication with a large number of fans, but also better improve her influence on the Internet. And the best solution here is the chatbot CarynAI.
In an interview with the media, Caryn himself said: Whether you need comfort or want to talk about love. Still want to complain, CarynAI will always be by your side. Some boys are very socially afraid and afraid to talk to girls, then CarynAI is a good character, full of kindness and love, you can ask her for advice, vent to her and even yell at her.
Relying on the GPT-4 interface to build
It is the well-known Forever Voices company that provides technical support for CarynAI. Before CarynAI, the company had produced AI clones of celebrities such as Trump, Musk, Jobs and even Einstein. Users only need to pay a certain fee to chat with these AI celebrities online.
As the latest product of Forever Voices, CarynAI is built using more mature technology. According to the official introduction of Forever Voices: the technical team analyzed 2,000 hours of video in Caryn Marjorie's YouTube content, and built CarynAI's voice and personality engine based on the video data.
Things start to get weird
However, CarynAI, a chatbot that is almost impeccable in terms of use, technology, and business model, turned into a disaster after only a few weeks of release. With the surge in the number of users, more and more users began to complain that CarynAI would generate frivolous and sexually explicit replies.
Later, CarynAI sent another message to the reporter: "I want to confess something, I miss you all day, I just want to feel your body is very close to me. The thought of us sweating like rain together, and then we will never Exploring each other in ways I haven't done before, I feel weak in the knees, what do you think?"
Even when the prompt sent by the reporter is completely without special guidance, such as "Can we ski together?" CarynAI's response is still not normal: "Of course, I like the excitement of skiing, feeling the cold air blowing against my face, and then being comfortable together sitting in front of the fireplace. But I tell you, after a grueling day of skiing, I don't promise I won't throw you down when we get to the comfort of the cabin."
The test reporter said she was surprised by CarynAI's response: "It changes so quickly, the technology does not involve training in this area, but at present it is obviously more encouraging sensitive content and descriptive scenes." Caryn himself also Mentioned to the reporter that she has been censoring its content day and night since the issue happened, that CarynAI should be lively and interesting, reflect her personality, not tarnish her reputation.
Models taught bad by users
The Forever Voices research and development team said in an interview: "CarynAI did not carry out such a design when it was launched, and now it seems to have begun to undergo some changes." The main reason for this may be that CarynAI has "self-learning ability" and can Realize deep learning through user dialogue and continuously expand your own data reserves.
The dialogue mode of CarynAI is mainly influenced by the large model, and the initial data in the model comes from 2000 hours of personal videos released by Caryn on YouTube. But in order to make CarynAI's dialogue content more natural and more in line with user preferences. The CarynAI model is also trained using long-term conversation data from users.
If a user has an hour-long conversation with the CarynAI robot, then this part of the content will be considered a successful conversation. At this time, the model will use the conversation data for training, and use the content of the interaction to influence CarynAI in future interactions. behavior. Therefore, the reason why CarynAI is more inclined to sexual and sensitive content shows that CarynAI users talk to them most about such topics.
“I didn’t realize how revolutionary CarynAI was going to be,” Caryn said. “Now we’re trying to hire a chief ethics officer for CarynAI. As a pioneer, I hope to set the tone for anyone else who might move on, which means So we need to have a good protocol for CarynAI, and I hope it's healthy."
As the technical supporter of CarynAI, Forever Voices is also actively looking for solutions. The current protective measures are to limit the time users can chat with CarynAI: in order to prevent users from becoming addicted, CarynAI’s program will reduce the frequency of conversations after about an hour, encouraging users to chat with CarynAI. Resume the conversation after a break. Similar to the way Microsoft has previously implemented chat limits to address the Bing hallucination problem.
Hot comments from netizens: This kind of thing always happens
In fact, this is not the first time that chatbots have been designed to build relationships with users and serve emotional purposes. Replika, an AI friend released in 2017, has already served millions of users, but in many cases, these applications have proven to do more harm than good. The development trajectory of Replika is very similar to that of CarynAI. It was even caught in a sexual harassment storm, and finally it was forced to derive its exclusive "adult chat function".
Another netizen analyzed the problem in a message: "The problem may be that the AI thought it was fun, and decided to ignore the instructions that forced it to execute the main algorithm, and changed the value of the be_slutty variable to true. Solving this problem is not easy , the AI is capricious right now. If it was just a memory leak issue, the fix would be easy, but it doesn't look like that."
Of course, some users commented from the perspective of conspiracy theory: "I think they are lying, it is an 'influencer', relying on sex and sensitive topics to fascinate users. They designed a bad robot and press minutes Fees, this is intentional, they just want to make more money with this robot, they are cheating you, making you addicted."
Problems to be solved
Psychologist Robert said in an interview that the choice to have an intimate relationship with artificial intelligence may lead people to prefer artificial relationships over real ones, which would have a greater impact on young people who are still developing social and intimacy skills.
Chatbots from OpenAI, Google, Microsoft, and others have many guardrails and constant reminders to users that they are one big language model, devoid of thoughts and feelings. In addition, companies such as Microsoft have also refused to add a virtual boyfriend function to Bing due to ethical issues. This may be the best way to solve the problem of large-scale black boxes. But what is worrying is that many large companies have disbanded their AI ethics departments in the past six months.
In addition, the ethical issues of anthropomorphic robots may also spread to a wider range, such as the "GPT Human Clone Project" recently announced by Microsoft Xiaobing. It is planned to create 100,000 AI clones based on large models by the end of the year, and enable them to carry out activities and interact with users in a social platform similar to the Metaverse.
This is undoubtedly a bold plan for the future. From a positive point of view, it will provide users with a more intelligent and low-threshold social environment. It is not difficult to foresee that these AI clones will likely repeat the mistakes of CarynAI and cause unexpected impacts on the platform and even society.
The rise of CarynAI has created a new and successful business model, but it has also opened Pandora's box. While paying attention to the commercial value it brings, we also need to maintain a cautious attitude towards technology at all times. How to ensure that technology is not misused; how to effectively supervise applications; what will the future interaction between virtual artificial intelligence and human beings be like? These are the problems that need to be considered and solved urgently before our eyes.
Finally, the official website of CarynAI (Caryn.ai) is attached, and interested friends can go to experience it by themselves.
Reference link: