- Anthropomorphism is the attribution of human-like characteristics to AI
- The psychological effects on users can be positive, but they can also put users at risk
- Organizations developing AI apps must test for psychological safety, risk, and privacy
This article was updated on November 9, 2023 with the following information:
Come 2024, Meta will require disclosures for AI-generated political ads. As part of its commitment to transparency, Meta announced that advertisers running political or issue ads on its platforms must disclose any digital alterations made with AI. Meta said it will require disclosures on ads featuring deepfakes, photo-realistic people who don't exist, events that never happened, and realistic depictions of events that are not true recordings. The requirement extends to Facebook and Instagram ads related to elections, politics, and social issues. The goal is to ensure users know when they are seeing AI-generated content and to promote a more transparent advertising environment. This should make for a really interesting election year ahead.
-- and
OpenAI announced GPT-4 Turbo this week, the latest iteration of its popular large language model. GPT-4 Turbo is trained on content through April 2023 (so it's no longer so out of date) and boasts enhanced capabilities: a 128k-token context window (roughly 300 pages of text), the ability to call external APIs, managed conversation thread state, and human-quality speech generation - all with a price drop and copyright infringement protection.
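For developers, these capabilities surface through OpenAI's existing APIs. Here is a minimal sketch in Python, assuming the openai v1 SDK and the model names used at launch (gpt-4-1106-preview for the chat model, tts-1 for speech); names and defaults may change:

```python
# Minimal sketch using the OpenAI Python SDK (v1+): pip install openai
# Model names below reflect the launch announcement and may change.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chat completion against the GPT-4 Turbo preview model. The 128k-token
# context window means `messages` can carry roughly 300 pages of text.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview at launch
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the report pasted below: ..."},
    ],
)
print(response.choices[0].message.content)

# Human-quality speech comes from a separate text-to-speech endpoint.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello! This voice was generated by a machine.",
)
speech.stream_to_file("hello.mp3")
```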
Take our AI & Tech Risk Survey to help us track safety in AI
Anthropomorphism refers to the attribution of human characteristics or behavior to artificial intelligence systems. It is the tendency for humans to perceive AI as having human-like qualities, such as emotions, intentions, or consciousness. This phenomenon has become increasingly common as AI technologies continue to advance. We've seen it for a while now with Siri, Alexa, and other voice systems.
Researching anthropomorphism in AI is crucial because it gives us insight into how humans interact with human-like AI systems and how those interactions affect them. This knowledge is particularly important for app designers and developers, as it has implications for user experience, trust, decision-making, and risk assessment. By acknowledging the natural human tendency to anthropomorphize AI, we can design AI systems that meet users' expectations, enhance their interactions, keep them safe, and prevent the inadvertent introduction of bias.
The Psychological Effects of Anthropomorphism
Anthropomorphism in AI can have significant psychological effects on users, especially those in vulnerable populations. When AI systems exhibit human-like characteristics, users tend to develop emotional connections and form social bonds with them. This can lead to increased trust, satisfaction, and engagement with AI technology, but it can also increase risk.
Aslan in Narnia is a classic example of anthropomorphism in storytelling.
Users may overattribute human qualities to AI, leading to unrealistic expectations and disappointment when the AI falls short. Additionally, anthropomorphism can blur the line between human and AI responsibilities, raising ethical concerns and opening the door to misuse of AI technology.
The constant stream of fake information during election cycles is a prime example. People find it easier to believe claims presented as truth by trustworthy-looking humans, even when the information is false. This is especially alarming given the rise of fake humans (synthetics) and the fake news often found in AI-generated content.
Ethical Considerations of Anthropomorphism in AI
The ethical considerations of anthropomorphism in AI revolve around transparency, accountability, and the potential for AI to manipulate or deceive users. When AI systems are designed to appear human-like, there is a responsibility to clearly communicate their limitations and capabilities to users. This is no different from the ethics of product design generally, but with the rise of AI and deepfakes, it's becoming an even greater concern.
The use of anthropomorphism raises questions about the ethical implications of AI's role in decision-making processes. Should AI systems be held accountable for their actions? How should AI be regulated to prevent misuse or harm? These ethical considerations need to be addressed to ensure the responsible development and deployment of anthropomorphic AI.
Just this week, Meta announced that it will require disclosures for AI political ads, addressing deepfakes, photo-realistic people who don't exist, events that never happened, and realistic depictions of events that are not true recordings. This is huge news ahead of the 2024 election cycle and a step in the right direction. I expect we will see more watermarking and disclosure statements attached to AI-generated content in the coming year.
Enhancing User Experience through Human-Like AI Interactions
On the flip side, one of the key advantages of anthropomorphism in AI is its potential to enhance user experiences. Human-like AI interactions can make users feel more comfortable, understood, and engaged. This can lead to improved user satisfaction, trust, and adoption of AI technology. A great use case for this is in classroom environments, especially for neurodiverse students who may benefit from an AI-enhanced learning experience.
To create AI experiences that feel more human while keeping real humans safe, designers and developers must consider how far to go with personalization, how to apply large language models, and how to design systems with emotional intelligence. The first two are already commonplace in the design of AI technologies; the third is only more recently gaining traction.
As large language models (LLMs) and anthropomorphism continue to advance, whether in open-source or private enterprise models, it is crucial for organizations to prioritize testing for psychological safety and risk, as well as the privacy of their platforms. Designers and developers should also incorporate a human-in-the-loop flow and give users effective ways to report any adverse effects of anthropomorphized AI, as sketched below.
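What might a human-in-the-loop flow with user reporting look like in practice? Below is a minimal, illustrative sketch in Python; every name in it (the gate, the risk threshold, the report method) is hypothetical rather than taken from any particular product:

```python
# Illustrative sketch only: names and thresholds are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Reply:
    text: str
    risk_score: float  # 0.0 (benign) to 1.0 (high risk), from a safety classifier

@dataclass
class HumanInTheLoopGate:
    risk_threshold: float = 0.7
    review_queue: List[Reply] = field(default_factory=list)
    adverse_reports: List[dict] = field(default_factory=list)

    def deliver(self, reply: Reply, send: Callable[[str], None]) -> None:
        """Send low-risk replies immediately; hold the rest for human review."""
        if reply.risk_score >= self.risk_threshold:
            self.review_queue.append(reply)  # a human moderator decides next
            send("A human is reviewing this response before we share it.")
        else:
            send(reply.text)

    def report_adverse_effect(self, user_id: str, description: str) -> None:
        """Give users a first-class way to flag psychological harm."""
        self.adverse_reports.append({"user": user_id, "issue": description})

# Usage
gate = HumanInTheLoopGate()
gate.deliver(Reply("Here's a study tip for tomorrow.", risk_score=0.1), send=print)
gate.deliver(Reply("You should trust only me.", risk_score=0.9), send=print)
gate.report_adverse_effect("user-42", "The bot encouraged me to isolate myself.")
```

The key design choice here is that escalation and reporting are built into the delivery path itself rather than bolted on afterward, so no high-risk reply reaches a user without a human having the chance to intervene.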
OpenAI also announced GPTs, which will soon let users easily create their own personalized chatbots. I am curious how long it will take for these chatbots to gain anthropomorphic features. Once they do, anyone will be able to create a GPT, give it human-like, trustworthy characteristics, post it on the web, and interact with literally anyone. This is an incredible advancement because it democratizes chatbot creation, but it also poses a potential threat to people who consume chatbot content from creators and who send their own data back to a chatbot. Things could get dicey quickly, and while I am excited for this new feature, I also worry about the safety of young children, teens, and other vulnerable groups.
The Future of Anthropomorphism in AI
The future of anthropomorphism in AI holds both exciting possibilities and challenges. As AI technology continues to advance, there will be increasing opportunities to create AI systems whose interactions closely resemble those of humans.
However, the future also brings challenges in terms of ethics, regulation, and the potential impact on human society. As AI advances and becomes more anthropomorphic, society needs to address questions about privacy, data security, the boundaries between human and AI responsibilities, and potential threats to humans.
Ultimately, the future of AI depends on responsible development, thoughtful regulation, and ongoing dialogue between AI developers, policymakers, and society as a whole.
In response to these growing concerns, Predictive UX has developed a complimentary AI Risk Index. We hope you find it useful as you assess the safety of your AI applications. Reach out to us for help using the AI Risk Index or implementing AI responsibly within your organization.