Artificial Emotional Intelligence




In the race for more effective marketing strategies, an enormous step forward came with artificial emotional intelligence (emotion AI). Companies have developed software that can track a person's emotions over a given period of time. Affectiva, for example, develops emotion AI that lets companies direct their marketing more precisely at consumers. Media companies and product brands can use this information to show consumers more of what they want to see, based on which products elicited positive emotions from them in the past.
Emotion tracking is accomplished by recording slight changes in facial expression and movement. The technology relies on algorithms that can be trained to recognize the features of specific expressions (1). Companies such as Unilever are already using Affectiva software in online focus groups to gauge reactions to advertisements, and Hershey is partnering with Affectiva on an in-store device that asks shoppers to smile in exchange for a treat (2).

Facial emotion recognition usually works through either machine learning or a geometric feature-based approach. The machine learning approach extracts features from face images, selects the most informative ones to train a classifier, and then classifies new data. In contrast, the geometric feature-based approach standardizes the images before detecting facial components and applying a decision function. Some investigators have reported emotion recognition accuracy above 90% (3). Emotion AI can even measure heart rate by monitoring slight fluctuations in the color of a person's face. Affectiva has developed software that works through web cameras in stores or, in the case of online shopping, in computers. Affectiva also created Affdex for Market Research, which provides companies with calculations based on the Affectiva database, so that companies have points of comparison when making marketing decisions.
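To make the geometric feature-based approach concrete, here is a minimal sketch: it computes simple distance ratios from facial landmark coordinates and applies a threshold decision function. The landmark names, coordinates, and thresholds are illustrative assumptions for this sketch, not Affectiva's actual pipeline.

```python
# Sketch of a geometric feature-based expression classifier.
# Landmarks and thresholds are hypothetical, chosen only for illustration.
from math import dist

def mouth_features(landmarks):
    """Compute geometric ratios from (x, y) landmarks, normalized by face scale."""
    face_scale = dist(landmarks["eye_left"], landmarks["eye_right"])
    width = dist(landmarks["mouth_left"], landmarks["mouth_right"]) / face_scale
    openness = dist(landmarks["mouth_top"], landmarks["mouth_bottom"]) / face_scale
    return width, openness

def classify_expression(landmarks, smile_width=0.90, open_mouth=0.35):
    """Decision function: a wide mouth relative to eye distance reads as 'happy'."""
    width, openness = mouth_features(landmarks)
    if width > smile_width:
        return "happy"
    if openness > open_mouth:
        return "surprised"
    return "neutral"

# Two hypothetical faces: a neutral mouth and a widened (smiling) mouth.
neutral = {"eye_left": (30, 30), "eye_right": (70, 30),
           "mouth_left": (38, 70), "mouth_right": (62, 70),
           "mouth_top": (50, 68), "mouth_bottom": (50, 74)}
smiling = dict(neutral, mouth_left=(30, 68), mouth_right=(70, 68))

print(classify_expression(neutral))  # neutral
print(classify_expression(smiling))  # happy
```

A production system would obtain the landmarks from a face detector and use a trained classifier rather than hand-set thresholds, but the structure — standardize, measure geometry, decide — is the same.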
In the future, Affectiva wants to expand into the healthcare field, as monitoring emotions has the potential to help people who are at risk for certain mental illnesses, such as depression (4). Steven Vannoy, a mental health services researcher, is developing an app using Affectiva's software that would monitor a user's emotions through check-ins. These check-ins would have the user describe how they are feeling about the future, who they are with, and what they are doing. This information would be used to predict a user's short-term risk of suicide (5).
Affectiva has the largest emotion database in the world, containing over 6.5 million faces from 87 countries (6). The data comes from video recordings of people in natural environments, such as in a car, home, or office. Each person in the database opted in to have their faces recorded, with the option to opt out at any time. The top three countries represented in the dataset are India, the USA, and China. The data is used to find more examples and variations of expressions that the active learning algorithms can learn from. The dataset also provides opportunities to understand how emotions are expressed differently across cultures (7).
Given the personal nature of the data this software collects, emotion AI raises many ethical concerns that should be addressed before the technology is implemented further. The first concern stems from the way the technology works: using emotion data to personalize marketing could play on emotions in a potentially harmful way. For example, advertisements that appeal to conscious or subconscious emotional responses to food can lead many people toward unhealthier choices. Children would be particularly vulnerable if they were targeted with advertisements for low-nutrition, high-fat foods, so emotion AI could worsen health outcomes for future generations (8).

Children are not the only vulnerable population that would need to be protected from emotion AI. Using emotions to customize advertising can also cause harm when products such as cigarettes are featured, especially for consumers who smoke or have smoked regularly in the past. People addicted to any substance and patients with psychiatric disorders fall into this vulnerable population as well (9). Because emotion data gives marketers a greater capacity to manipulate, lawmakers, neuroscientists, ethics experts, and developers of emotion AI need to consider how these populations can be protected.

Another major ethical concern surrounding emotion AI is that the data could fall into the wrong hands. Affectiva's products are available to developers, so it is important to regulate how far the data can travel and what developers can do with it. Personal emotion data could have negative consequences if sold to insurance companies, for example. Insurers could use Affectiva software to track, on a daily basis, the emotions of policyholders who consent, and lower premiums for those who show more positive emotions (10). Such a system would put people who decline to be tracked at an unfair disadvantage.
A third ethical concern with implementing emotion AI relates to consent. If stores install video cameras that measure customer emotions, shoppers who are uncomfortable with the technology can avoid being filmed only by not entering the store at all, which acts as implicit coercion to accept it. Similarly, would consent to be recorded be required only from the online shopper, or would friends and family near the shopper also need to consent to being filmed? Lawmakers and regulators need to draw strict boundaries to protect privacy and to ensure that emotions are captured only from people who have given informed consent.
Overall, artificial emotional intelligence has the potential to increase the efficiency of marketing strategies. It could even save lives by analyzing the daily emotions of people at risk for suicide. However, artificial emotional intelligence should still be regulated to ensure that it is implemented in an ethical manner.


1. Morsy, A. (2016). Emotional Matters: Innovative software brings emotional intelligence to our digital devices. IEEE Pulse, 7(6), 38-41.

2. Darrow, B. (2015, September 11). Computers can’t read your mind yet, but they’re getting closer. Retrieved from

3. Mehta, D., Siddiqui, M., & Javaid, A. (2018). Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality. Sensors, 18(2).

4. Jarboe, G. (2018, June 11). What is Artificial Emotional Intelligence & How Does Emotion AI Work? Retrieved from

5. Affectiva. (2017, August 14). SDK on the Spot: Suicide Prevention Project with Emotion Recognition. Retrieved from

6. SDK. (n.d.) Retrieved from

7. Zijderveld, G. (2017, April 14). The World’s Largest Emotion Database: 5.3 Million Faces and Counting. Retrieved from

8. Jain, A. (2010). Temptations in cyberspace: New battlefields in childhood obesity. Health Affairs, 29(3), 425-429.

9. Ulman, Y., Cakar, T., & Yildiz, G. (2015). Ethical Issues in Neuromarketing: “I Consume, Therefore I am!”. Science and Engineering Ethics, 21(5), 1271-1284.

10. Libarikaian, A., Javanmardian, K., McElHaney, & Majumder, A. (2017, May). Harnessing the potential of data in insurance. Retrieved from
