Dr Gebhard, how can job interviews be revolutionised by virtual avatars?

"This article was published in our April 2018 newsletter". Sign up here.

"A job interview is a challenge for anyone. Applicants are under huge pressure the entire time, as they want to make the best possible impression. This stressful situation is of interest to researchers, however, as it gives them a special insight into our emotions: how do we handle our feelings, and which external factors trigger such emotions in us? We can use this knowledge to prepare applicants better for their interviews.

This was one objective of the ‘EmpaT’ research project. Conducted at the German Research Center for Artificial Intelligence (DFKI), it was funded by the Federal Ministry of Education and Research (BMBF). We developed an interactive 3D training environment that allows applicants to test their social and emotional skills in a dialogue with a virtual avatar, that is to say a digital, computer-generated character.

Emotions influence the interview

During the course of the training experience, candidates find out which emotions the interview triggers in them, and how this influences the way the interview proceeds. The more precisely a person is aware of their emotions, the more confidently they will be able to deal with them. In this sense, virtual avatars can indeed revolutionise the way we prepare for job interviews. However, this presupposes that the training environment is able to register the emotions correctly, which of course entails a whole host of challenges.

When we attempt to understand another person’s emotions, we pay attention to social signals and paint a picture of the emotional state of that person – creating what is known as a ‘theory of mind’. One goal of the project was to develop a computational model of this ‘theory of mind’. The model applies not only to internal emotions like shame, but also to the way we regulate such emotions, and it helps us classify the corresponding behavioural responses.
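To make this more concrete, here is a minimal Python sketch of how a single ‘theory of mind’ hypothesis might be represented. All names and categories are purely illustrative assumptions for this article, not the actual EmpaT model:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative categories only; the real model is considerably richer.
class Emotion(Enum):
    SHAME = "shame"
    PRIDE = "pride"
    FEAR = "fear"

class RegulationStrategy(Enum):
    WITHDRAWAL = "withdrawal"    # pulling out of the conversation
    DISTRACTION = "distraction"  # steering attention away from the emotion

@dataclass
class TheoryOfMindHypothesis:
    """One hypothesis about an applicant's hidden emotional state."""
    emotion: Emotion                # the internal emotion (e.g. shame)
    regulation: RegulationStrategy  # how the person copes with it
    expected_signals: list[str]     # observable behaviour this predicts
    probability: float              # confidence the model assigns

# Example: shame that is masked by withdrawing from the dialogue.
hypothesis = TheoryOfMindHypothesis(
    emotion=Emotion.SHAME,
    regulation=RegulationStrategy.WITHDRAWAL,
    expected_signals=["averted gaze", "shorter answers", "quieter voice"],
    probability=0.6,
)
```

Separating the felt emotion from the regulation strategy is what allows such a model to reason about behaviour that deliberately hides a feeling.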

Face to face with an avatar

In the training environment, the job applicant sits opposite an avatar that appears on a screen. The system uses a 3D camera to capture information about the applicant 60 times per second, for example the position of their head and eyes. In addition, a microphone records the conversation, while software analyses speech features such as the frequency of the voice or the keywords used. In parallel, the model draws up hypotheses about the applicant’s internal emotions and attempts to detect interrelated signals across these channels.
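As a rough illustration of how such a sensing loop could be structured, the sketch below couples a 60 Hz capture cycle with a hypothesis update step. The sensor functions are hypothetical stand-ins for this article, not the project’s actual interfaces:

```python
import time

def read_camera_frame():
    """Stand-in for the 3D camera: head and eye position."""
    return {"head_pose": (0.0, 0.0, 0.0), "gaze_direction": (0.0, 0.0, 1.0)}

def read_audio_features():
    """Stand-in for the speech analysis: voice frequency and keywords."""
    return {"frequency_hz": 180.0, "keywords": ["team", "deadline"]}

def update_hypotheses(camera, audio, hypotheses):
    """Combine the interrelated signals into updated emotion hypotheses."""
    # A real system would update a probabilistic model here; this sketch
    # merely records that both modalities were observed together.
    hypotheses.append({"camera": camera, "audio": audio})
    return hypotheses

SAMPLE_INTERVAL = 1.0 / 60.0  # the system captures 60 times per second
hypotheses = []

for _ in range(60):  # one simulated second of sensing
    camera = read_camera_frame()
    audio = read_audio_features()
    hypotheses = update_hypotheses(camera, audio, hypotheses)
    time.sleep(SAMPLE_INTERVAL)
```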

The reason for the latter is as follows: when two people sit opposite one another and one of them smiles, it is important to know where this smile is directed. Someone who smiles off to the side is pleased about something they are thinking about. If they smile towards the person opposite them, the smile is meant for that person. This shows that the context of a signal is just as important to its meaning as the signal itself.
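One simple way to operationalise this reading of a smile is to compare the applicant’s gaze direction with the direction of the conversation partner. The function below is an illustrative sketch that assumes both directions are available as 2D unit vectors; the angle threshold is an arbitrary choice:

```python
import math

def interpret_smile(gaze, partner, angle_threshold_deg=15.0):
    """Decide whether a smile is directed at the conversation partner.

    gaze and partner are 2D unit vectors in the horizontal plane.
    """
    dot = sum(g * p for g, p in zip(gaze, partner))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle <= angle_threshold_deg:
        return "smile directed at the partner"  # the smile is meant for them
    return "smile to the side"  # pleased about a private thought

# Gaze straight at the partner vs. turned away to the side.
print(interpret_smile((0.0, 1.0), (0.0, 1.0)))  # smile directed at the partner
print(interpret_smile((1.0, 0.0), (0.0, 1.0)))  # smile to the side
```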

Future medical applications

Without this context, sensors find it very difficult to identify emotions. This is because people use different strategies to conceal unpleasant feelings such as shame; they may withdraw from the conversation or divert attention from the emotion in question. We currently resort to a trick in order to see through this pattern: we have designed the interview dialogue such that we know the precise point at which a sense of shame is normally provoked. One challenge for our future research is to be able to detect such emotions at any time.
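One way to picture this trick is a dialogue script in which each step is annotated with the emotion it is expected to provoke, so the analysis only needs to look for concealment strategies at the known point. The format below is a hypothetical sketch, not the project’s actual dialogue representation:

```python
# Each scripted question is annotated with the emotion it normally provokes.
INTERVIEW_SCRIPT = [
    {"question": "Please introduce yourself.", "expected_emotion": None},
    {"question": "Why do you want this job?", "expected_emotion": None},
    {"question": "Tell me about a failure you were responsible for.",
     "expected_emotion": "shame"},
]

def analyse_step(step_index, observed_signals):
    """Check for concealment only where shame is normally provoked."""
    step = INTERVIEW_SCRIPT[step_index]
    if step["expected_emotion"] == "shame":
        # The concealment strategies mentioned above.
        if {"withdrawal", "distraction"} & set(observed_signals):
            return "shame likely, but concealed"
        return "shame likely, openly shown"
    return "no specific emotion expected at this point"

print(analyse_step(2, ["withdrawal"]))  # shame likely, but concealed
```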

The training environment may also prove useful for other applications at a later date, for example when it comes to the rehabilitation of patients suffering from neuro-psychological conditions. People with facial nerve paralysis for example – which normally affects only one side of the face – often complain that others cannot correctly interpret their emotional state from their facial expressions. This prompts them to withdraw increasingly from public life. Such patients undergo training to regain better control of their facial muscles – an adapted version of our test environment could give them valuable support.

The model is also showing considerable promise as far as job interview training is concerned: in initial scientific studies with 52 participants, test subjects who completed the training were less afraid of the interview, showed improved non-verbal behaviour and performed better overall."

 

Dr Patrick Gebhard

Patrick Gebhard runs the Affective Computing Group at the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken. His research focuses on computer modelling of emotions such as fear, shame or pride. Among other things, Gebhard and his team are developing tools that will allow credible virtual characters to be created for interactive training or learning applications. The DFKI is an international Centre of Excellence with facilities at various sites, including in Kaiserslautern, Saarbrücken, Bremen and Berlin.

www.dfki.de