Eight LPVS students win scholarships for the year 2022

The Visual and Social Perception Laboratory is pleased to announce that several of its students have distinguished themselves by receiving scholarships in the provincial and federal funding competitions for the year 2022.

Pierre-Louis Audette, Marie-Claude Desjardins, Vicki Ledrou-Paquet, Danielle Samson and Jérémy Lamontagne have been awarded Undergraduate Student Research Awards (USRA) by the Natural Sciences and Engineering Research Council of Canada (NSERC).

At the master’s level, Pierre-Louis Audette received a Canada Graduate Scholarship – Master’s (CGS-M) from the Natural Sciences and Engineering Research Council of Canada (NSERC), while Vicki Ledrou-Paquet and Jessica Limoges won Canada Graduate Scholarships – Master’s (CGS-M) from the Social Sciences and Humanities Research Council of Canada (SSHRC). Finally, Marie-Claude Desjardins received a master’s award (B1) from the Fonds de recherche du Québec – Nature et technologies (FRQNT).

At the doctoral level, Francis Gingras and Guillaume Lalonde-Beaudoin were awarded Postgraduate Scholarships – Doctoral (PGS-D) by the Natural Sciences and Engineering Research Council of Canada (NSERC).

The LPVS team would like to congratulate all these students on their hard work and to wish them the best of luck in their graduate studies.

LPVS wins two awards at NeuroQAM events in fall 2021

The LPVS team distinguished itself at the events organized by the NeuroQAM group last fall.
First, Danielle Samson, Marie-Pier Plouffe-Demers and Camille Saumure won third place in the scientific popularization contest held last September. Here is the link to watch their presentation:

The impact of the cultural environment on the perception of pain

SAMSON, Danielle; PLOUFFE-DEMERS, Marie-Pier; SAUMURE, Camille


Then, Pierre-Louis Audette won a prize for his poster presentation at the Science Day held on November 25 and 26. Here is the abstract of his presentation:

The impact of facial contour on the efficiency of perceptual integration
Pierre-Louis Audette, undergraduate (bachelor’s) student, Université du Québec en Outaouais

BLAIS, Caroline; FISET, Daniel

A classic hypothesis in the field of face recognition is that faces represent a “special” class of stimuli. According to this idea, faces are recognized through holistic processing, i.e., the face as a whole has an advantage in visual processing compared to the sum of the processing of all its isolated parts. Gold et al. (2012) proposed an experimental paradigm to measure the advantage of the whole over the sum of its parts. This paradigm requires measuring the level of contrast needed to achieve a pre-specified level of performance (e.g., 75%) in 5 experimental conditions manipulating the information available to participants: left eye, right eye, nose, mouth, and the four features combined. An integration index is then calculated by dividing the square of the participant’s sensitivity for complete faces by the sum of the squared sensitivities for the isolated features. However, this experimental paradigm does not include the face contour, a feature that could influence the efficiency of perceptual integration. In the present study, we added the natural contour as an isolated-feature condition and a full-face condition including the natural contour. We tested 6 participants (2,520 trials per participant) on these seven conditions to compare the integration index with and without the natural contour. Five of the six participants had a higher integration index with the natural contour included, suggesting a positive impact of this feature on perceptual integration.
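The integration index described above can be illustrated in a few lines of code. This is a minimal sketch, not the study’s actual analysis; the function name and sensitivity values are hypothetical.

```python
import numpy as np

def integration_index(d_whole, d_parts):
    """Integration index in the style of Gold et al. (2012): the squared
    sensitivity (d') for the complete face divided by the sum of the
    squared sensitivities for its isolated features. Values above 1
    indicate an advantage of the whole over the sum of its parts."""
    return d_whole ** 2 / np.sum(np.square(d_parts))

# Hypothetical sensitivities for left eye, right eye, nose and mouth
parts = [0.9, 0.8, 0.6, 0.7]
whole = 1.8
index = integration_index(whole, parts)  # > 1 suggests holistic integration
```

Adding the natural contour as a fifth isolated feature simply extends `parts` by one entry.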

The LPVS team would like to congratulate the four students on these distinctions and on the quality of their presentations. The LPVS also thanks all the students and researchers involved in these research projects.

Seven LPVS students win scholarships for the year 2021

The Visual and Social Perception Laboratory is pleased to announce that several of its students have distinguished themselves by receiving scholarships in the federal funding competitions for the year 2021.

Pierre-Louis Audette, Marie-Claude Desjardins, Vicki Ledrou-Paquet, Jessica Limoges, Arianne Richer, Juana Rocomo and Danielle Samson, all undergraduate students, have been awarded Undergraduate Student Research Awards (USRA) by the Natural Sciences and Engineering Research Council of Canada (NSERC).

The LPVS team would like to congratulate all these students on their hard work and to wish them the best of luck in their studies.

LPVS Receives Two Awards During NeuroQAM 2020 Conference

The Laboratoire de Perception Visuelle et Sociale distinguished itself at the NeuroQAM 2020 conference, where two of its students received awards for the quality of their presentations.

Marie-Pier Plouffe-Demers received the prize for the best “datablitz”-type presentation. Here is a summary of her presentation (translated from French):

Impact of Gender on Discrimination of Pain Intensity

Marie-Pier Plouffe-Demers, Ph.D. student, Université du Québec à Montréal

SAUMURE, Camille; FISET, Daniel; CORMIER, Stéphanie; KUNZ, Miriam; BLAIS, Caroline

Studies have shown that women have an advantage in discriminating the facial expression of pain, yet few have aimed to understand the underlying differences in visual strategies. This study used the Bubbles method to measure the performance and visual strategies of 72 participants (37 men). Two “bubblized” avatars (2 genders x 4 intensity levels) were presented on each of the 1512 trials. Participants had to determine which of the two expressed the higher level of pain. Accuracy was held constant at 75%, with the number of bubbles required to reach this threshold serving as an indicator of task performance. Results show that men required a higher number of bubbles (M=56, SD=23.16) than women (M=44.5, SD=20.81), suggesting better performance in women [t(70)=2.22, p=0.029]. Even though both genders use similar regions of the face (i.e., eyes, eyebrows and nose), men use smaller regions than women [t(70)=2.43, p=0.017].
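The Bubbles method referred to above reveals random face regions through Gaussian apertures. Below is a minimal sketch of how such a mask might be built; all parameter values are hypothetical, and the published method also samples across spatial frequency bands, which this sketch omits.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Sum of randomly centred Gaussian apertures, clipped to [0, 1].
    Multiplying a grayscale face image by this mask reveals only the
    regions that fall under the 'bubbles'."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

rng = np.random.default_rng(0)
mask = bubbles_mask((128, 128), n_bubbles=45, sigma=8, rng=rng)
# stimulus = face_image * mask  (face_image would be a 128x128 array)
```

The number of bubbles is the quantity adapted across trials to hold accuracy at 75%.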

Marie-Claude Desjardins also received a prize, for the best oral presentation. Here is a summary of her presentation (translated from French):

Link Between Visual Representations of the Facial Expression of Pain and Estimating Pain in Others.

Marie-Claude Desjardins, bachelor’s student, Université du Québec en Outaouais

BLAIS, Caroline; LÉVESQUE-LACASSE, Alexandra; CHARBONNEAU, Carine; FISET, Daniel; CORMIER, Stéphanie

The underestimation of pain felt by others is a well-documented phenomenon, yet we still do not sufficiently grasp the role visual perception plays in this bias. We verified whether sensitivity to variations in the intensity of others’ pain, and the tendency to underestimate the pain reported by others, were linked to variations in visual representations (VRs) of the pain facial expression. 73 participants completed a reverse correlation task to extract their VRs; their sensitivity and estimation bias were measured by having them estimate the pain levels of individuals seen in videos. Sensitivity and estimation bias were shown to be linked to variations in VRs. Higher sensitivity is linked to more intense VRs (χ2(1)=23.5, p<0.001) and a higher saliency of the eyebrow region (χ2(5)=47.2, p<0.001). Underestimation of pain is linked to less intense VRs (χ2(1)=11.7, p<0.001) and a higher saliency of the mouth region (χ2(5)=41.7, p<0.001).
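Reverse correlation, the task used here to extract visual representations, averages the noise patterns that drove each choice. A minimal sketch under simplified assumptions (one noise field per trial and a binary choice label; all names are hypothetical):

```python
import numpy as np

def classification_image(noise_fields, chose_plus_noise):
    """Average the noise on trials where the noise-added face was chosen,
    minus the average noise on the remaining trials. Pixels with large
    positive values pushed judgments toward the target category."""
    chosen = noise_fields[chose_plus_noise].mean(axis=0)
    rejected = noise_fields[~chose_plus_noise].mean(axis=0)
    return chosen - rejected

# Hypothetical data: 500 trials of 64x64 white noise, random choices
rng = np.random.default_rng(1)
noise = rng.standard_normal((500, 64, 64))
choices = rng.integers(0, 2, size=500).astype(bool)
ci = classification_image(noise, choices)
```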

The LPVS wishes to congratulate the two students for their prizes and the quality of their presentations. The team also congratulates all students and researchers involved in the projects.

Caroline Blais Obtains a Canada Research Chair in Visual and Social Perception

Dr. Caroline Blais, co-director of the Laboratoire de Perception Visuelle et Sociale, has recently been awarded the Canada Research Chair in Visual and Social Perception by the Natural Sciences and Engineering Research Council (NSERC) of the Government of Canada. This chair will allow her to investigate and better understand how social factors such as culture can affect the many stages of visual perception.

Here is a summary of the projects associated with this research chair, also available on the Canada Research Chairs website:

In the current context of globalization and multiculturalism, it is increasingly important that we understand how our visual and sociocultural environments affect visual perception. Dr. Caroline Blais, Canada Research Chair in Cognitive and Social Vision, aims to increase this understanding.

Most visual perception studies to date have been conducted on Westerners, and the few cross-cultural ones have compared only two cultures at a time. Blais and her research team hope to increase data diversity by studying visual processing and the communication of social signals across different cultural groups.

The entire LPVS team wishes to congratulate Dr. Blais on this honor and wishes her success in her new research work.

Five LPVS Students Obtain Scholarships for the Year 2020

The Laboratoire de Perception Visuelle et Sociale is pleased to announce that a number of its students distinguished themselves by receiving scholarships for their graduate studies in various provincial and national competitions.

At the master’s level, Kim Calvé, Michaël Massicotte and Francis Gingras have received Canada Graduate Scholarships (CGS-M) from the Natural Sciences and Engineering Research Council (NSERC) of the Government of Canada.

At the doctoral level, Joël Guérette has been awarded a doctoral scholarship (B2) by the Fonds de recherche du Québec – Société et culture (FRQSC). Isabelle Charbonneau has also received the prestigious Joseph-Armand Bombardier scholarship, given by the Social Sciences and Humanities Research Council (SSHRC) of the Government of Canada.

The LPVS team extends its heartfelt congratulations to all these students, and wishes them the best of luck in their graduate studies.

Justin Duncan Obtains his Ph. D.

The team at LPVS is proud to announce that Dr. Justin Duncan is officially the first student of the lab to obtain a Ph.D. His thesis, whose title translates as “The Orientation of Visual Information and its Role in Face Perception”, examines the importance of horizontal spatial information in face perception. To reach his conclusions, Dr. Duncan created and then applied the orientation bubbles method, which isolates spatial orientations in order to evaluate their role in face perception. In the coming year, he will begin postdoctoral studies in Switzerland with Roberto Caldara, a researcher specializing in visual cognition and social and cultural differences.

Here is a summary of his thesis:

“In the last 10 years, researchers have shown that many aspects of face perception rely on horizontal spatial information. In this thesis, I answer the following questions: I) What is its role in facial expression recognition? II) How is it related to the processing of different facial regions? III) Does it explain individual differences in face recognition ability? IV) Does the asymmetry of cerebral activity observed during face perception concord with hemispheric differences in the processing of these orientations? To answer these questions, I developed the orientation bubbles method, which allowed me to individually assess each orientation’s contribution to face perception. My first study presents experimental data showing i) that face recognition depends on the processing of horizontal information and ii) that the ability to process horizontal information is predicted by the processing of the eye region. In a second study, I show that individuals with higher face recognition performance make better use of horizontal facial information. Finally, experimental data collected in the last study show that the left visual field superiority (faces presented in the left visual field are processed more efficiently than those in the right visual field) is associated with better use of horizontal information by the right hemisphere, which is superior at face perception. Taken together, these studies support the hypothesis that the visual system’s capacity to selectively process facial information contained in horizontal spatial orientations plays a crucial role in numerous aspects of face processing, and they point toward a psychophysical mechanism from which the visual expertise found in face perception could emerge.” -Justin Duncan, Ph.D.

The LPVS gives its heartfelt congratulations to Justin; we wish you the best during your postdoc in Switzerland.

We thank you for the amazing years you have spent alongside us!

-The team at LPVS

The laboratory at the 87th ACFAS congress!

The LPVS was present at the 87th annual congress of the Association francophone pour le savoir (ACFAS), which took place at UQO this year. This congress brings together French-speaking researchers and aims to promote the dissemination of knowledge in French. Several members of our laboratory gave either an oral presentation or a poster presentation on one of their research projects. Here is an overview of the research presented by the LPVS at the congress:

Oral presentations

Seminar: Nonverbal communication: research, issues and interdisciplinary dialogues

Caroline Blais – Facial expression recognition: from fundamental visual mechanisms to sociocultural influences

The ability to recognize an individual’s facial expression is crucial for humans, both for survival and for social adaptation. As early as 1872, Darwin suggested that the facial movements produced during the expression of certain emotions (e.g., fear, anger, disgust) evolved to maximize humans’ chances of survival. This proposal remains current today (Susskind et al., 2008), although it has also been challenged by several researchers (Russell, 1994; Jack et al., 2012). While most studies in the field of facial expression have focused on characterizing the emotional signal these expressions transmit, it is equally important to investigate the strategies the visual system has developed to decode that signal. This presentation explores the question from several angles. We will see that the facial movements forming the core of a facial expression are not all equally important to the visual system. We will also see how adding properties that make an expression more natural (e.g., dynamic vs. static expressions, spontaneous vs. posed expressions) influences the information used by the visual system. Finally, we will discuss how facial expressions of emotion, as well as the visual system’s decoding strategies, can be influenced by the sociocultural environment in which an individual develops.

Seminar: Neurosciences, mental health and addiction

Isabelle Charbonneau – The impact of social anxiety on visual representations of facial expressions

Studies show that people with social anxiety have difficulty recognizing facial expressions. To better understand this deficit, we compared the visual representations (VRs) of disgust and anger in socially anxious individuals with those of control participants. The VRs of these two expressions were revealed in 40 participants (20 per group) using reverse correlation (Mangini & Biederman, 2004). On each trial, participants had to choose which of two stimuli looked the angrier or the more disgusted. Both stimuli were generated from the same face, to which a patch of random noise was added. A classification image was generated for each group and each expression by averaging the noise patches selected on each trial. The results reveal no difference between the two groups in the facial regions represented in memory for the two expressions. Subsequent analyses show, however, that the VRs of anxious participants are judged as sadder by naive participants, for both expressions evaluated (anger: [χ2(1)=24.14, p<0.001], disgust: [χ2(1)=28, p<0.001]), which is consistent with the higher level of depression observed in our anxious participants [t(38)=2.57, p=0.02]. These results suggest that the presence of depression in socially anxious individuals alters their VRs by making them sadder.


Justin Duncan – Right-hemisphere selectivity for horizontal information during face processing

Face processing is more efficient when faces are presented in the left visual field (LVF). This LVF superiority is attributed to right-hemisphere dominance for faces, but few functional explanations have been proposed (e.g., global/local processing). Recent work has revealed the importance of horizontal spatial orientations in face processing (e.g., Goffaux & Dakin, 2010). We therefore tested whether their processing differs between the hemispheres. Thirty participants completed two tasks in which faces were filtered with orientation bubbles (Duncan et al., 2017). The first, an identification task, served to establish a reference profile. In the second task, a same/different paradigm, a probe was presented to the LVF or the RVF while the other side viewed an average face. A target was then presented bilaterally, and participants had to indicate whether the probe and the target were the same person. Classification images were used to extract the diagnostic orientations (Zcrit=2.101, p<0.05; Chauvin et al., 2005). As expected, horizontal information was the best predictor of performance in the reference task, Z=3.38. This was also true for the LVF (Z=3.45), but not for the RVF (Z=–1.92). Right-hemisphere superiority would thus be linked to better processing of horizontal orientations, a mechanism that had never been explored until now.


Francis Gingras – Cultural differences in the mental representation of pain expressed by faces of an ethnicity other than the observer’s

Efficiently recognizing the facial expression of pain is crucial for responding appropriately to people who are suffering. The present study aimed to reveal the perceptual representation of the facial expression of pain for two ethnic groups. Classification images (CIs) were generated to visualize the pain representations for White and Black faces in 30 Western participants using reverse correlation (Mangini & Biederman, 2004). These CIs were then submitted to a cluster test (Chauvin et al., 2005; tcrit=3.0, k=246, p<0.025). The results reveal that the right eye and the mouth are represented differently for the two face ethnicities. Since these variations were not correlated with ethnic prejudice (all p>0.5), seven African participants were tested in an attempt to explain these differences. The goal was to determine whether the differences were attributable to the morphology of the faces or to the importance the observer assigns to pain features in a face of another ethnicity. For the African participants, pain in White faces was more strongly associated with brow furrowing than pain in Black faces (tcrit=3.0, k=246, p<0.025). These results suggest that the importance assigned to pain features is modulated by the ethnicity of the face expressing them.

Poster session

Seminar: Neurosciences, mental health and addiction

Gabrielle Dugas – The spatial orientations specific to face identification

Several studies have revealed the fundamental role of horizontal orientations in face identification. However, the tasks used so far cannot disentangle the impact of identity information from that of low-level physical information. To circumvent this problem, we used a method that precisely controls the physical differences between stimuli that can be categorized as belonging to the same identity or not. On each trial, participants (N = 10) saw a target and two response choices whose visual information was sampled with orientation bubbles. One of the choices, the correct response, was identical to the target. The possible alternative choices, created with morphing software, either shared the target’s identity or belonged to a different identity. Importantly, all alternative choices were identical in their physical distance from the target. Horizontal orientations were significantly associated with correct responses (Zcrit=2.101; Zmax=4.25, p<0.05), but only when the two response choices differed in identity. By contrast, no orientation reached the threshold when the two choices differed only in low-level information (Zmax=1.41, p>0.05). A comparison using a paired t-test revealed that horizontal processing is specific to tasks that involve recognizing a face’s identity (t(9)=2.8, p<0.05).


Marie-Pier Plouffe-Demers – The impact of gender on the ability to discriminate the facial expression of pain

Studies have shown a female advantage in recognizing pain expressions (e.g. Hill & Craig, 2004), but the impact of gender on the underlying visual strategies remains unexplored. We measured the visual strategies of 30 participants (15 men) with the Bubbles method (Gosselin & Schyns, 2001), which randomly samples facial features in 5 spatial frequency bands. On each of 1512 trials, two “bubblized” avatars (drawn from 2 genders x 4 pain intensity levels) were presented to the participant, who had to identify the one showing the higher level of pain. The intensity gap between the two faces was 100% (easy), 66% (medium) or 33% (difficult). The number of bubbles needed to maintain an average accuracy of 75% served as the performance measure (Royer et al., 2015). The results indicate that men required more bubbles (M=77.6, SD=36.8) than women (M=52.3, SD=24.5) in the most difficult condition [t(28)=2.22, p=0.04], suggesting better performance in women. The results also indicate that women relied more than men on the lowest spatial frequency band (Zcrit=2.7, p<0.05; 5.4-2.7 cycles per face). These results suggest that observer gender affects both performance and the visual strategies underlying the discrimination of the facial expression of pain.



Camille Saumure – Empathy level has an impact on mental representations of the facial expression of pain

The experience of pain triggers the contraction of facial muscles (Kunz et al., 2012), which are encoded in the observer’s mental representation (MR) (Blais et al., in revision). Exposure to pain elicits an empathic brain response (Botvinick et al., 2005) that varies with empathy level (Saarela et al., 2007). In the present study, the MRs of 54 participants (18 men) were measured with reverse correlation (Mangini & Biederman, 2004). Across 500 trials, participants had to choose which of two stimuli looked the more in pain. On each trial, the stimuli were generated from the same face, to which visual noise was added or subtracted. Empathy level was measured with the Empathy Quotient (Baron-Cohen & Wheelwright, 2004) and used as a weight to generate two classification images (CIs), for high and low empathy levels. Independent raters (N = 24) then judged the high-empathy CI to be significantly more intense in the regions associated with the expression of pain (eyebrows x=24, nose/upper lip x=10.67, eyes x=6 [p<0.05]). A difference CI (high minus low empathy), submitted to a cluster test (Chauvin et al., 2005), revealed a significant difference in the mouth region (Zcrit = 2.7, K = 90, p<0.025). These results suggest that the MR of the pain expression varies with individual differences in empathy.

Seminar: Development and functioning of individuals and communities, and social life

Joël Guérette – The influence of status following the ejection of a player or a coach in Major League Baseball

Baseball players and coaches criticize officials’ work, in part to influence their decisions. We recently showed that arguing a home-plate umpire’s call changes the area of the strike zone, giving an advantage to the team that voiced its disagreement. Since arguing is unacceptable in baseball, this advantage comes with the ejection of the offender. The ejection of a player has more serious negative consequences for a team than that of a coach. The goal of the present study was to compare the impact of a player’s versus a coach’s ejection on the area of the strike zone. To do so, we compared the size of Major League Baseball (MLB) umpires’ strike zones according to the type of ejection. Strike-zone areas were measured before and after ejections from the spatial location of pitches (72,258) collected in the MLB database for the 2008 to 2015 seasons. A bootstrap analysis based on a generalized additive model revealed a significantly larger reduction of the strike zone, t(389)=-3.66, p<0.001, following the ejection of a player compared to that of a coach. These results suggest that after the umpire ejects someone, his perception and decision-making on subsequent pitches change so as to compensate for the negative consequences of the ejection.

Congratulations everyone!!!



The LPVS at the Vision Science Society congress

The LPVS presented many research projects at the last Vision Science Society annual congress, which took place during the week of May 17 to May 22, 2019, at St. Pete Beach in Florida. This international congress brings together researchers studying vision and its components. Researchers from many fields, such as visual and perceptual psychology, neuroscience, computational vision and cognitive psychology, presented their new findings on this subject during the congress. The poster sessions were divided by theme depending on the subject of the posters presented. Here is an overview of the posters presented by our laboratory during the congress:

Poster session – Faces: Wholes, parts, features


Right hemisphere horizontal tuning during face processing

Justin Duncan1,2, Guillaume Lalonde-Beaudoin1, Caroline Blais1, Daniel Fiset1;

1 Université du Québec en Outaouais, 2 Université du Québec à Montréal




Left visual field (LVF) superiority refers to greater face processing accuracy and speed, compared to faces presented in the right VF (e.g., Sergent & Bindra, 1981). It is generally attributed to right hemisphere dominance (e.g., Kanwisher et al., 1997), but few mechanisms have been proposed for this phenomenon (e.g., global/local or low/high spatial frequency processing differences). Recent forays in the face processing literature have however revealed a critical role for horizontal spatial orientations (e.g., Goffaux & Dakin, 2010; Pachai et al., 2013). In line with these results, we verified whether orientation tuning might differ across hemispheres. Thirty participants completed two tasks measuring tuning profiles with orientation bubbles (Duncan et al., 2017). The first task was a 10 AFC identification, to generate a reference profile. The second task introduced lateralized presentations. In this task, a filtered probe face half (one of ten familiar individuals) was presented to either the LVF or RVF, while the other side viewed an average face half (randomized across trials). A target was then presented bilaterally, and participants indicated whether the probe and target were the same person. Central fixation was enforced with eye tracking (M = 97.7%, SD = 3.1% compliant trials) during the probe presentation (60 ms). Classification images were generated to extract diagnostic orientations. The statistical threshold (Zcrit = 2.101, p < 0.05) was established with the Stat4CI toolbox (Chauvin et al., 2005). As expected, horizontals predicted the best accuracy in the reference task (Z = 3.38). This relationship was also observed for the LVF (Z = 3.45), but not for the RVF (Z = –1.92). These results provide novel evidence for right hemisphere horizontal tuning for faces.
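The orientation bubbles approach filters faces in the Fourier domain so that only certain orientation bands survive. The sketch below conveys the general idea with a single Gaussian orientation band-pass filter; it is an illustration with hypothetical parameters, not the toolbox used in the study. With the convention used here (angle of vertical over horizontal frequency), horizontally oriented image structure corresponds to a 90-degree filter centre.

```python
import numpy as np

def orientation_filter(shape, center_deg, bandwidth_deg):
    """Fourier-domain weight passing energy near one orientation,
    with wrap-around handled on the 0-180 degree circle."""
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0
    dist = np.minimum(np.abs(theta - center_deg),
                      180.0 - np.abs(theta - center_deg))
    return np.exp(-dist ** 2 / (2 * (bandwidth_deg / 2.0) ** 2))

def filter_orientations(image, center_deg, bandwidth_deg):
    """Keep only the spatial-frequency energy near the given orientation."""
    weight = orientation_filter(image.shape, center_deg, bandwidth_deg)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * weight))

rng = np.random.default_rng(2)
face = rng.standard_normal((128, 128))  # stand-in for a face image
horizontal_only = filter_orientations(face, center_deg=90.0, bandwidth_deg=20.0)
```

On each trial, the method samples random orientation filters like this one, and classification images later reveal which orientations predicted accuracy.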


Identity specific orientation tuning for faces revealed by morphing Angelina into Jessica

Gabrielle Dugas1, Justin Duncan1,2, Caroline Blais1 , Daniel Fiset1;

1 Université du Québec en Outaouais, 2 Université du Québec à Montréal




Many recent studies have revealed that face recognition relies heavily on the processing of horizontal spatial orientations. However, most of those studies used tasks where it is difficult to dissociate the impact of physical face information from that of identity-specific information. To investigate this issue, we used a method designed to precisely control the physical difference between stimuli, and verified the horizontal tuning for faces of identical distances with regard to low-level properties but of different perceptual distances with regard to identity. Ten participants each completed 2,880 trials in a 2-ABX match-to-sample task. On each trial, the participants saw a target and two response alternatives, both sampled with the same orientation bubbles (Duncan et al., 2017). One response choice was visually identical to the sample (i.e. the correct response), whereas the other was either on the same side (within-identity [WI]) or on the other side (between-identity [BI]) of the categorical barrier. Thus, the physical distance between the target and the different (WI or BI) alternative was always the same, but the perceptual distance was not. As expected, WI trials were more difficult than BI trials for all participants, as indicated by the higher number of bubbles needed for the former (WI: M=101.66, SD=83.50) than the latter (BI: M=15.85, SD=14.94). Orientation tuning in the BI and WI conditions was revealed by computing a weighted sum of the orientation filters across trials, using participant accuracies as weights. In the BI condition, horizontal orientations between 62 and 101 degrees were significantly associated with accuracy (Zcrit=2.101; Zmax=4.25, p<0.05, peak at 84 degrees), whereas no orientation reached the threshold in the WI condition (Zmax=1.41, p>0.05). Comparing horizontal tuning between the two conditions using a paired-sample t-test reveals an identity-specific horizontal tuning for faces, t(6) = 2.8, p < 0.05.


Poster session – Faces: Expressions, speech


Discrimination of facial expressions and pain through different viewing distances

Isabelle Charbonneau1, Joël Guérette1,2, Caroline Blais1, Stéphanie Cormier1, Fraser Smith, Daniel Fiset1;

1 Université du Québec en Outaouais, 2 Université du Québec à Montréal




Due to its important communicative function, a growing body of research has focused on the effective recognition of the facial expression of pain. Here, we investigated how pain, along with the basic emotions, is recognized at different viewing distances. Sixteen participants took part in an 8-expression categorization task (2400 trials per participant). We used the Laplacian Pyramid toolbox (Burt & Adelson, 1983) to create six reduced-size images simulating increasing viewing distances (i.e. 3.26, 1.63, 0.815, 0.41, 0.20, 0.10 degrees of visual angle). Unbiased hit rates (Wagner, 1993) were calculated to quantify the participants’ performance at each viewing distance. A 6 x 8 (Distance x Emotion) repeated-measures ANOVA revealed a significant interaction F(8.54, 128.22) = 15.97, p < .001 (η2=0.516). Separate repeated-measures ANOVAs examining the effect of Emotion at each Distance were conducted, and follow-up paired-sample t-tests (corrected p = 0.05/28) revealed significant differences between expressions. At the most proximal distance, we found a significant effect of Emotion F(7,105)=21.41, p<.001 (η2=0.588), where happiness and anger were the two best-recognized emotions (all p’s<.005), followed by disgust, pain, fear, surprise and sadness. Interestingly, we found surprise and happiness to be the best-recognized expressions at further distances (all p’s<.05), which is consistent with previous findings (Smith & Schyns, 2009). Most importantly, recognition of pain decreased with increasing viewing distance and pain was not well recognized at the furthest distance. Taking into account that changes in viewing distance modulate the spatial frequency content available to an observer by progressively peeling off high SFs as the stimulus moves further away, these results are consistent with recent findings suggesting that pain categorization and discrimination rely mostly on mid-SFs (Guérette et al., VSS 2017).
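The unbiased hit rate (Wagner, 1993) used above corrects raw accuracy for response bias using the full confusion matrix. A minimal sketch, with a hypothetical confusion matrix:

```python
import numpy as np

def unbiased_hit_rates(confusion):
    """Wagner's (1993) unbiased hit rate: for each expression i,
    Hu_i = C[i, i]**2 / (row_total_i * column_total_i), which discounts
    hits obtained by simply overusing a response category."""
    confusion = np.asarray(confusion, dtype=float)
    diag = np.diag(confusion)
    row = confusion.sum(axis=1)   # trials where expression i was shown
    col = confusion.sum(axis=0)   # trials where expression i was the response
    return diag ** 2 / (row * col)

# Hypothetical 3-expression confusion matrix (rows: shown, cols: response)
conf = [[18, 1, 1],
        [4, 14, 2],
        [0, 5, 15]]
hu = unbiased_hit_rates(conf)
```

In the study, one such rate per expression is computed at each simulated viewing distance before entering the ANOVA.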



Spatial frequencies underlying the detection of basic emotions and pain

Joël Guérette1,2, Isabelle Charbonneau1, Stéphanie Cormier1, Caroline Blais1, Daniel Fiset1;

1 Université du Québec en Outaouais, 2 Université du Québec à Montréal




Many studies have examined the role of spatial frequencies (SFs) in facial expression perception. However, although their detection and recognition have been proposed to rely on different perceptual mechanisms (Sweeny et al., 2013; Smith & Rossit, 2018), the SFs underlying these two tasks have never been compared. Thus, the present study aimed to compare the SFs underlying the detection and recognition of facial expressions of basic emotions and pain. Here, we asked 10 participants (1,400 trials per participant) to decide whether a stimulus randomly sampled with SF Bubbles (Willenbockel et al., 2010) corresponded to an emotion or a neutral face. Classification vectors for each emotion were computed as a weighted sum of the SFs sampled on each trial, with accuracies transformed into z-scores as weights. We then compared the SFs used in this task to those obtained in a previous study using the same stimuli and method but with a recognition task (Charbonneau et al., 2018). Overall, accurate detection of emotions was significantly associated with the use of low SFs (ranging from 3.33 to 6 cycles per face (cpf); Zcrit = 3.45, p < 0.05). Happiness was the only emotion relying on similar low SFs for both tasks; the other emotions were associated with the use of higher SFs in the recognition task. Interestingly, the detection of fear (ranging from 1.67 to 7 cpf, peaking at 4 cpf) and surprise (ranging from 1.33 to 6.33 cpf, peaking at 3.33 cpf) was associated with the lowest SF information. These results are consistent with the idea that low SFs represent potent information for the detection of emotions, especially those with a survival value such as fear, whereas the contribution of higher SFs is needed to discriminate between emotions for accurate recognition.
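The classification-vector computation described above, a weighted sum of the SF sampling profiles across trials with z-scored accuracies as weights, can be sketched as follows. The array shapes and toy inputs are assumptions for illustration, not the study's actual analysis code:

```python
import numpy as np

def sf_classification_vector(sf_samples, accuracies):
    """SF Bubbles-style classification vector.

    sf_samples : (n_trials, n_freqs) array, SF sampling weight per trial
    accuracies : (n_trials,) array of 0/1 trial correctness

    Each trial's SF profile is weighted by the z-scored accuracy, so SFs
    that co-occur with correct responses get positive weight.
    """
    acc = np.asarray(accuracies, dtype=float)
    z = (acc - acc.mean()) / acc.std()              # z-score the accuracies
    return np.asarray(sf_samples, dtype=float).T @ z  # (n_freqs,)

# Toy example: trials where the first SF band was sampled were correct
samples = [[1, 0], [1, 0], [0, 1], [0, 1]]
print(sf_classification_vector(samples, [1, 1, 0, 0]))
```

Positive values in the resulting vector indicate SFs whose presence predicted accurate detection; the significance threshold (Zcrit) reported in the abstract would then be applied to a smoothed, normalized version of this vector.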


Poster session – Faces: Gaze


Link between initial fixation location and spatial frequency utilization in face recognition

Amanda Estéphan1,2, Carine Charbonneau1, Virginie Leblanc1, Daniel Fiset1, Caroline Blais1;

1 Université du Québec en Outaouais, 2 Université du Québec à Montréal




Recent face perception studies have explored cultural and individual differences in visual processing strategies. Two main strategies, associated with distinct eye movement patterns, have been highlighted: global (or holistic) face processing involves fixations near the center of the face to facilitate simultaneous peripheral processing of key facial features (i.e. eyes and mouth), whereas local (or analytic) face processing involves fixations directed at those facial features (Chuk et al., 2014; Miellet et al., 2011). Interestingly, some studies have also found cultural and individual differences in the spatial frequencies (SFs) used for face identification, which seem to fit the eye movement data. For instance, East Asians use a more global fixation pattern (Blais et al., 2008) and lower SFs (Tardif et al., 2017) compared to Western Caucasians; myopes tend to use a more local fixation pattern and higher SFs compared to emmetropes (Estephan et al., 2018). However, whether a common underlying link between eye movements and SF use exists is still unknown. To investigate this question, the eye movements of 24 Canadian participants were monitored while they completed an Old/New face recognition task, and the SF Bubbles method (Willenbockel et al., 2010) was used to measure the same participants’ SF utilization during a face identification task. Fixation duration maps were computed for each participant using the iMap4 toolbox (Lao et al., 2017), and participants’ individual SF tuning peaks, obtained with SF Bubbles, were calculated. Group analyses based on participants’ initial fixation location were performed on SF tuning, and correlations between initial fixation location and SF tuning peaks were also calculated. In sum, our data failed to reveal a clear link between eye movement patterns and SF utilization. However, these results are preliminary, and more participants will be tested to increase statistical power. Nonetheless, our results suggest that the underlying relation between eye movements and SF use that could drive the previously observed contingencies between these two measures is likely more complex than anticipated.


Poster session – Faces: Social and Cultural Factors

Evaluating Trustworthiness: Differences in Visual Representations as a Function of Face Ethnicity

Francis Gingras1, Karolann Robinson1, Daniel Fiset1, Caroline Blais1;

1 Université du Québec en Outaouais




Trustworthiness is rapidly and automatically assessed based on facial appearance, and it is one of the main dimensions of face evaluation (Oosterhof & Todorov, 2008). Few studies have investigated how we evaluate trustworthiness in faces of other ethnicities. The present study aimed at comparing how individuals imagine a trustworthy White or Black face. More specifically, the mental representations of a trustworthy White and Black face were measured in 30 participants using a Reverse Correlation task (Mangini & Biederman, 2004). On each trial (N=500 per participant), two stimuli, created by adding sinusoidal white noise to an identical base face (White or Black, depending on the experimental condition), were presented side-by-side. The participant’s task was to decide which of the two looked most trustworthy. The noise patches corresponding to the chosen stimuli were summed to produce a classification image, representing the luminance variations associated with a percept of trustworthiness. A statistical threshold was found using the Stat4CI’s cluster test (Chauvin et al., 2005), a method that corrects for the multiple comparisons across all pixels while taking into account the spatial dependence inherent to coherent images (tcrit=3.0, k=246, p<0.025). Results show that for a White face, perception of trustworthiness is associated with a lighter eye region; for a Black face, perception of trustworthiness is associated with a darker right eye and a lighter mouth. Statistically comparing both classification images (tcrit=3.0, k=246, p<0.025) revealed that the eye region was more important in judging trustworthiness of White faces, while the mouth region was more important for Black faces. The present study shows that facial traits used to form the mental representation of trustworthiness differ with face ethnicity. More research will be needed to verify if this finding generalizes across populations of different ethnicities.
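The reverse-correlation step described above, in which the noise patches of the chosen stimuli are summed into a classification image, can be sketched as follows. The array layout is an assumption for illustration; the sinusoidal noise generation and the Stat4CI cluster test used in the study are not reproduced here:

```python
import numpy as np

def classification_image(noise_patches, chosen):
    """Reverse-correlation classification image (Mangini & Biederman, 2004).

    noise_patches : (n_trials, 2, h, w) noise added to the two stimuli
                    shown side-by-side on each trial
    chosen        : (n_trials,) index (0 or 1) of the stimulus the
                    participant judged most trustworthy

    Summing the chosen noise patches reveals the luminance variations
    that drive the trustworthiness percept.
    """
    noise = np.asarray(noise_patches, dtype=float)
    idx = np.asarray(chosen)
    picked = noise[np.arange(len(idx)), idx]   # (n_trials, h, w)
    return picked.sum(axis=0)
```

In the study, the resulting image was then thresholded with a pixel-wise cluster test to identify regions (e.g. the eye or mouth area) reliably associated with trustworthiness judgments.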


Variation of empathy in viewers impacts facial features encoded in their mental representation of pain expression

Marie-Pier Plouffe-Demers1,2, Camille Saumure1, Daniel Fiset1, Stéphanie Cormier1, Miriam Kunz3, Caroline Blais1;
1 Université du Québec en Outaouais, 2 Université du Québec à Montréal, 3 University of Groningen





The impact of gender on visual strategies underlying the discrimination of facial expressions of pain

Camille Saumure1, Marie-Pier Plouffe-Demers1,2, Daniel Fiset1, Stéphanie Cormier1, Miriam Kunz3, Caroline Blais1;
1 Université du Québec en Outaouais, 2 Université du Québec à Montréal, 3 University of Groningen




Previous studies have found a female advantage in the recognition/detection of pain expressions (Hill & Craig, 2004; Prkachin et al., 2004), although this effect is not systematic (Simon et al., 2008; Riva et al., 2011). However, the impact of gender on the visual strategies underlying pain expression recognition remains unexplored. In this experiment, 30 participants (15 males) were tested using the Bubbles method (Gosselin & Schyns, 2001), which randomly samples facial features across five spatial frequency (SF) bands to infer what visual information is successfully used. On each of the 1,512 trials, two bubblized faces, sampled from 8 avatars (2 genders; 4 levels of pain intensity), were presented to participants, who identified the one expressing the higher pain level. Three difficulty levels, determined by the percentage of pain difference between the two stimuli (i.e. 100%, 66% or 33%), were included. The number of bubbles needed to maintain an average accuracy of 75% was used as a performance measure (Royer et al., 2015). Results indicated a trend towards a higher number of bubbles needed by males (M = 57.7, SD = 30.4) than by females (M = 40.2, SD = 23.2), t(28) = 2.02, p = 0.05. Moreover, this difference was significant at the highest level of difficulty, t(28) = 2.22, p = 0.04, suggesting that pain discrimination was more difficult for males (M = 77.6, SD = 36.8) than females (M = 52.3, SD = 24.5). Classification images, generated by calculating a weighted sum of the bubbles’ positions (with accuracies transformed into z-scores as weights), revealed that females made significantly greater use of the lowest SF band (Zcrit = 2.7, p < 0.05; 5.4–2.7 cycles per face). These results suggest that gender impacts both the performance and the visual strategies underlying pain expression recognition.