Once again this year, several LPVS students took part in the annual meeting of the Vision Sciences Society, which brings together researchers from a wide range of disciplines contributing to scientific advances in vision, including visual and perceptual psychology, neuroscience, computational vision, and cognitive psychology. The scientific content of the presentations reflects the diversity of topics in the field, from visual coding to perception, through the visual control of action and the development of new methodologies in cognitive psychology, computer vision, and neuroimaging.
Here is an overview of the projects presented there by the LPVS.
Individual differences in face processing ability and consistency in visual strategies
Jessica Royer1, Isabelle Charbonneau1, Gabrielle Dugas1, Valerie Plouffe1, Caroline Blais1, Daniel Fiset1;
1Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais
Individual differences in face processing ability are a useful tool to better understand the cognitive and perceptual mechanisms involved in optimal face processing (e.g., Yovel et al., 2014). We recently showed, using the Bubbles technique (Gosselin & Schyns, 2001), that these individual differences are linked to a quantitative increase in the use of the eye area of faces, a feature known to be highly diagnostic for accurate face recognition (Royer et al., VSS 2016 meeting). However, no specific visual strategy was found in observers with lower recognition ability, possibly because these individuals use inconsistent visual strategies. This inconsistency could manifest at different levels, namely (1) between subjects, i.e. lower ability individuals rely on idiosyncratic recognition strategies, or (2) within subjects, i.e. lower ability individuals show an unstable pattern of diagnostic information throughout the Bubbles task. The present experiment directly investigates these propositions. Fifty participants (28 women) first completed 2000 trials of a 10-alternative forced-choice face recognition task in which the stimuli were randomly sampled using Bubbles. All participants also completed three common face matching and recognition tests to quantify their face processing ability. First, between-subject consistency in visual strategies among observers with similar levels of identification performance was strongly correlated with general face processing ability (r = .69, p < .001). Moreover, this inconsistency in visual strategies was also present at the within-subject level: face processing ability was significantly correlated with each observer's consistency in their own visual strategies throughout the Bubbles task (r = .42, p = .002). These results demonstrate that while higher ability face recognizers consistently use a similar and stable strategy to recognize faces, lower ability individuals instead rely on idiosyncratic and varying strategies, possibly reflecting the imprecision of their facial representations.
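For readers less familiar with the Bubbles approach, here is a minimal, hypothetical Python sketch of how a classification image and the consistency measures described above could be computed. The published analyses rely on the Bubbles and Stat4CI toolboxes; the array shapes, variable names, and random data below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def classification_image(masks, accuracy):
    """Classification image: correlate each pixel's sampling (Bubbles mask)
    with trial-by-trial accuracy (weighted-sum formulation)."""
    z_acc = (accuracy - accuracy.mean()) / accuracy.std()
    return np.tensordot(z_acc, masks, axes=1) / len(z_acc)   # (H, W)

def consistency(ci_a, ci_b):
    """Pearson correlation between two classification images."""
    return np.corrcoef(ci_a.ravel(), ci_b.ravel())[0, 1]

# Hypothetical data: 2000 trials, 128x128 Bubbles masks, binary accuracy.
rng = np.random.default_rng(0)
masks = rng.random((2000, 128, 128))                 # apertures shown on each trial
accuracy = rng.integers(0, 2, 2000).astype(float)    # correct / incorrect

# Within-subject consistency: compare the first and second halves of the task.
ci_first = classification_image(masks[:1000], accuracy[:1000])
ci_second = classification_image(masks[1000:], accuracy[1000:])
print("within-subject consistency:", consistency(ci_first, ci_second))
# Between-subject consistency would correlate one observer's CI with the
# average CI of other observers with a similar level of performance.
```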
View the poster
Eye Left the Right Face: The Impact of Central Attentional Resource Modulation on Visual Strategies During Facial Expression Categorization
Justin Duncan1,2, Gabrielle Dugas1, Benoit Brisson3, Caroline Blais1, Daniel Fiset1;
1Université du Québec en Outaouais, 2Université du Québec à Montréal, 3Université du Québec à Trois-Rivières
The categorization of facial expressions is impaired when central attentional resources are shared with an overlapping task (Tomasik et al., 2009). Using the psychological refractory period (PRP) dual-task paradigm, we verified whether the unavailability of central resources precludes the use of normal visual strategies. Twenty subjects took part in the study. In the first task (T1), they categorized a sound (150 ms) as either low (200 Hz or 400 Hz) or high (800 Hz or 1,600 Hz) frequency. In the second task (T2), participants categorized the facial expressions of anger, disgust, fear, happiness, sadness, and surprise taken from the Karolinska face database (Lundqvist, Flykt & Öhman, 1998). External facial cues were hidden with an oval that blended with the background. Faces were sampled with Bubbles (Gosselin & Schyns, 2001) and presented for 150 ms. T1 and T2 presentation was separated by a stimulus onset asynchrony (SOA) of either 300 ms (central resource overlap) or 1,000 ms (no overlap). Participants were instructed to respond as rapidly and as accurately as possible to both tasks, and not to wait for T2 onset before responding to T1. We performed a linear regression of the Bubbles coordinates on T2 performance. Statistical significance was determined with the Stat4CI toolbox (Chauvin et al., 2005). The categorization of angry, sad, fearful, and surprised expressions strongly correlated with the utilization of both eyes and the mouth, at short and long SOAs (Z > 3.4, p < .05). Utilization of the left eye, however, was significantly reduced at the short, relative to the long, SOA (Z > 2, k > 2,347 pixels, p < .05). Interestingly, whereas participants showed a bias favoring the left side of the face at the long SOA, they favored the right side at the short SOA. Participants always fixated the center of the face stimuli. Thus, these results could hint at hemispheric differences in sensitivity to the modulation of central attentional resources.
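As a rough illustration of how a Bubbles-sampled face stimulus can be generated (a single-scale version for simplicity; the original Gosselin & Schyns technique samples several spatial-frequency bands), here is a hypothetical Python sketch. The aperture size, number of bubbles, and placeholder image are assumptions for illustration only.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Sum of Gaussian apertures centred at random pixels, clipped to [0, 1]."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

rng = np.random.default_rng(1)
face = rng.random((256, 256))          # stand-in for a normalized face image
mask = bubbles_mask(face.shape, n_bubbles=40, sigma=12, rng=rng)
# Revealed regions show the face; the rest is replaced by the mean luminance.
stimulus = mask * face + (1.0 - mask) * face.mean()
```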
Similar visual strategies are used to recognize spontaneous and posed facial expressions
Camille Saumure1, Marie-Pier Plouffe-Demers1, Daniel Fiset1, Caroline Blais1;
1Département de psychoéducation et de psychologie, Université du Québec en Outaouais
Most studies bearing on the visual strategies underlying facial expression recognition have used posed expressions (PE). However, evidence suggests that these expressions differ from spontaneous expressions (SE) in terms of appearance, at least with regard to intensity (Ekman & Friesen, 1969) and facial asymmetry (Ross & Pulusu, 2013). In this experiment, the Bubbles method (Gosselin & Schyns, 2001) was used to compare the facial features used to recognize both kinds of expressions. Twenty participants were asked to categorize SE and PE of four basic emotions (disgust, happiness, surprise, sadness). Pictures consisted of 21 identities taken from the MUG database (Aifanti et al., 2010). The amount of facial information needed to reach an accuracy rate of 63% was higher with SE (M=64.0, SD=15.6) than with PE (M=34.4, SD=8.7) [t(19) = -15.07, p < 0.001], indicating that SE were harder to recognize. Classification images of the facial features used by participants to recognize each emotion were generated separately for SE and PE. Statistical thresholds were found with Stat4CI (Chauvin et al., 2005; Zcrit=3.0; p < 0.025). Similar features were used for the recognition of SE and PE of disgust, happiness and surprise, although the Z scores reached significantly higher values with PE. With the expression of sadness, the information contained in the eye region was only useful for PE. An ideal observer analysis confirmed that the most diagnostic features for the recognition of happiness, surprise and disgust are very similar for PE and SE, and that the eye area is less diagnostic for spontaneous sadness. These results suggest that the utilization of facial features underlying the recognition of SE and PE is very similar for most basic expressions, although some qualitative differences are observed for the expression of sadness.
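The ideal observer analysis mentioned above can be thought of as a template matcher that sees exactly the same Bubbles-revealed pixels as the participants. Here is a minimal, hypothetical Python sketch of such an observer; the templates, mask density, and noise level are made-up stand-ins, not the stimuli actually used.

```python
import numpy as np

def ideal_observer_choice(stimulus, mask, templates):
    """Pick the expression whose template best correlates with the stimulus
    over the pixels revealed by the Bubbles mask."""
    revealed = mask > 0.5
    scores = {label: np.corrcoef(stimulus[revealed], tpl[revealed])[0, 1]
              for label, tpl in templates.items()}
    return max(scores, key=scores.get)

# Hypothetical templates for the four expressions and one noisy, masked trial.
rng = np.random.default_rng(2)
templates = {e: rng.random((128, 128))
             for e in ("disgust", "happiness", "surprise", "sadness")}
mask = (rng.random((128, 128)) > 0.8).astype(float)      # ~20% of pixels revealed
trial = templates["sadness"] + 0.5 * rng.normal(size=(128, 128))
print(ideal_observer_choice(trial, mask, templates))      # most likely "sadness"
```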
View the poster
Visual representation of age groups as a function of ageism levels
Valerie Plouffe1, Youna Dion-Marcoux1, Daniel Fiset1, Hélène Forget1, Caroline Blais1;
1Département de psychoéducation et de psychologie, Université du Québec en Outaouais
Prejudice against the elderly is a growing concern and has been shown to have many negative social and individual consequences (European Social Survey, 2012). At last year's VSS (Dion-Marcoux et al., 2016), we presented a study showing that ageism modulates the mental representation of a prototypical young and old face: individuals with higher prejudice represented a young face as being older, and an old face as being younger, than individuals with less prejudice. The present study verified whether this finding reflects ageism modifying the boundaries used to categorize a person as young or old, or ageism modifying the representation of facial aging throughout life. Thirty young adults took part in three tasks: an Implicit Association Test, an age categorization task, and a reverse correlation task. In the reverse correlation task, participants had to decide which of three faces embedded in white noise was most prototypical of the appearance of a 20-, 40-, 60- or 80-year-old face (block design). The mental representations of the ten participants with the highest vs. lowest ageism were averaged and presented to 30 individuals who estimated their age. Results show a significant interaction between ageism and face group on perceived age [F(3, 87)=17.17, p < 0.05]. Although participants with higher prejudice had a significantly older representation of 40-year-old faces [t(58)=3.077, p=0.0032], the pattern reversed for 80-year-old faces [t(58)=-2.317, p=0.024], which they represented as younger. The boundary used in the age categorization task did not differ as a function of ageism [t(18)=0.18, ns]. These results suggest that highly prejudiced individuals represent different age groups (40-, 60- and 80-year-olds) of other-age faces as less dissociable from one another than lower prejudice individuals.
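For context, reverse correlation estimates a mental representation by averaging the noise fields of the stimuli an observer selects. The sketch below is a hypothetical Python illustration with a simulated observer; the base image, noise level, and selection rule are assumptions, not the experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
base_face = rng.random((256, 256))       # stand-in for the base face image

def reverse_correlation_ci(n_trials, pick_fn):
    """Average the noise fields of the stimuli chosen as 'most prototypical'."""
    chosen_noise = []
    for _ in range(n_trials):
        noise = rng.normal(0.0, 0.15, size=(3, 256, 256))
        stimuli = base_face + noise          # three noisy alternatives
        pick = pick_fn(stimuli)              # index of the observer's choice
        chosen_noise.append(noise[pick])
    return np.mean(chosen_noise, axis=0)     # the classification image

# Simulated observer: simply prefers the brightest alternative (placeholder rule).
ci = reverse_correlation_ci(500, lambda s: int(np.argmax(s.mean(axis=(1, 2)))))
estimated_representation = base_face + ci    # can be rendered and age-rated
```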
View the poster
Morphing Angelina into Jessica reveals identity specific spatial frequency tuning for faces
Gabrielle Dugas1,2, Isabelle Charbonneau1,2, Jessica Royer1,2, Caroline Blais1,2, Benoit Brisson3, Daniel Fiset1,2;
1Université du Québec en Outaouais, 2Centre de Recherche en Neuropsychologie et Cognition, 3Université du Québec à Trois-Rivières
Many studies have investigated the role of spatial frequencies (SF) in face processing. However, the majority have used tasks in which it is difficult to dissociate the impact of physical and identity-specific information. To investigate this question, we first asked 20 participants to classify stimuli taken from 40 morph continua between pairs of famous actors. Sixteen continua reached our categorical perception criteria, i.e. the stimulus at ⅓ along the morph continuum was reliably identified as the first identity whereas the stimulus at ⅔ was reliably identified as the second identity. In the second part of the study, seven participants performed a match-to-sample task (1248 trials per condition) in which the response stimuli were sampled with SF Bubbles (Willenbockel et al., 2010). On each trial, the participants saw a target (either the ⅔-⅓ or the ⅓-⅔ morph of a given continuum) and two response alternatives, both sampled with the same Bubbles. One response choice was visually identical to the sample (i.e. the correct response) whereas the other was taken either from the same perceived identity (e.g. the 1-0 morph for the ⅔-⅓ target; within-identity trial [WIT]) or from the other perceived identity (e.g. the ⅓-⅔ morph for the ⅔-⅓ target; between-identity trial [BIT]). As expected, WIT trials were more difficult than BIT trials for all participants. Multiple regression analyses on the sampled SFs and the participants' reaction times (using a median split) were used to create classification images for WIT and BIT trials separately. Comparing the diagnostic SFs for these two conditions reveals identity-specific SF tuning for faces: a spatial frequency band between 4.9 and 8.1 cycles per face (cpf; Zcrit=3.45, p < 0.025; peaking at 5.6 cpf) is specifically dedicated to identifying known faces. These data offer interesting insight into the visual granularity at which identity is represented in memory.
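SF Bubbles randomly filters the spatial-frequency content of each stimulus so that, across trials, performance can be regressed on which frequencies happened to be available. Below is a simplified, hypothetical Python sketch of such a filter; the published method (Willenbockel et al., 2010) samples a log-SF axis with calibrated smoothing, so the bubble count, smoothing width, and indexing here are rough assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def sf_bubbles_filter(img, n_bubbles=45, smooth_sigma=5, rng=None):
    """Apply a random, rotationally symmetric spatial-frequency filter."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    n_bins = max(h, w) // 2
    # Random impulses along the SF axis, smoothed into a continuous profile.
    profile = np.zeros(n_bins)
    profile[rng.integers(0, n_bins, n_bubbles)] = 1.0
    profile = gaussian_filter1d(profile, smooth_sigma)
    profile /= profile.max() + 1e-12
    # Map each 2D Fourier coefficient to its radial frequency bin.
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fy ** 2 + fx ** 2)                       # cycles per pixel
    bins = np.clip((radius / 0.5 * (n_bins - 1)).astype(int), 0, n_bins - 1)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * profile[bins]))
    return filtered, profile   # keep the profile: it is the trial's predictor

rng = np.random.default_rng(4)
face = rng.random((256, 256))                 # stand-in for a face image
stimulus, sf_profile = sf_bubbles_filter(face, rng=rng)
```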
View the poster
Spatial frequency utilization during the recognition of static, dynamic and dynamic-random facial expressions
Marie-Pier Plouffe Demers1, Camille Saumure Régimbald1, Daniel Fiset1, Caroline Blais1;
1Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais
Previous studies have revealed that dynamic facial expressions (DFE) are better recognized than static facial expressions (SFE; Ambadar et al., 2005). We have recently demonstrated that DFE can be recognized while fixating the features less, and relying more on lower spatial frequencies (SF), than SFE (Saumure et al., VSS2016). Since biological motion can be processed in extrafoveal vision (Gurnsey et al., 2008), the information provided by the motion in DFE may decrease the need to fixate the features and extract higher SF. This hypothesis predicts that dynamic-random facial expressions (D-RFE), created by altering the biological motion of the original DFE (i.e., randomizing the frame order), should be processed similarly to SFE. In this experiment, the SF utilization of 27 participants was measured with SFE, DFE and D-RFE using SF Bubbles (Willenbockel et al., 2010). Participants categorized pictures and videos (block design) of the six basic facial expressions and neutrality, presented for a duration of 450 ms. SF tunings were obtained by conducting a multiple regression analysis on the SF filters and accuracies across trials. Statistical thresholds were found with Stat4CI (Chauvin et al., 2005). SF bands peaking at 16.6 cycles per face (cpf), 14 cpf, and 15.6 cpf were found with SFE, DFE and D-RFE, respectively (Zcrit=2.84, p < 0.05). Low SFs (3.2 to 4.2 cpf) were significantly more utilized with D-RFE than with SFE, and mid-to-high SFs (> 18.6 cpf and 18.9 to 36.8 cpf) were significantly more utilized with SFE than with D-RFE and DFE, respectively (Zcrit=3.09, p < 0.025). A marginal trend also indicated a higher utilization of low SFs with DFE than with SFE (Zdynamic-static=2.57). These results suggest a reliance on lower SFs even when biological motion was altered.
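The SF tuning analysis described above boils down to relating trial-by-trial accuracy to the random SF filter applied on each trial. Here is a minimal, hypothetical Python sketch of that step, using random placeholder data and a weighted-sum formulation rather than the full toolbox pipeline.

```python
import numpy as np
from scipy.stats import zscore

def sf_tuning(profiles, accuracy):
    """Weighted-sum regression of accuracy on the per-trial SF filter profiles:
    returns one weight per SF bin (higher = more useful frequency band)."""
    z_prof = zscore(profiles, axis=0)            # trials x n_bins
    z_acc = zscore(accuracy.astype(float))
    return z_prof.T @ z_acc / len(z_acc)

# Hypothetical data: 900 trials, 128 SF bins, binary accuracy.
rng = np.random.default_rng(5)
profiles = rng.random((900, 128))                # the filter kept on each trial
accuracy = rng.integers(0, 2, 900)
tuning = sf_tuning(profiles, accuracy)
peak_bin = int(np.argmax(tuning))                # would be converted to cycles/face
```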
View the poster
Spatial frequencies for rapid and accurate race categorisation in Caucasian participants
Isabelle Charbonneau1, Gabrielle Dugas1, Jessica Royer1, Caroline Blais1, Benoit Brisson2, Daniel Fiset1;
1Université du Québec en Outaouais, 2Université du Québec à Trois-Rivières
Race categorisation is faster for other-race (OR) than same-race (SR) faces (Caldara et al., 2004). Some researchers propose that face identification prevails for SR faces (but not for OR faces), thus decreasing race categorisation proficiency for one's own race (Hugenberg et al., 2010). To gain a better understanding of this phenomenon, we investigated the perceptual basis of race categorisation. Sixteen Caucasian participants were asked to categorize, rapidly and correctly, the race of 50 Caucasian and 50 Afro-American faces (400 trials per race). On each trial, the spatial frequencies (SF) of the stimuli were randomly sampled using SF Bubbles (Willenbockel et al., 2010). Small amounts of white noise were added to each stimulus to keep accuracy at ~90%. Multiple regression analyses were conducted on the sampled SFs and the participants' speed (using a median split) to create group SF classification images (CI) for Caucasian and Afro-American faces separately. SFs between 1.7 and 9.3 cycles per face (cpf; peaking at 3.4 cpf; peaks were calculated using a 50% area spatial frequency measure) were significantly correlated with response speed for Caucasian faces, whereas SFs between 4.3 and 23.7 cpf (peaking at 10.3 cpf) were significantly correlated with response speed for Afro-American faces. Subtracting one CI from the other showed that rapid categorisation of Caucasian faces was significantly more correlated with the availability of low SFs (< 3.3 cpf; Zcrit=3.45, p < 0.025), whereas the availability of medium/high SFs (between 8.3 and 34.7 cpf; Zcrit=3.45, p < 0.025) led to fast categorisation of Afro-American faces. These results demonstrate that participants categorized SR faces rapidly when the SFs important for face identification (i.e. medium SFs) were removed from the stimulus, whereas rapid OR face categorization can be based on medium SFs.
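The speed-based classification images above can be approximated by a median split on response times: fast trials weight the trial's SF filter positively, slow trials negatively, and the two race conditions are then subtracted. Below is a hypothetical Python sketch with placeholder data; the real analysis z-scores the result and applies a Stat4CI cluster test before interpretation.

```python
import numpy as np

def rt_classification_vector(profiles, rts):
    """Median-split analysis: +1 weight for fast trials, -1 for slow trials,
    applied to the standardized per-trial SF filter profiles."""
    weights = np.where(rts <= np.median(rts), 1.0, -1.0)
    z_prof = (profiles - profiles.mean(axis=0)) / profiles.std(axis=0)
    return z_prof.T @ weights / len(weights)

# Hypothetical data: 400 trials per race, 128 SF bins, gamma-distributed RTs.
rng = np.random.default_rng(6)
prof_same, rt_same = rng.random((400, 128)), rng.gamma(2.0, 0.3, 400)
prof_other, rt_other = rng.random((400, 128)), rng.gamma(2.0, 0.3, 400)
ci_difference = (rt_classification_vector(prof_same, rt_same)
                 - rt_classification_vector(prof_other, rt_other))
```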
View the poster
Impact of myopia on visual attention and the potential link with cultural differences in visual perception
Caroline Blais1, Hana Furumoto-Deshaies1, Marie-Pier Plouffe-Demers1, Amanda Estéphan1, Daniel Fiset1;
1Psychoéducation & Psychologie, Université du Québec en Outaouais
Easterners and Westerners have been shown to differ in many visual perceptual tasks, and evidence supports a broader allocation of attention among Easterners than Westerners. For instance, Easterners show a larger global advantage than Westerners in a Navon task (McKone et al., 2010); they fixate the eyes and mouth less, and the centre of the face more, during face processing (Blais et al., 2008); and they tend to process faces in lower spatial frequencies (Tardif et al., in press). Although it has been proposed that these perceptual differences emerge from the cultural values (individualistic vs. collectivistic) endorsed by each culture (Nisbett et al., 2001), a recent study did not find links between those cultural values and the eye fixation pattern during face processing (Ramon et al., VSS2016). In this study, we explored another, lower-level hypothesis that could explain the perceptual differences observed between Easterners and Westerners: the impact of myopia on visual attention. Recent evidence suggests that myopes are less affected by crowding in peripheral vision (Caroll et al., VSS2016). Since the prevalence of myopia is higher among Chinese than Caucasian individuals (Lam et al., 2012), this could potentially explain the visual perception differences observed between Easterners and Westerners. The ability to detect global versus local target letters was measured in myopes (N=12) and emmetropes (N=17) using Navon's paradigm. No global/local bias difference was found between the groups [t(28)=1.08, p=0.29]. These results do not support the hypothesis that the difference in the prevalence of myopia between the two groups underlies the larger global advantage observed in Easterners. More studies will allow us to verify whether myopia can explain the cultural differences observed in fixation patterns and spatial frequency utilization during face perception.
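For illustration, the global/local bias compared above can be summarized as a per-participant global advantage score (local RT minus global RT) and tested across groups with an independent-samples t-test. The sketch below uses simulated reaction times; the means, spreads, and trial counts are arbitrary assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

def global_advantage(rt_global, rt_local):
    """Positive values indicate faster responses to global than local targets."""
    return float(np.mean(rt_local) - np.mean(rt_global))

# Simulated per-participant reaction times (seconds) for the two groups.
rng = np.random.default_rng(7)
myopes = [global_advantage(rng.normal(0.55, 0.05, 60), rng.normal(0.60, 0.05, 60))
          for _ in range(12)]
emmetropes = [global_advantage(rng.normal(0.55, 0.05, 60), rng.normal(0.60, 0.05, 60))
              for _ in range(17)]
t_stat, p_value = ttest_ind(myopes, emmetropes)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```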
View the poster