1. Introduction

The acquisition of a sign language begins long before infants produce their first signs, and visual prosody is a crucial component of language that is expected to develop early on and to play a role in the acquisition of other aspects of grammar and vocabulary, much like auditory prosody in spoken languages (Gervain et al., 2020). In sign languages, prosodic properties such as intonation and phrasing are conveyed mainly by nonmanuals, e.g., eyebrow raise, eyebrow frown, eye squint and head position (Dachkovsky & Sandler, 2009; Sandler et al., 2020). Facial elements, for instance, are known to convey visual prosody and to be part of adults’ sign language systems. However, still very little is known about the early development of prosody in sign languages, and to our knowledge the early perception of intonational contrasts in sign languages has not yet been investigated. This study is part of a larger ongoing investigation aiming at (i) addressing the methodological challenges in the study of early perception of prosody by deaf infants, and (ii) determining the time course of the acquisition of intonation-like contrasts in Portuguese Sign Language (LGP) by deaf infants. Given that it is by now well established that hearing infants have a remarkable ability to discriminate speech sounds, as well as prosodic cues, before the end of the first year of life (Gervain et al., 2020; Werker, 2024), we also want to contribute to the understanding of how much of this outstanding ability extends to language-related visual cues, both in deaf and hearing infants (e.g., Stone et al., 2017; Wilbourn & Casasola, 2007).

In this paper, we address goal (i) above by describing a new procedure, adapted from an experimental method used for studying early prosodic discrimination in hearing infants (Frota et al., 2014), with the aim of investigating early prosodic discrimination in deaf infants (section 2). The results of a pilot experiment with two infants are presented in section 3. In section 4, we discuss these preliminary findings and their implications for further research. Before that, in the remainder of this section, we provide background information relevant to the current investigation.

1.1. Language input and age of acquisition in deaf infants

There are a number of specific challenges to the study of deaf infants’ perception. It is well established that, when a typically developing child is exposed to a natural and rich language input from birth, “the course of language development is largely predictable” (Lillo-Martin & Henner, 2021, p. 403). However, early fluent input is not accessible to the majority of deaf infants, as most deaf children are born into non-deaf homes. Estimates based on a national survey in the United States from the turn of the millennium indicate that less than 5% of deaf and hard of hearing individuals have at least one deaf parent (Mitchell & Karchmer, 2004). Despite the newborn hearing screening programs currently in place in many countries, deafness is not always promptly diagnosed; for instance, it can be acquired early in life, after screening (Oliveira et al., 2019). Besides late diagnosis, delays in exposure to appropriate linguistic input may result from caregivers’ lack of knowledge of a sign language; caregivers may choose to learn a sign language to communicate with their child, a process that inevitably takes time to produce its first effects, and it may also be difficult for them to provide the child with opportunities to interact with fluent sign language users early on. Often, parents do not promote sign language input to their infant; instead, they opt for technological devices that give the child access (only) to spoken language, such as cochlear implants, which require a surgical intervention usually not performed before 9 months of age (cf. current FDA indications for cochlear implantation, at 9 months or above; Culbertson et al., 2022). Even after this intervention, there are losses in the quality of language input relative to typical hearing conditions, and training is required (e.g., Lillo-Martin & Henner, 2021, for a review).

Due to these various circumstances, deaf children very often gain access to (quasi-)appropriate linguistic input – that is, input in a modality that they can effectively perceive – only several months or even years after birth (Lillo-Martin & Henner, 2021; Meier, 2016). Studies of deaf children with cochlear implants show that the absence of appropriate linguistic input during the first year(s) of life causes difficulties in various areas of language development, such as vocabulary growth, phonological processes, and the acquisition of syntactic dependencies (Culbertson et al., 2022; Lynce et al., 2019; Moita, 2022; Nicholas & Geers, 2006; Quer & Steinbach, 2019).

It is estimated that “congenital deafness or early acquired deafness affects 1 to 3 out of 1000 newborns without risk factors and 20 to 40 out of 1000 newborns with risk factors” (Oliveira et al., 2019, p. 767). In Portugal, a survey carried out in 15 hospitals and maternity units of the National Health Service found that 2.5 and 1.5 per 1000 newborns were diagnosed with congenital/early acquired deafness in 2014 and 2015, respectively (Oliveira et al., 2019). Despite these small numbers, according to official Portuguese channels, around 30,000 deaf people use Portuguese Sign Language (LGP) to communicate,1 and LGP is one of the three official languages of Portugal.2 Studies on LGP tend to focus on deaf signers from deaf families (e.g., Amaral et al., 1994; Bettencourt, 2015; Carmo et al., 2007). Nevertheless, as in other parts of the world, deaf individuals from deaf homes represent only a very small portion of the deaf population in Portugal, while the vast majority are born into hearing families and are not exposed to sign language early. Importantly, studies with deaf infants born into hearing homes are also relevant because they contribute to a better characterization of various aspects of language input in language acquisition (such as the effects of input quantity, quality, and age of exposure in a perceptible modality), bearing on topics as fundamental as the critical period for language acquisition (Meier, 2016; Quer & Steinbach, 2019). In addition, studies on this population may contribute to the development of remediation strategies to compensate for the effects of delayed input, with an impact on health, society, and individuals’ well-being.

For the reasons above, in our study on the early acquisition of prosody in LGP, we are focusing on deaf infants below 12 months of age, either from hearing homes (with early exposure to visual but not sound cues), or with early access to LGP.

1.2. Prosody in sign languages

Research in recent decades has shown that phonological organization and functioning in spoken and sign languages share core properties. In spoken languages, segments and their component features (i.e., consonants and vowels, produced by the action of the speech articulators) are the fundamental units that constitute the form of words; the same function is accomplished in sign languages by signed forms, produced mainly with the action of the manual articulators, in particular the dominant hand, its shape, orientation, movement and location relative to the body, head, and face areas (e.g., Fenlon et al., 2018; van der Hulst & van der Kooij, 2021; Wilbourn & Casasola, 2007). Prosody, in turn, is conveyed in spoken languages by variations in duration, intensity, and pitch or frequency, marking prominence, speech chunking, sentence types and other linguistically relevant information, and there is broad consensus that the same functions are achieved in sign languages by variations in the rhythm, size and duration of gestures, pauses (holds), body or head movements, and facial expressions (Brentari & Crossley, 2002; Cruz et al., 2019; Dachkovsky & Sandler, 2009; Nespor & Sandler, 1999; Sandler, 2010, and many others).

In sign languages, prosodic properties are conveyed mainly by facial expressions and head movement (involving, for instance, the eyebrows, eyes, mouth, cheeks, and head position). These nonmanual components mark prosodic constituents and serve various discourse functions, such as distinguishing among sentence types, like declarative sentences and wh- and yes/no questions, and marking information structure (e.g., focus/background, topic/comment, given/new; Dachkovsky & Sandler, 2009; Pfau & Quer, 2010; Sandler et al., 2020). In LGP, Cruz et al. (2019) have shown that nonmanuals distinguish sentence types: statements are typically produced without nonmanual marking, and when head movements do occur, the dominant one is an up-down head nod; in contrast, information-seeking yes-no questions are usually marked with nonmanuals, namely eyebrow lowering together with an up-down head nod. Scholars have also shown that the nonmanual components of sign languages cue prosodic constituents (the prosodic word, the phonological phrase and the intonational phrase) (Brentari & Crossley, 2002; Brentari et al., 2015, and references therein; Crasborn et al., 2008; Nespor & Sandler, 1999; Sandler, 2010).

It is important to bear in mind, nevertheless, that besides prosody, nonmanuals, and facial expressions in particular, can also signal other types of linguistic information: they can mark word class (e.g., adverbial and adjectival function), change sentence polarity, mark person distinctions in pronominals, or constitute obligatory components of signs’ lexical (phonological) form, in which case they are often iconic, reflecting the meaning of the sign (Pfau & Quer, 2010). Moreover, beyond language proper, facial expressions play a key role in conveying emotions, irrespective of language modality (e.g., Dachkovsky & Sandler, 2009; Elliott & Jacobs, 2013).

1.3. Early prosodic development in sign language

There is currently a knowledge gap regarding early prosodic development in sign language. Nevertheless, it seems clear that when deaf babies are exposed to a sign language from birth, sign language development progresses along a timeline similar to that of spoken language, possibly following the same developmental milestones (see Lillo-Martin & Henner, 2021 and Meier, 2016, for reviews). Two classical milestones, briefly described below, illustrate this parallelism.

Babbling is commonly observed in hearing babies, who start producing language-like sound sequences at around 6 months of age, increasingly similar to the surrounding language as time proceeds, before and during the first-word production period (e.g., Laing & Bergelson, 2020; Vihman et al., 1985). There is large consensus that babbling is a fundamental stage in language development, allowing babies to practice the articulation of the speech elements of the language being acquired, as well as promoting social and communicative interaction with caregivers (e.g., Lillo-Martin & Henner, 2021; Meier, 2016; Petitto & Marentette, 1991). Similarly, studies have shown that deaf infants exposed to signing also produce, at around 10–14 months, meaningless manual gestures which reflect the components found in their ambient sign language (Pichler, 2012). Moreover, as in spoken language acquisition, manual babbling has been found to become more complex as children grow older, and deaf babies exposed early to signing produce more complex manual babbling and a greater variety of types of manual babbling (Petitto & Marentette, 1991).

Another important developmental milestone, common to spoken and sign language, is the production of the first words/meaningful signs. Babies’ first spoken words appear at around 10–12 months, although there is great individual variation (e.g., Lillo-Martin & Henner, 2021; Petitto & Marentette, 1991). Whereas some studies report that the first words and the first signs emerge at the same time (between 10 and 12 months; Meier, 2016; Petitto & Marentette, 1991), others point to an earlier onset of the first signs relative to the first spoken words (at approximately 8.5 months), suggesting an early sign advantage that may persist for some time in language development (Anderson & Reilly, 2002; Lillo-Martin & Henner, 2021; see also Pichler, 2012 for a review of modality-driven differences in early language acquisition).

Most research so far has examined early prosodic development in sign language production (Brentari et al., 2015; Lillo-Martin & Henner, 2021; Petitto & Marentette, 1991). By contrast, very little is known about early perception of prosody. In fact, research on early perception abilities in sign languages has essentially looked at the discrimination of the various parameters (sub-lexical features) that compose signs, as well as of grammatical markers. For example, Wilbourn and Casasola (2007) showed that 6- and 10-month-olds were able to discriminate the location of a sign and the signer’s facial expression cuing grammatical information, but showed no evidence of detecting changes in handshape or movement. However, to our knowledge there are no reports of research on early perception of prosody in sign language.

Interestingly, some work has addressed early sensitivity to language-like visual contrasts in hearing infants without exposure to a sign language. A few studies have shown that hearing 3.5–4-month-old infants who are naïve to sign language are able to discriminate several contrasting parameters in a sign language, such as the type of movement (Wilbourn & Casasola, 2007, for a review). Moreover, visual attention to language-related visual information has been found to be modulated by age. Stone et al. (2017) examined the visual attention of hearing infants at 6 and 12 months of age while they watched fingerspelling stimuli that were either well-formed or ill-formed with respect to a sonority parameter associated with syllable structure in sign languages. In this study, 6-month-olds showed a preference for the well-formed stimuli, whereas 12-month-olds showed no evidence of preference for one type of stimulus over the other. Bosworth et al. (2022) examined the eye gaze of hearing infants from monolingual English-speaking families, aged 6 and 11 months, while they viewed video sequences of American Sign Language signs and non-linguistic body actions (self-directed grooming actions and object-directed pantomime). Here too, results revealed developmental differences in gaze patterns: 6-month-olds looked more to the signer’s face for grooming actions, and to the articulatory area of the signing space for mimes and signs; in contrast, 11-month-olds showed similar attention to the face irrespective of the type of visual stimulus. Both Stone et al. (2017) and Bosworth et al. (2022) interpret these results as reflecting an early perceptual sensitivity to visual cues that can be meaningful in sign languages, followed by a later decline in discrimination abilities, before the end of the first year, associated with language specialization, analogous to what has already been established for spoken language acquisition (e.g., Werker, 2024). However, to our knowledge, visual prosodic discrimination by hearing infants naïve to sign language has not yet been investigated.

1.4. Early discrimination of intonation contrasts and the present study

For the spoken modality, Frota et al. (2014) used a modified version of the visual habituation paradigm to investigate infants’ ability to discriminate between declaratives and yes/no questions differing only in their intonation contours (falling vs. falling-rising, respectively) in European Portuguese (EP). The results showed that EP-learning infants were able to discriminate the intonation contrast between declaratives and yes/no questions as early as 5 months of age, and that this discrimination ability was maintained throughout the first year of life.

As far as we know, early discrimination of intonation contrasts in sign languages, including LGP, has not yet been investigated (see Lutzenberger et al., 2024 on experimental work focusing on early infant discrimination in other phonological domains, and Stone & Bosworth, 2019). In the present study, we adapted Frota et al.’s (2014) method to investigate deaf infants’ ability to discriminate the corresponding intonation contrast in LGP. Following Cruz et al.’s (2019) findings, we tested infants’ discrimination of utterance-like productions with neutral expressions/absence of nonmanuals, expressing declarative meaning in LGP, and utterance-like productions articulated with nonmanuals (i.e., lowered eyebrows and a head nod), used to mark yes/no questions in LGP. We expect that infants with early access to LGP will be able to discriminate intonation contrasts in the second half of the first year of life, as observed in the spoken modality (Frota et al., 2014). As for infants with little or no exposure to LGP, two outcomes may be hypothesized. Due to their limited or absent exposure to prosodic marking in a sign language, they may show weak or no evidence of intonation discrimination, reflecting the importance of (perceivable) linguistic input in the development of early discrimination skills. Alternatively, given the well-established early sensitivity to sound distinctions that may contrast across languages, which characterizes infant perception in general, and since hearing infants have been shown to discriminate some types of visual stimuli that contrast in sign languages, we may hypothesize that infants with little or no exposure to LGP will nevertheless be able to discriminate the nonmanual visual cues marking the declarative/yes-no question contrast in LGP.

2. Method

To the best of our knowledge, this is the first experimental approach to the study of early perception of sign language intonation contrasts by deaf infants. While we follow the general procedure of Frota et al.’s (2014) study with hearing infants, a number of methodological adaptations were introduced to ensure the suitability of the method for testing the perception of the statement/yes-no question contrast in a sign language, namely LGP, by deaf infants, and of the visual prosodic markers used in LGP, by hearing infants. In this paper we report on the modified method and on the results of a pilot study. Data collection from a larger sample is currently ongoing.

2.1. Participants

Two infants took part in this pilot study: one deaf infant (male, aged 7 months and 28 days) and one monolingual hearing infant (female, aged 6 months and 15 days). Both infants were raised in EP hearing homes. The deaf infant started receiving LGP input only at the age of 6 months, for two hours twice a week, at an educational institution specialized in early intervention for deaf infants (Agrupamento de Escolas Quinta de Marrocos). At the time of data collection, the baby had been exposed to LGP for 1 month and had received a cochlear implant 15 days before coming to the Lab.3

Informed written consent was obtained from caregivers prior to data collection. The study was conducted in accordance with the Declaration of Helsinki and the guidelines of the European Union Agency for Fundamental Rights and approved by the Ethics Committee of the School of Arts and Humanities (University of Lisbon) (4_CEI2023, 11 May, 2023).

2.2. Materials

Stimuli consisted of one-pseudosign-utterances. Eight pseudosigns were created with the help of a native adult LGP signer, by taking existing LGP signs and changing one parameter (location), resulting in possible but non-existing signs in LGP. As yes-no questions are marked by facial cues in LGP, utmost care was taken to create pseudosigns avoiding the face region; all pseudosigns were thus articulated with the hands in body regions below the chin. This ensured a clear segregation between the regions for pseudosign articulation and intonation marking, respectively, enabling the identification of non-overlapping areas of interest (AOI).

Each of the eight pseudosigns was produced with the typical LGP statement intonation (i.e., without facial marking) and question intonation (i.e., with facial marking, namely eyebrow lowering together with an up-down head nod) by a female native LGP signer. The production of the resulting sixteen one-pseudosign-utterances was video-recorded with a professional JVC camera, model GY-HM11E, in .mov format (4:3 aspect ratio, 25 fps). The native signer was asked to remain in the same place during the recordings, in order to allow smooth transitions when the recordings were combined into sequences. The recorded material was edited in the Shotcut software. Each recorded production of a one-pseudosign-utterance was first extracted to a separate file. The one-pseudosign-utterances were then combined into sequences of 4 pseudosign-utterances each, two sequences for statement intonation and another two for question intonation (see supplementary materials at https://labfon.letras.ulisboa.pt/babylab/sign_language). Minor lighting and colour adjustments were made to the individual recordings so that the video sequences had similar characteristics, avoiding as much as possible exposing infants to distracting visual contrasts (Figure 1).

Figure 1

Illustration of the intonation contrast, shown as still frames from the video sequences: statement in the left panel; question in the right panel.

Different pseudosign sequences were used in the habituation and test phases. Each infant only saw one version (either the statement or the question) of each one-pseudosign-utterance.

2.3. Procedure

Following Frota et al.’s (2014) study, we used a modified version of the habituation paradigm, with the sound stimuli replaced by visual stimuli. Importantly, the data were collected using the EyeLink 1000 Plus eye tracker instead of the LOOK software, which records the duration of gaze but not its location.

Infants were seated on their caregivers’ laps facing a 22-inch ASUS monitor at a distance of 50 to 60 cm. The experiment started with a short video to attract the infant’s attention to the screen, while the researcher was seated at a distance, monitoring eye movement for calibration and subsequent data collection. Parents were instructed not to interact with the infant, nor to give any bodily cues that could affect the infant’s responses, such as pointing.

Infants were calibrated with a 3-point calibration using small moving circles as calibration points. The same procedure was used for validation. Calibration was successful in both infants, with less than 0.56º of error. After calibration and validation, a coloured image (attention getter) appeared. If the infant looked at it for two consecutive seconds, a video trial started. Each video trial consisted of a video file made up of a sequence of 4 pseudosigns, as described above (section 2.2). The 4 pseudosigns, each corresponding to an intonational phrase, created a 16-second trial for each intonation pattern. Because our single-pseudosign intonational phrases were longer than the single-pseudoword intonational phrases in Frota et al. (2014), it was not possible to match both the trial length (16 secs) and the number of items per trial of Frota et al.’s (2014) auditory experiment (i.e., 8 pseudowords per trial). To ensure that the total duration of the experiment was similar across studies and, crucially, not excessively long for young infants, we decided to keep the trial length identical in the two experiments and to reduce the number of items to four pseudosigns per trial.
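To make the gaze-contingent trial onset concrete, the following is a minimal sketch in Python of the two-second look criterion described above. This is our own simplified illustration, not the actual EyeLink/Experiment Builder implementation; `gaze_on_attention_getter` is a hypothetical function standing in for the eye tracker’s sample stream.

```python
import time

POLL_INTERVAL = 0.02  # polling period in seconds (50 Hz), for illustration only

def wait_for_attention(gaze_on_attention_getter, required_seconds=2.0):
    """Return once the infant has looked at the attention getter for
    `required_seconds` consecutive seconds, at which point the video
    trial would be launched. A look-away resets the accumulated time."""
    look_start = None
    while True:
        if gaze_on_attention_getter():  # hypothetical: True if gaze is on the attention getter
            if look_start is None:
                look_start = time.monotonic()
            elif time.monotonic() - look_start >= required_seconds:
                return
        else:
            look_start = None  # gaze left the attention getter: reset the count
        time.sleep(POLL_INTERVAL)
```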

Infants were presented with two phases: the habituation phase and the test phase. In this pilot study, infants were habituated to the declarative sequence and tested with the same test order: a declarative followed by an interrogative sequence (i.e., same-switch order). In the habituation phase, infants were habituated to one sequence of 4 one-pseudosign intonational phrases (statement intonation) until they reached a pre-defined habituation criterion. As in Frota et al.’s (2014) auditory experiment, we used a sliding window comparing the first four trials with the last four trials, with the criterion that the average looking time to the last four habituation trials should be less than 60% of the average looking time to the first four habituation trials. Thus, the habituation phase had a minimum of 8 trials; the maximum number of habituation trials was set to 20. After infants were habituated to the declarative intonation pattern, the test phase was presented, comprising two trials: one identical to the habituation trial (same) and one different from the habituation trial (switch). Figure 2 illustrates the experimental design. If infants were able to discriminate the intonation contrast in LGP, looking time should be longer for the switch trial than for the same trial.

Figure 2

An illustration of the experimental design.
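For concreteness, the sliding-window habituation criterion described above can be expressed as in the following sketch. This is a minimal Python illustration of the rule; the function name and example looking times are our own and are purely hypothetical.

```python
def habituation_reached(looking_times, window=4, threshold=0.60, max_trials=20):
    """Sliding-window habituation criterion: habituation is reached when the
    mean looking time over the last `window` trials is less than `threshold`
    (60%) of the mean over the first `window` trials, or when the maximum
    number of habituation trials (20) has been presented."""
    if len(looking_times) >= max_trials:
        return True
    if len(looking_times) < 2 * window:  # minimum of 8 trials before the criterion can be met
        return False
    first = sum(looking_times[:window]) / window
    last = sum(looking_times[-window:]) / window
    return last < threshold * first

# Hypothetical per-trial looking times in seconds (not the pilot data):
trials = [14.2, 13.8, 14.5, 13.7, 9.1, 7.4, 5.2, 4.8]
print(habituation_reached(trials))  # True: mean(last 4) = 6.63s < 0.6 * 14.05s = 8.43s
```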

The stimuli were presented using the Experiment Builder software (version 2.3.38), and each video trial was presented until the end (fixed trial presentation). The EyeLink software was used to record and monitor infants’ looking behavior, while data extraction was done in DataViewer (version 3.2.1).

Two areas of interest (AOI) were defined to compare infants’ looking time to manuals and nonmanuals. For the manual AOI we considered the torso/body area (from mid-neck to hips), where all hand movements occurred. The nonmanual AOI covered the head/face region.
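As an illustration of how proportions of looking time per AOI can be derived from exported gaze samples, consider the sketch below. The rectangular bounds, sample format, and names are our own simplifying assumptions; in the actual study the AOIs were defined dynamically and the data were extracted in DataViewer.

```python
from typing import Iterable, Tuple

# Hypothetical rectangular AOI bounds in screen pixels (x0, y0, x1, y1); in the
# actual study the AOIs were dynamic, following the signer's head and torso.
AOIS = {
    "nonmanual (head/face)": (700, 100, 1100, 450),
    "manual (torso/body)":   (600, 450, 1200, 1000),
}

def aoi_proportions(samples: Iterable[Tuple[float, float]]) -> dict:
    """Given gaze samples (x, y) recorded at a fixed sampling rate, return the
    proportion of samples falling inside each AOI. With a constant sampling
    rate, sample proportions equal looking-time proportions."""
    samples = list(samples)
    counts = {name: 0 for name in AOIS}
    for x, y in samples:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    total = len(samples) or 1  # guard against an empty sample list
    return {name: count / total for name, count in counts.items()}
```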

3. Results

The habituation phase was analyzed first. Overall, average looking times to the first four trials (M = 11.11s; SD = 4.96s) were longer than average looking times to the last four trials (M = 4.20s; SD = 3.73s), irrespective of the participant’s deafness/hearing condition (deaf: first four trials, M = 14.06s, SD = 0.38s; last four trials, M = 5.92s, SD = 2.77s; hearing: first four trials, M = 8.16s, SD = 5.84s; last four trials, M = 2.49s, SD = 4.11s). A paired t-test indicated a significant difference for the deaf infant (t(3) = 5.98, p = .009) and a marginal difference for the monolingual hearing infant (t(3) = 2.65, p = .07). Given the small sample, it is not possible to determine whether there is a statistical difference between the two participants.
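The trial-level comparison reported above corresponds to a paired t-test over the four first/last-trial pairs (hence 3 degrees of freedom). A minimal sketch with SciPy, using hypothetical looking times since the per-trial values are not reported here:

```python
from scipy import stats

# Hypothetical per-trial looking times in seconds (not the actual pilot data)
first_four = [14.5, 13.8, 14.1, 13.8]
last_four = [4.0, 8.9, 7.1, 3.7]

# Paired (repeated-measures) t-test; df = number of pairs - 1 = 3
t_stat, p_value = stats.ttest_rel(first_four, last_four)
print(f"t(3) = {t_stat:.2f}, p = {p_value:.3f}")
```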

In the test phase, although this could not be confirmed with statistical tests, both infants showed similar looking times to the switch and same test trials, suggesting an absence of discrimination (accumulated looking time, hearing: switch = 6.5s, same = 7.1s; deaf: switch = 4.1s, same = 5.3s).

Even though overall looking time was not informative, the eye-tracking methodology allows us to assess infants’ looking patterns. Specifically, we defined two dynamic AOIs: the head and the body. The head AOI comprises the visual prosodic cues, so looking to it would indicate infants’ interest in these nonmanual cues. The body AOI captures all hand movements of the signer during the videos, so looking to it would indicate infants’ interest in the manual cues. We analyzed the proportion of looking time to the two AOIs in both the habituation and test phases. In the habituation phase, we observed an interesting pattern (Figure 3): the deaf infant looked more to the head (i.e., the nonmanuals; M = .78) than to the manuals (M = .21), whereas the hearing infant showed the opposite pattern (nonmanuals: M = .33; manuals: M = .66).

Figure 3

Proportion of looking time to the head versus manual AOIs during the habituation phase.

In the test phase we observed a similar pattern in both test trials. For the deaf infant, looking time was longer to the head (same = .70; switch = .72) than to the manuals (same = .29; switch = .27), whereas for the hearing infant attention was greater to the manuals (same = .77; switch = .70) than to the head (same = .22; switch = .29).

4. Discussion

In this study, we described what we believe to be the first adaptation of the habituation paradigm to study early discrimination of intonation contrasts in a sign language (a modification of Frota et al.’s 2014 procedure). The pilot study indicates that the design may be suitable for investigating early perception of visual prosody contrasts, both in deaf and in hearing infants. This study is part of a larger ongoing project and data collection with a larger sample is currently in progress.

To develop our experimental approach to the study of early perception of sign language intonation contrasts by deaf and hearing infants, a number of methodological adaptations were introduced, from the creation of one-pseudosign-intonational phrases to the production of video trials used in a visual habituation design implemented with eye-tracking.

In our pilot experiment, we examined the perception of one deaf infant (with limited access to LGP, beginning only one month before data collection, and to spoken language, for only 15 days following cochlear implantation) and one monolingual hearing infant (never exposed to a sign language). Both infants completed the experiment without difficulties. Besides testing our experimental design, the pilot experiment suggested some interesting similarities and differences in the gaze patterns of the two participants. Although still very preliminary, our results suggest that both infants were interested in the sign language input, despite the late and limited exposure to LGP in the case of the deaf infant, and the lack of prior exposure to a sign language in the case of the hearing infant. This reinforces the view that infants have an early sensitivity to visual cues that are exploited in linguistic contrasts in sign languages (Stone et al., 2017). The fact that the hearing infant, exposed to spoken EP, was attentive to the sign language productions even though sign language is not her native language, as also found in recent studies with ASL input, agrees with the hypothesis that infants in general are receptive to visual language cues (Bosworth et al., 2022; Stone et al., 2017).

In our pilot study, we found no noticeable differences in looking times between same and switch trials for either the deaf or the hearing infant. In particular, we did not find longer looking times in the switch condition, which would indicate discrimination under the habituation paradigm. This may suggest that, despite being attracted to movement and gestures, neither infant discriminated the LGP intonational contrast. In our view, this is likely due to the fact that neither infant had had sufficient and early exposure to LGP input. In the case of our deaf participant, exposure to LGP was limited and delayed, starting only one month before data collection. We must note, nonetheless, that some manual parameters have been shown to be discriminated early on by hearing infants not exposed to a sign language (Lillo-Martin & Henner, 2021; Lutzenberger et al., 2024; Wilbourn & Casasola, 2007). These results may therefore suggest that (these) nonmanual cues in sign language are less accessible to hearing infants than manual gestures.

However, beyond the absence of discrimination, interesting differences in looking patterns emerged between the deaf and the hearing infant. The deaf infant attended more to the head than to the manuals, while the opposite pattern was found for the hearing infant. It is known that nonmanuals, mostly conveyed in the head region, are the typical carriers of prosodic information in sign languages (e.g., Cruz et al., 2019; Dachkovsky & Sandler, 2009; Nespor & Sandler, 1999). The preference for the head region manifested by the deaf infant may thus be related to an emerging sensitivity to LGP prosodic cues, though one not yet sufficiently developed to allow discrimination of the prosodic contrast between statement intonation (signaled by neutral expressions) and yes/no question intonation (marked with eyebrow lowering and head movement). In addition, the deaf infant comes from a hearing family and has been exposed since birth to visual input associated with speech production, which includes the mouth area, as well as to the visual prosody that characterizes EP, conveyed by cues in the face region (Cruz et al., 2017). In other words, for the deaf infant, the face region is highly informative, for both vocabulary-related segmental information and prosody, which may account for the clear preference for the face area.

The strong preference for the body area shown by the hearing infant, in turn, seems to suggest a bias towards the articulatory area of the signing space. If that is the case, reduced attention to nonmanuals might be a feature characterizing hearing infants who are only exposed to spoken languages, but not deaf infants. Another possibility is that the hearing infant, unlike the deaf infant, found the gestures in the articulatory area of the signing space unusual and new, and was more attracted to them than to the gestures in the head/face region, as eyebrow, eye, and head movements are used in both spoken and sign languages to convey prosody (Cruz & Frota, 2025).

Beyond proposing an experimental paradigm suitable for investigating early perception of visual prosody, and in particular the discrimination of prosodic contrasts, in both deaf and hearing infants, the very preliminary nature of the results from our pilot experiment calls for further research expanding the pool of participants and their profiles, to include adequate sample sizes across groups of unimodal and bimodal deaf infants (including deaf infants acquiring LGP in LGP homes), as well as monolingual and bilingual hearing infants, from younger and older age groups in the first year of life. Moreover, the advantages of eye tracking should be further explored to examine different areas of interest within the face, namely the eyes and mouth regions, given that attention to the eyes and mouth has been related to language development in hearing infants (e.g., Cruz et al., 2020; Lewkowicz & Hansen-Tift, 2012; Pejovic et al., 2021; Pons et al., 2019).

In conclusion, this study put forward an experimental design to investigate, for the first time, the early perception of intonation contrasts in a sign language, and presented preliminary findings suggesting similarities and differences in deaf and hearing infants’ sensitivity to the LGP statement/yes-no question prosodic contrast. Using the experimental method described here, in future work we expect to be able to unravel the developmental path of visual prosody discrimination in LGP and determine whether the native language modality affects early discrimination abilities of language-related visual cues, adding to the understanding of prosodic development in general, and across language modalities.

Notes

  1. https://www.portugal.gov.pt/pt/gc23/comunicacao/noticia?i=maos-que-falam-hoje-e-dia-nacional-da-lingua-gestual-portuguesa (accessed February 28, 2025).
  2. Portuguese Constitution, revised 1997 (article 74, no. 2, h) – cf. Constituição da República Portuguesa. Lisboa: Texto, 2016.
  3. A second hearing infant (female, 6 months of age) was tested, but her data were not included due to sleepiness and data loss.

Acknowledgements

We thank the infants and caregivers for participating in this study, Agrupamento de Escolas Quinta de Marrocos for help with the deaf participant recruitment, as well as Helena Carmo, for producing the visual stimuli videorecorded for our experiment.

This research was supported by the Portuguese Foundation for Science and Technology (UID/2014/2020, UID/00214: Center of Linguistics of the University of Lisbon).

Competing Interests

The authors have no competing interests to declare.

References

Amaral, M. A., Coutinho, A., & Martins, M. R. D. (1994). Para uma Gramática da Língua Gestual Portuguesa [Towards a grammar of Portuguese Sign Language]. Caminho.

Anderson, D., & Reilly, J. S. (2002). The MacArthur Communicative Development Inventory: Normative data for American Sign Language. Journal of Deaf Studies and Deaf Education, 7(2), 83–106.  http://doi.org/10.1093/deafed/7.2.83

Bettencourt, M. F. (2015). A ordem de palavras na Língua Gestual Portuguesa: Breve estudo comparativo com o Português e outras Línguas Gestuais [Word order in Portuguese Sign Language: a short comparative study between Portuguese and other Sign Languages]. Unpublished MA dissertation, Universidade do Porto. U.Porto repository. https://sigarra.up.pt/faup/pt/pub_geral.pub_view?pi_pub_base_id=37071

Bosworth, R. G., Hwang, S. O., & Corina, D. P. (2022). Visual attention for linguistic and non-linguistic body actions in non-signing and native signing children. Frontiers in Psychology, 13, 951057.  http://doi.org/10.3389/fpsyg.2022.951057

Brentari, D., & Crossley, L. (2002). Prosody on the hands and face: Evidence from American Sign Language. Sign Language & Linguistics, 5(2), 105–130.  http://doi.org/10.1075/sll.5.2.03bre

Brentari, D., Falk, J., & Wolford, G. (2015). The acquisition of American Sign Language prosody. Language, 91(1), 144–168.  http://doi.org/10.1353/lan.2015.0042

Carmo, H., Martins, M., Morgado, M., & Estanqueiro, P. (2007). Programa curricular de Língua Gestual Portuguesa: Educação pré-escolar e ensino básico [Syllabus of Portuguese Sign Language: preschool and primary school education]. Ministério da Educação/Direção Geral da Inovação e de Desenvolvimento Curricular.

Crasborn, O., van der Kooij, E., Waters, D., Woll, B., & Mesch, J. (2008). Frequency distribution and spreading behavior of different types of mouth actions in three sign languages. Sign Language & Linguistics, 11, 45–67.  http://doi.org/10.1075/sll.11.1.04cra

Cruz, M., Butler, J., Severino, C., Filipe, M., & Frota, S. (2020). Eyes or mouth? Exploring eye gaze patterns and their relation with early stress perception in European Portuguese. Journal of Portuguese Linguistics, 19(1), 4.  http://doi.org/10.5334/jpl.240

Cruz, M., & Frota, S. (2025). “Talking heads” in Portuguese sign and spoken languages. Language and Cognition, 17, e18.  http://doi.org/10.1017/langcog.2024.63

Cruz, M., Swerts, M., & Frota, S. (2017). The role of intonation and visual cues in the perception of sentence types: Evidence from European Portuguese varieties. Laboratory Phonology: Journal of the Association for Laboratory Phonology, 8(1), 23.  http://doi.org/10.5334/labphon.110

Cruz, M., Swerts, M., & Frota, S. (2019). Do visual cues to interrogativity vary between language modalities? Evidence from spoken Portuguese and Portuguese Sign Language. Proceedings of the 15th International Conference on Auditory-Visual Speech Processing (pp. 1–5).  http://doi.org/10.21437/AVSP.2019-1

Culbertson, S. R., Dillon, M. T., Richter, M. E., Brown, K. D., Anderson, M. R., Hancock, S. L., & Park, L. R. (2022). Younger age at cochlear implant activation results in improved auditory skill development for children with congenital deafness. Journal of Speech, Language, and Hearing Research, 65(9), 3539–3547.  http://doi.org/10.1044/2022_JSLHR-22-00039

Dachkovsky, S., & Sandler, W. (2009). Visual intonation in the prosody of a sign language. Language and Speech, 52(2–3), 287–314.  http://doi.org/10.1177/0023830909103175

Elliott, E. A., & Jacobs, A. M. (2013). Facial expressions, emotions, and sign languages. Frontiers in Psychology, 4, 115.  http://doi.org/10.3389/fpsyg.2013.00115

Fenlon, J., Cormier, K., & Brentari, D. (2018). The phonology of sign languages. In S. J. Hannahs & A. Bosch (Eds.), The Routledge Handbook of Phonological Theory (pp. 453–475). Routledge.  http://doi.org/10.4324/9781315675428-16

Frota, S., Butler, J., & Vigário, M. (2014). Infants’ perception of intonation: Is it a statement or a question? Infancy, 19(2), 194–213.  http://doi.org/10.1111/infa.12037

Gervain, J., Christophe, A., & Mazuka, R. (2020). Prosodic bootstrapping. In C. Gussenhoven & A. Chen (Eds.), The Oxford Handbook of Prosody (pp. 563–573). Oxford University Press.  http://doi.org/10.1093/oxfordhb/9780198832232.013.36

Laing, C., & Bergelson, E. (2020). From babble to words: Infants’ early productions match words and objects in their environment. Cognitive Psychology, 122, 101308.  http://doi.org/10.1016/j.cogpsych.2020.101308

Lewkowicz, D. J., & Hansen-Tift, A. M. (2012). Infants deploy selective attention to the mouth of a talking face when learning speech. Proceedings of the National Academy of Sciences of the United States of America, 109(4), 1431–1436.  http://doi.org/10.1073/pnas.1114783109

Lillo-Martin, D., & Henner, J. (2021). Acquisition of sign languages. Annual Review of Linguistics, 7, 395–419.  http://doi.org/10.1146/annurev-linguistics-043020-092357

Lutzenberger, H., Casillas, M., Fikkert, P., Crasborn, O., & de Vos, C. (2024). More than looks: Exploring methods to test phonological discrimination in the sign language Kata Kolok. Language Learning and Development, 20(4), 297–323.  http://doi.org/10.1080/15475441.2023.2277472

Lynce, S., Moita, M., Freitas, M. J., Santos, M. E., & Mineiro, A. (2019). Phonological development in Portuguese deaf children with cochlear implants: Preliminary study. Revista de Logopedia, Foniatría y Audiología, 39(3), 115–128.  http://doi.org/10.1016/j.rlfa.2019.03.002

Meier, R. P. (2016). Sign language acquisition. In Oxford Handbook Topics in Linguistics (online edn.). Oxford University Press.  http://doi.org/10.1093/oxfordhb/9780199935345.013.19

Mitchell, R. E., & Karchmer, M. A. (2004). Chasing the mythical ten percent: Parental hearing status of deaf and hard of hearing students in the United States. Sign Language Studies, 4(2), 138–163.  http://doi.org/10.1353/sls.2004.0005

Moita, M. (2022). A aquisição de dependências sintáticas com movimento em crianças surdas com implante coclear: Um défice de movimento? [The acquisition of syntactic dependencies with movement in hearing-impaired children with cochlear implants: A deficit on movement?] Unpublished PhD dissertation, Universidade Nova de Lisboa.

Nespor, M., & Sandler, W. (1999). Prosody in Israeli Sign Language. Language and Speech, 42, 143–176.  http://doi.org/10.1177/00238309990420020201

Nicholas, J. G., & Geers, A. E. (2006). Effects of early auditory experience on the spoken language of deaf children at 3 years of age. Ear and Hearing, 27(3), 286–298.  http://doi.org/10.1097/01.aud.0000215973.76912.c6

Oliveira, C., Machado, M., Zenha, R., Azevedo, L., Monteiro, L., & Bicho, A. (2019). Congenital or early acquired deafness: An overview of the Portuguese situation, from diagnosis to follow-up. Acta Médica Portuguesa, 32(12), 767–775. https://www.actamedicaportuguesa.com/revista/index.php/amp/article/view/11880

Pejovic, J., Cruz, M., Severino, C., & Frota, S. (2021). Early visual attention abilities and audiovisual speech processing in 5–7-month-old Down syndrome and typically developing infants. Brain Sciences, 11(7), 939, Special Issue Down Syndrome: Neuropsychological Phenotype across the Lifespan.  http://doi.org/10.3390/brainsci11070939

Petitto, L. A., & Marentette, P. F. (1991). Babbling in the manual mode: Evidence for the ontogeny of language. Science, 251(5000), 1493–1496.  http://doi.org/10.1126/science.2006424

Pfau, R., & Quer, J. (2010). Nonmanuals: Their grammatical and prosodic roles. In D. Brentari (Ed.), Sign Languages (pp. 381–402). Cambridge University Press.  http://doi.org/10.1017/CBO9780511712203.018

Pichler, D. C. (2012). Acquisition. In R. Pfau, M. Steinbach & B. Woll (Eds.), Sign Language: An International Handbook (pp. 647–686). De Gruyter Mouton.  http://doi.org/10.1515/9783110261325.647

Pons, F., Bosch, L., & Lewkowicz, D. J. (2019). Twelve-month-old infants’ attention to the eyes of a talking face is associated with communication and social skills. Infant Behavior and Development, 54, 80–84.  http://doi.org/10.1016/j.infbeh.2018.12.003

Quer, J., & Steinbach, M. (2019). Handling sign language: The impact of modality. Frontiers in Psychology, 10, 483.  http://doi.org/10.3389/fpsyg.2019.00483

Sandler, W. (2010). Prosody and syntax in sign languages. Transactions of the Philological Society, 108(3), 298–328.  http://doi.org/10.1111/j.1467-968X.2010.01242.x

Sandler, W., Lillo-Martin, D., Dachkovsky, S., & Müller de Quadros, R. (2020). Sign language prosody. In C. Gussenhoven & A. Chen (Eds.), The Oxford Handbook of Language Prosody (pp. 104–122). Oxford University Press.  http://doi.org/10.1093/oxfordhb/9780198832232.013.44

Stone, A., & Bosworth, R. G. (2019). Exploring infant sensitivity to visual language using eye tracking and the preferential looking paradigm. Journal of Visualized Experiments, 147, e59581.  http://doi.org/10.3791/59581

Stone, A., Petitto, L. A., & Bosworth, R. (2017). Visual sonority modulates infants’ attraction to sign language. Language Learning and Development, 14(2), 130–148.  http://doi.org/10.1080/15475441.2017.1404468

van der Hulst, H., & van der Kooij, E. (2021). Sign language phonology: Theoretical perspectives. In The Routledge Handbook of Theoretical and Experimental Sign Language Research (pp. 1–32). Routledge.  http://doi.org/10.4324/9781315754499-1

Vihman, M. M., Macken, M. A., Miller, R., Simmons, H., & Miller, J. (1985). From babbling to speech: A re-assessment of the continuity issue. Language, 61(2), 397–445.  http://doi.org/10.2307/414151

Werker, J. (2024). Phonetic perceptual reorganization across the first year of life: Looking back. Infant Behavior and Development, 75, 101935.  http://doi.org/10.1016/j.infbeh.2024.101935

Wilbourn, M. P., & Casasola, M. (2007). Discriminating signs: Perceptual precursors to acquiring a visual-gestural language. Infant Behavior and Development, 30(1), 153–160.  http://doi.org/10.1016/j.infbeh.2006.08.006