<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">2397-5563</journal-id>
<journal-title-group>
<journal-title>Journal of Portuguese Linguistics</journal-title>
</journal-title-group>
<issn pub-type="epub">2397-5563</issn>
<publisher>
<publisher-name>Open Library of Humanities</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.16995/jpl.22921</article-id>
<article-categories>
<subj-group>
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Early perception of intonation in Portuguese Sign Language: A preliminary study using eye tracking</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Oliveira</surname>
<given-names>Erika</given-names>
</name>
<email>erikaguimaraes@edu.ulisboa.pt</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Cruz</surname>
<given-names>Marisa</given-names>
</name>
<email>marisac@edu.ulisboa.pt</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pejovic</surname>
<given-names>Jovana</given-names>
</name>
<email>jpejovic@edu.ulisboa.pt</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Frota</surname>
<given-names>S&#243;nia</given-names>
</name>
<email>sfrota@edu.ulisboa.pt</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Vig&#225;rio</surname>
<given-names>Marina</given-names>
</name>
<email>mvigario@edu.ulisboa.pt</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>Center of Linguistics, University of Lisbon, Portugal</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2025-08-21">
<day>21</day>
<month>08</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2025</year>
</pub-date>
<volume>6</volume>
<fpage>1</fpage>
<lpage>18</lpage>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2025 The Author(s)</copyright-statement>
<copyright-year>2025</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See <uri xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="http://jpl.letras.ulisboa.pt/articles/10.16995/jpl.22921/"/>
<abstract>
<p>We developed an adaptation of Frota et al.&#8217;s (<xref ref-type="bibr" rid="B17">2014</xref>) design to investigate, for the first time, early perception of intonation contrasts in a sign language. Using a modified version of the visual habituation paradigm, implemented with eye tracking, we ran a pilot study with one deaf infant with limited exposure to Portuguese Sign Language (LGP) and one monolingual hearing infant, both from European Portuguese (EP)-speaking homes. The results indicate that the new procedure can be successfully implemented to investigate deaf (and hearing) infants&#8217; sensitivity to the LGP statement/yes-no question prosodic contrast shortly after 6 months. Interestingly, the looking pattern differed between the infants: the deaf infant looked more to the face than to the body region, while the hearing infant exhibited the reverse pattern. However, neither infant showed evidence of discriminating the visual prosodic contrast, unlike hearing EP-learning infants, who discriminated the auditory intonation contrast at 5 months (<xref ref-type="bibr" rid="B17">Frota et al., 2014</xref>). We conjecture that these results follow from the infants&#8217; insufficient or absent exposure to sign language input, as well as from the kind of phonological contrast under study. Further investigation is warranted, applying the new experimental procedure to deaf infants acquiring LGP in LGP homes.</p>
</abstract>
</article-meta>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>The acquisition of a sign language begins long before infants produce the first signs, and visual prosody is a crucial component of language that is expected to develop early on and to play a role in the acquisition of other aspects of grammar and vocabulary, like auditory prosody in spoken languages (<xref ref-type="bibr" rid="B18">Gervain et al., 2020</xref>). In sign languages, prosodic properties such as intonation and phrasing are conveyed mainly by nonmanuals, e.g., eyebrow raise, eyebrow frown, eye squint and head position (<xref ref-type="bibr" rid="B14">Dachkovsky &amp; Sandler, 2009</xref>; <xref ref-type="bibr" rid="B37">Sandler et al., 2020</xref>). It is known that facial elements, for instance, convey visual prosody and are part of adults&#8217; sign language systems. However, very little is still known about the early development of prosody in sign languages, and to our knowledge the early perception of intonational contrasts in sign languages has not yet been investigated. This study is part of a larger ongoing investigation aiming at (i) addressing the methodological challenges in the study of early perception of prosody by deaf infants, and (ii) determining the time course of the acquisition of intonation-like contrasts in Portuguese Sign Language (LGP) by deaf infants. Given that it is by now well-established that hearing infants have a remarkable ability to discriminate speech sounds, as well as prosodic cues, before the end of the first year of life (<xref ref-type="bibr" rid="B18">Gervain et al., 2020</xref>; <xref ref-type="bibr" rid="B42">Werker, 2024</xref>), we also want to contribute to the understanding of how much of this outstanding ability extends to language-related visual cues, both in deaf and hearing infants (e.g., <xref ref-type="bibr" rid="B39">Stone et al., 2017</xref>; <xref ref-type="bibr" rid="B43">Wilbourn &amp; Casasola, 2007</xref>).</p>
<p>In this paper, we address goal (i) above by describing a new procedure, adapted from an experimental method used to study early prosodic discrimination in hearing infants (<xref ref-type="bibr" rid="B17">Frota et al., 2014</xref>), with the aim of investigating early prosodic discrimination in deaf infants (section 2). The results of a pilot experiment with two infants are presented in section 3. In section 4, we discuss these preliminary findings and their implications for further research. First, however, the remainder of this section provides background information relevant to the current investigation.</p>
<sec>
<title>1.1. Language input and age of acquisition in deaf infants</title>
<p>There are a number of specific challenges to the study of deaf infants&#8217; perception. It is well established that, when a typically developing child is exposed to a natural and rich language input from birth, &#8220;the course of language development is largely predictable&#8221; (<xref ref-type="bibr" rid="B21">Lillo-Martin &amp; Henner, 2021, p. 403</xref>). Nevertheless, early (initial) fluent input is not accessible to the majority of deaf infants, as most deaf children are born into non-deaf homes. Estimates based on a national survey in the United States from the turn of the millennium indicate that less than 5% of deaf and hard of hearing individuals have at least one deaf parent (<xref ref-type="bibr" rid="B25">Mitchell &amp; Karchmer, 2004</xref>). Despite the newborn hearing screening programs currently in place in many countries, deafness is not always promptly diagnosed because, for instance, it can be acquired early in life, after screening has taken place (<xref ref-type="bibr" rid="B29">Oliveira et al., 2019</xref>). Besides late diagnosis, delays in exposure to appropriate linguistic input may result from caregivers&#8217; lack of knowledge of a sign language; caregivers might nevertheless choose to learn sign language to communicate with their child &#8211; a process that inevitably takes time to produce its first effects &#8211; and it may also be difficult for them to provide their child with opportunities to interact with fluent sign language users early on. Often, parents do not promote sign language input to their infant; instead, they choose technological devices that give the child access (only) to spoken language, such as cochlear implants, which require surgical intervention usually not performed before 9 months (cf. current FDA indications for cochlear implantation, at 9 months or above, <xref ref-type="bibr" rid="B13">Culbertson et al., 2022</xref>). Even after this intervention, there are losses in the quality of the language input relative to typical hearing conditions, and training is required (e.g., <xref ref-type="bibr" rid="B21">Lillo-Martin &amp; Henner, 2021</xref>, for a review).</p>
<p>Due to such various circumstances, deaf children very often get access to (quasi-)appropriate linguistic input &#8211; that is, input in a modality that they can effectively perceive &#8211; only several months or even years after birth (<xref ref-type="bibr" rid="B21">Lillo-Martin &amp; Henner, 2021</xref>; <xref ref-type="bibr" rid="B24">Meier, 2016</xref>). Studies with deaf children with cochlear implants show that the absence of appropriate linguistic input during the first year(s) of life causes difficulties in various areas of language development, such as vocabulary growth, phonological processes or the acquisition of syntactic dependencies (<xref ref-type="bibr" rid="B13">Culbertson et al., 2022</xref>; <xref ref-type="bibr" rid="B23">Lynce et al., 2019</xref>; <xref ref-type="bibr" rid="B26">Moita, 2022</xref>; <xref ref-type="bibr" rid="B28">Nicholas &amp; Geers, 2006</xref>; <xref ref-type="bibr" rid="B35">Quer &amp; Steinbach, 2019</xref>).</p>
<p>It is estimated that &#8220;congenital deafness or early acquired deafness affects 1 to 3 out of 1000 newborns without risk factors and 20 to 40 out of 1000 newborns with risk factors&#8221; (<xref ref-type="bibr" rid="B29">Oliveira et al., 2019</xref>, p. 767). In Portugal, a survey carried out in 15 hospitals and maternities of the National Health Service found that in 2014 and 2015, 2.5 and 1.5 per 1000 newborns, respectively, were diagnosed with congenital/early acquired deafness (<xref ref-type="bibr" rid="B29">Oliveira et al., 2019</xref>). Despite these small numbers, according to Portuguese official channels, around 30,000 deaf people use Portuguese Sign Language (LGP) to communicate,<xref ref-type="fn" rid="n1">1</xref> and LGP is one of the three official languages in Portugal.<xref ref-type="fn" rid="n2">2</xref> Studies on LGP tend to focus on deaf signers coming from deaf families (e.g., <xref ref-type="bibr" rid="B1">Amaral et al., 1994</xref>; <xref ref-type="bibr" rid="B3">Bettencourt, 2015</xref>; <xref ref-type="bibr" rid="B7">Carmo et al., 2007</xref>). Nevertheless, as in other parts of the world, deaf individuals from deaf homes represent only a very small portion of the deaf population in Portugal, while the vast majority are born into hearing families and are not exposed to sign language early. Importantly, studies with deaf infants born into hearing homes are also relevant because they contribute to a better characterization of the role of language input in language acquisition (such as the effect of input quantity, quality, and age of exposure in a perceptible modality), bearing on topics as fundamental as the critical period for language acquisition (<xref ref-type="bibr" rid="B24">Meier, 2016</xref>; <xref ref-type="bibr" rid="B35">Quer &amp; Steinbach, 2019</xref>). Moreover, studies on this population may contribute to the development of remediation strategies to compensate for the effects of delayed input, with an impact on health, society and individuals&#8217; well-being.</p>
<p>For the reasons above, in our study on the early acquisition of prosody in LGP, we are focusing on deaf infants below 12 months of age, either from hearing homes (with early exposure to visual but not sound cues), or with early access to LGP.</p>
</sec>
<sec>
<title>1.2. Prosody in sign languages</title>
<p>Research in the last decades has shown that phonological organization and functioning in spoken and sign languages share core properties. In spoken languages, <italic>segmentals</italic> and their component features (i.e., consonants and vowels, produced by the action of the speech articulators) are the fundamental units that constitute the form of words; the same function is accomplished in sign languages by signed forms, produced mainly with the action of <italic>manual articulators</italic>, in particular the dominant hand, its shape, orientation, movement and location relative to the body, head, and face areas (e.g., <xref ref-type="bibr" rid="B16">Fenlon et al., 2018</xref>; <xref ref-type="bibr" rid="B40">van der Hulst &amp; van der Kooij, 2021</xref>; <xref ref-type="bibr" rid="B43">Wilbourn &amp; Casasola, 2007</xref>). Prosody, in turn, is conveyed in spoken languages by variations in duration, intensity and pitch (fundamental frequency), marking prominence, speech chunking, sentence types and other linguistically relevant information, and there is broad consensus that the same functions are achieved in sign languages by variations in the rhythm, size and duration of gestures, pauses (holds), body or head movements, and facial expressions (<xref ref-type="bibr" rid="B5">Brentari &amp; Crossley, 2002</xref>; <xref ref-type="bibr" rid="B12">Cruz et al., 2019</xref>; <xref ref-type="bibr" rid="B14">Dachkovsky &amp; Sandler, 2009</xref>; <xref ref-type="bibr" rid="B27">Nespor &amp; Sandler, 1999</xref>; <xref ref-type="bibr" rid="B36">Sandler, 2010</xref>, and many others).</p>
<p>In sign languages, prosodic properties are conveyed mainly by facial expressions and head movements (involving, for instance, the eyebrows, eyes, mouth, cheeks, and head position). These components mark prosodic constituents for various discourse functions, such as distinguishing among sentence types (e.g., declarative sentences and wh- and yes/no questions) and marking information structure (e.g., focus/background, topic/comment, given/new; <xref ref-type="bibr" rid="B14">Dachkovsky &amp; Sandler, 2009</xref>; <xref ref-type="bibr" rid="B32">Pfau &amp; Quer, 2010</xref>; <xref ref-type="bibr" rid="B37">Sandler et al., 2020</xref>). In LGP, Cruz et al. (<xref ref-type="bibr" rid="B12">2019</xref>) have shown that nonmanuals distinguish sentence types: statements are typically produced without nonmanual marking, and when head movements occur, the dominant one is the head (nod) up-down; in contrast, information-seeking yes-no questions are usually marked with nonmanuals, namely eyebrow lowering together with the head (nod) up-down. Scholars have also shown that nonmanual components cue prosodic constituents (prosodic word, phonological phrase and intonational phrase) in sign languages (<xref ref-type="bibr" rid="B5">Brentari &amp; Crossley, 2002</xref>; <xref ref-type="bibr" rid="B6">Brentari et al., 2015</xref>, and references therein; <xref ref-type="bibr" rid="B8">Crasborn et al., 2008</xref>; <xref ref-type="bibr" rid="B27">Nespor &amp; Sandler, 1999</xref>; <xref ref-type="bibr" rid="B36">Sandler, 2010</xref>).</p>
<p>It is important to bear in mind, nevertheless, that besides prosody, nonmanuals, and facial expressions in particular, can also signal other types of linguistic information: they can mark word class (e.g., adverbial and adjectival function), change sentence polarity, mark person distinctions in pronominals, or constitute obligatory components of signs&#8217; lexical (phonological) form, in which case they are often iconic, reflecting the meaning of the sign (<xref ref-type="bibr" rid="B32">Pfau &amp; Quer, 2010</xref>). Moreover, beyond language proper, facial expressions play a key role in conveying emotions, irrespective of language modality (e.g., <xref ref-type="bibr" rid="B14">Dachkovsky &amp; Sandler, 2009</xref>; <xref ref-type="bibr" rid="B15">Elliott &amp; Jacobs, 2013</xref>).</p>
</sec>
<sec>
<title>1.3. Early prosodic development in sign language</title>
<p>There is currently a knowledge gap on early prosodic development in sign language. Nevertheless, it seems clear that when deaf babies are exposed to a sign language from birth, sign language development progresses along a timeline similar to that of spoken language, possibly following the same developmental milestones (see <xref ref-type="bibr" rid="B21">Lillo-Martin &amp; Henner, 2021</xref> and <xref ref-type="bibr" rid="B24">Meier, 2016</xref>, for reviews). Two classical milestones, briefly described below, illustrate this parallelism.</p>
<p>Babbling is commonly observed in hearing babies, who start producing language-like sound sequences at around 6 months of age, increasingly similar to the surrounding language as time proceeds, before and during the first-word production period (e.g., <xref ref-type="bibr" rid="B19">Laing &amp; Bergelson, 2020</xref>; <xref ref-type="bibr" rid="B41">Vihman et al., 1985</xref>). There is large consensus that babbling is a fundamental stage in language development, allowing babies to practice the articulation of the elements of the language being acquired, as well as promoting social and communicative interaction with caregivers (e.g., <xref ref-type="bibr" rid="B21">Lillo-Martin &amp; Henner, 2021</xref>; <xref ref-type="bibr" rid="B24">Meier, 2016</xref>; <xref ref-type="bibr" rid="B31">Petitto &amp; Marentette, 1991</xref>). Similarly, studies have shown that deaf infants exposed to signing, at around 10&#8211;14 months, also produce meaningless manual gestures that reflect the components found in their ambient sign language (<xref ref-type="bibr" rid="B33">Pichler, 2012</xref>). Moreover, as in spoken language acquisition, manual babbling has been found to become more complex as children grow older, and deaf babies exposed early to signing produce more complex manual babbling and a greater variety of types of manual babbling (<xref ref-type="bibr" rid="B31">Petitto &amp; Marentette, 1991</xref>).</p>
<p>Another important developmental milestone, common to spoken and sign language, is the production of the first words/meaningful gestures. Babies&#8217; first spoken words appear at around 10&#8211;12 months, although there is great individual variation (e.g., <xref ref-type="bibr" rid="B21">Lillo-Martin &amp; Henner, 2021</xref>; <xref ref-type="bibr" rid="B31">Petitto &amp; Marentette, 1991</xref>). Whereas some studies report coincidence in the time of the first emerging words and signs (between 10&#8211;12 months, <xref ref-type="bibr" rid="B24">Meier, 2016</xref>; <xref ref-type="bibr" rid="B31">Petitto &amp; Marentette, 1991</xref>), others point to an earlier onset of the first gestures in sign language in comparison with the first emerging words in spoken language (at approximately 8.5 months), suggesting an early gesture advantage, which may persist for some time in language development (<xref ref-type="bibr" rid="B2">Anderson &amp; Reilly, 2002</xref>; <xref ref-type="bibr" rid="B21">Lillo-Martin &amp; Henner, 2021</xref>; see also <xref ref-type="bibr" rid="B33">Pichler, 2012</xref> for a review on modality-driven differences in early language acquisition).</p>
<p>Most research so far has studied early prosodic development in sign language <italic>production</italic> (<xref ref-type="bibr" rid="B6">Brentari et al., 2015</xref>; <xref ref-type="bibr" rid="B21">Lillo-Martin &amp; Henner, 2021</xref>; <xref ref-type="bibr" rid="B31">Petitto &amp; Marentette, 1991</xref>). By contrast, very little is known about early <italic>perception</italic> of prosody. In fact, research on early perception abilities in sign languages has essentially looked at the discrimination of various parameters (sub-lexical features) that compose signs, as well as grammatical markers. For example, Wilbourn and Casasola (<xref ref-type="bibr" rid="B43">2007</xref>) showed that 6- and 10-month-olds were able to discriminate the location of the sign and the signer&#8217;s facial expression cuing grammatical information, but showed no evidence of detecting changes in handshape or movement. However, to our knowledge there are no reports of research on early perception of prosody in sign language.</p>
<p>Interestingly, some work has addressed early sensitivity to language-like visual contrasts in hearing infants without exposure to a sign language. A few studies have shown that hearing 3.5&#8211;4-month-old infants who are na&#239;ve to sign language are able to discriminate several contrasting parameters in a sign language, such as the type of movement (<xref ref-type="bibr" rid="B43">Wilbourn &amp; Casasola, 2007</xref>, for a review). Moreover, visual attention to language-related visual information has been found to be modulated by age. Stone et al. (<xref ref-type="bibr" rid="B39">2017</xref>) looked at the visual attention of hearing infants at 6 and 12 months of age while they watched fingerspelling stimuli that were either well-formed or ill-formed with respect to a sonority parameter associated with syllable structure in sign languages. In this study, 6-month-olds showed a preference for the well-formed stimuli, whereas 12-month-olds showed no evidence of preference for one type of stimulus or the other. Bosworth et al. (<xref ref-type="bibr" rid="B4">2022</xref>) examined the eye gaze of hearing infants from monolingual English-speaking families, aged 6 and 11 months, while viewing video sequences of American Sign Language signs and non-linguistic body actions (self-directed grooming actions and object-directed pantomime). Here too, results revealed developmental differences in gaze patterns: 6-month-olds looked more to the signer&#8217;s face for grooming, and to the articulatory area of the signing space for mimes and signs; in contrast, 11-month-olds showed similar attention to the face irrespective of the type of visual stimuli. Both Stone et al. (<xref ref-type="bibr" rid="B39">2017</xref>) and Bosworth et al. (<xref ref-type="bibr" rid="B4">2022</xref>) interpret these results as reflecting an early perceptual sensitivity to visual cues that can be meaningful in sign languages, followed by a decline in discrimination abilities before the end of the first year, associated with language specialization, analogous to what has already been established for spoken language acquisition (e.g., <xref ref-type="bibr" rid="B42">Werker, 2024</xref>). However, to our knowledge, visual prosodic discrimination by hearing infants na&#239;ve to sign language has not yet been investigated.</p>
</sec>
<sec>
<title>1.4. Early discrimination of intonation contrasts and the present study</title>
<p>For the spoken modality, Frota et al. (<xref ref-type="bibr" rid="B17">2014</xref>) used a modified version of the visual habituation paradigm to investigate infants&#8217; ability to discriminate between declaratives and yes/no questions differing only in their intonation contours (falling vs. falling-rising, respectively) in European Portuguese (EP). The results showed that EP-learning infants were able to discriminate the intonation contrast between declaratives and yes/no questions as early as 5 months of age, and that this discrimination ability is maintained throughout the first year of life.</p>
<p>As far as we know, early discrimination of intonation contrasts in sign languages, including LGP, has not yet been investigated (see <xref ref-type="bibr" rid="B22">Lutzenberger et al., 2024</xref> on experimental work focusing on early infant discrimination in other phonological domains, and <xref ref-type="bibr" rid="B38">Stone &amp; Bosworth, 2019</xref>). In the present study, we adapted Frota et al.&#8217;s (<xref ref-type="bibr" rid="B17">2014</xref>) method to investigate deaf infants&#8217; ability to discriminate the corresponding intonation contrast in LGP. Following Cruz et al.&#8217;s (<xref ref-type="bibr" rid="B12">2019</xref>) findings, we tested infants&#8217; discrimination of utterance-like productions with neutral expressions (absence of nonmanuals), expressing declarative meaning in LGP, and utterance-like productions articulated with nonmanuals (i.e., lowered eyebrows and head nod), used to mark yes/no questions in LGP. We expect that infants with early access to LGP will be able to discriminate intonation contrasts in the second half of the first year of life, as observed in the spoken modality (<xref ref-type="bibr" rid="B17">Frota et al., 2014</xref>). As for infants without or with little exposure to LGP, two outcomes may be hypothesized. Due to limited or no exposure to prosodic marking in a sign language, they may show weaker or no evidence of intonation discrimination, reflecting the importance of (perceivable) linguistic input in the development of early discrimination skills. Alternatively, given the well-established early sensitivity to sound distinctions that may contrast across languages, which characterizes infant perception in general, and since hearing infants have been shown to discriminate some types of visual stimuli that contrast in sign language, we may hypothesize that infants without or with little exposure to LGP will be able to discriminate the nonmanual visual cues marking the declarative/yes-no question contrast in LGP.</p>
</sec>
</sec>
<sec>
<title>2. Method</title>
<p>To the best of our knowledge, this is the first experimental approach to the study of early perception of sign language intonation contrasts by deaf infants. While we follow the general procedure of Frota et al.&#8217;s (<xref ref-type="bibr" rid="B17">2014</xref>) study with hearing infants, a number of methodological adaptations were introduced to ensure that the method is suitable for testing deaf infants&#8217; perception of the statement/yes-no question contrast in a sign language, namely LGP, and hearing infants&#8217; perception of the visual prosodic markers used in LGP. In this paper we report on this modified method and on the results of a pilot study. Data collection from a larger sample is currently ongoing.</p>
<sec>
<title>2.1. Participants</title>
<p>Two infants took part in this pilot study: one deaf infant (male, aged 7 months and 28 days) and one monolingual hearing infant (female, 6 months and 15 days of age). Both infants were raised in EP hearing homes. The deaf infant started receiving LGP input only at the age of 6 months, for two hours, twice a week, at an educational institution specialized in early intervention for deaf infants (Agrupamento de Escolas Quinta de Marrocos). At the time of data collection, the baby had been exposed to LGP for 1 month, and received a cochlear implant 15 days before coming to the Lab.<xref ref-type="fn" rid="n3">3</xref></p>
<p>Informed written consent was obtained from caregivers prior to data collection. The study was conducted in accordance with the Declaration of Helsinki and the guidelines of the European Union Agency for Fundamental Rights and approved by the Ethics Committee of the School of Arts and Humanities (University of Lisbon) (4_CEI2023, 11 May, 2023).</p>
</sec>
<sec>
<title>2.2. Materials</title>
<p>Stimuli consisted of one-pseudosign-utterances. Eight pseudosigns were created with the help of a native adult LGP signer, by taking existing LGP signs and changing one parameter (location), resulting in possible but non-existing signs in LGP. As yes-no questions are marked by facial cues in LGP, utmost care was taken to create pseudosigns that avoid the face region, i.e., articulated by the manual articulators in body regions below the chin. This ensured a clear segregation between the regions for pseudosign articulation and intonation marking, respectively, enabling the identification of non-overlapping areas of interest (AOI).</p>
<p>Each of the eight pseudosigns was produced with the typical LGP statement intonation (i.e., without facial marking) and question intonation (i.e., with facial marking, namely eyebrow lowering together with the head (nod) up-down) by a female native LGP signer. The production of the resulting sixteen one-pseudosign-utterances was video-recorded with a professional JVC camera, model GY-HM11E, in .mov format (4:3 aspect ratio, 25 fps). The native signer was asked to remain in the same place during the recordings, in order to allow smooth transitions when the video recordings were combined into sequences. The recorded material was edited in the Shotcut software. Each recorded production of a one-pseudosign-utterance was first extracted to a separate file. The 8 one-pseudosign-utterances were then combined into single sequences of 4 one-pseudosign-utterances: two sequences for statement intonation and another two for question intonation (see supplementary materials at <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://labfon.letras.ulisboa.pt/babylab/sign_language">https://labfon.letras.ulisboa.pt/babylab/sign_language</ext-link>). Minor lighting and colour adjustments were made to the individual recordings so that the video sequences had similar characteristics, avoiding as much as possible exposing infants to distracting visual contrasts (<xref ref-type="fig" rid="F1">Figure 1</xref>).</p>
<fig id="F1">
<label>Figure 1</label>
<caption>
<p>Illustration of the intonation contrast, taken as still frames of the video sequences: Statement in the left panel; Question in the right panel.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="jpl-6-22921-g1.png"/>
</fig>
<p>Different pseudosign sequences were used in the habituation and test phases. Each infant only saw one version (either the statement or the question) of each one-pseudosign-utterance.</p>
</sec>
<sec>
<title>2.3. Procedure</title>
<p>Following Frota et al.&#8217;s (<xref ref-type="bibr" rid="B17">2014</xref>) study, we used a modified version of the habituation paradigm, with the sound stimuli replaced by visual stimuli. Importantly, the data were collected using the EyeLink 1000 Plus eye tracker instead of the LOOK software, which records the duration of a gaze but not its location.</p>
<p>Infants were seated on their caregivers&#8217; laps facing a 22-inch ASUS monitor at a distance of 50 to 60 cm. The experiment started with a short video to attract infants&#8217; attention to the screen, while the researcher sat at a distance, monitoring eye movements for calibration and subsequent data collection. Parents were instructed not to interact with the infant, nor to give any bodily cues that could affect the infant&#8217;s response, such as pointing.</p>
<p>Infants were calibrated with a 3-point calibration using small moving circles as calibration points. The same procedure was used for validation. Calibration was successful in both infants, with less than 0.56&#176; of error. After calibration and validation, a coloured image (attention getter) appeared. If infants looked at it for two consecutive seconds, a video trial started. Each video trial consisted of a video file made up of a sequence of 4 pseudosigns, as described above (section 2.2). The 4 pseudosigns, each corresponding to an intonational phrase, created a 16-second trial for each intonation pattern. Because our single-pseudosign intonational phrases were longer than the single-pseudoword intonational phrases in Frota et al.&#8217;s (<xref ref-type="bibr" rid="B17">2014</xref>) study, it was not possible to match both the trial length (16 secs) and the number of pseudosigns per trial to the number of pseudowords per trial in Frota et al.&#8217;s (<xref ref-type="bibr" rid="B17">2014</xref>) auditory experiment (i.e., 8 pseudowords). To ensure that the total duration of the experiment was similar across studies and, crucially, not excessively long for young infants, we kept the trial length identical in the two experiments and reduced the number of pseudosigns per trial to four.</p>
<p>Infants were presented with two phases: the habituation phase and the test phase. In this pilot study, infants were habituated to the declarative sequence and tested with the same test order: a declarative followed by an interrogative sequence (i.e., same-switch order). In the habituation phase, infants were habituated to one sequence of 4 one-pseudosign intonational phrases (statement intonation) until they reached a pre-defined habituation criterion. As in Frota et al.&#8217;s (<xref ref-type="bibr" rid="B17">2014</xref>) auditory experiment, we used a sliding window comparing the first four trials with the last four trials, with the criterion that the average looking time to the last four habituation trials should be less than 60% of the average looking time to the first four habituation trials. Thus, the habituation phase had a minimum of 8 trials; the maximum number of habituation trials was set to 20. After infants were habituated to the declarative intonation pattern, the test phase was presented, comprising two trials: one identical to the habituation trial (same) and one different from it (switch). <xref ref-type="fig" rid="F2">Figure 2</xref> illustrates the experimental design. If infants were able to discriminate the intonation contrast in LGP, looking time should be longer for the switch trials than for the same trials.</p>
<fig id="F2">
<label>Figure 2</label>
<caption>
<p>An illustration of the experimental design.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="jpl-6-22921-g2.png"/>
</fig>
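The sliding-window habituation criterion described above can be sketched in code. This is a minimal illustration, not part of the study&#8217;s actual software; the function name and data layout are hypothetical, and per-trial looking times are assumed to be in seconds:

```python
def habituation_reached(looking_times, ratio=0.60, min_trials=8, max_trials=20):
    """Sliding-window habituation check, applied after each trial.

    looking_times: per-trial looking times (seconds), in presentation order.
    Returns True once the average of the last four trials falls below
    `ratio` times the average of the first four trials, subject to a
    minimum of 8 and a maximum of 20 habituation trials.
    """
    n = len(looking_times)
    if n < min_trials:
        return False  # too few trials to apply the criterion
    if n >= max_trials:
        return True   # trial cap reached: habituation phase ends
    first_four = sum(looking_times[:4]) / 4
    last_four = sum(looking_times[-4:]) / 4
    return last_four < ratio * first_four
```

For example, after eight trials of [12, 11, 12, 13, 6, 5, 4, 5] seconds, the last-four average (5.0 s) is below 60% of the first-four average (12.0 s, threshold 7.2 s), so the criterion is met at the 8-trial minimum.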
<p>The stimuli were presented using the Experiment Builder software (version 2.3.38), and each video trial was presented until the end (fixed trial presentation). The EyeLink software was used to record and monitor infants&#8217; looking behavior, while data extraction was done in DataViewer (version 3.2.1).</p>
<p>Two areas of interest (AOI) were defined to compare infants&#8217; looking time to manuals and nonmanuals. For the manual AOI, we considered the torso/body area (from mid-neck to hips), where all hand movements were expressed. The nonmanual AOI covered the head/face region.</p>
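The AOI-based measure can be illustrated with a small sketch. The class and function names, coordinates, and data layout below are hypothetical (static screen-pixel rectangles standing in for the dynamic AOIs); the sketch shows how fixation durations are assigned to the two AOIs and converted into proportions of looking time:

```python
from dataclasses import dataclass

@dataclass
class AOI:
    """Rectangular area of interest in screen coordinates."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def looking_proportions(fixations, aois):
    """fixations: iterable of (x, y, duration) tuples.
    Returns each AOI's share of the total fixation time."""
    per_aoi = {aoi.name: 0.0 for aoi in aois}
    total = 0.0
    for x, y, duration in fixations:
        total += duration
        for aoi in aois:
            if aoi.contains(x, y):
                per_aoi[aoi.name] += duration
                break  # AOIs are assumed non-overlapping
    if total == 0.0:
        return per_aoi
    return {name: t / total for name, t in per_aoi.items()}
```

With a head AOI above a body AOI, three fixations of 1.5 s and 2.0 s on the face and 0.5 s on the torso yield proportions of .875 (head) and .125 (body), the kind of measure reported in the Results section.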
</sec>
</sec>
<sec>
<title>3. Results</title>
<p>The habituation phase was analyzed first. Overall, average looking times to the first four trials (<italic>M</italic> = 11.11s; <italic>SD</italic> = 4.96s) were longer than average looking times to the last four trials (<italic>M</italic> = 4.20s; <italic>SD</italic> = 3.73s), irrespective of the participants&#8217; deafness/hearing condition (deaf: first four trials, <italic>M</italic> = 14.06s; <italic>SD</italic> = 0.38s; last four trials, <italic>M</italic> = 5.92s; <italic>SD</italic> = 2.77s; hearing: first four trials, <italic>M</italic> = 8.16s; <italic>SD</italic> = 5.84s; last four trials, <italic>M</italic> = 2.49s; <italic>SD</italic> = 4.11s). A paired t-test indicated a significant difference for the deaf infant (<italic>t</italic>(3) = 5.98, <italic>p</italic> = .009) and a marginal difference for the monolingual hearing infant (<italic>t</italic>(3) = 2.65, <italic>p</italic> = .07). Given the small sample, it is not possible to determine whether there is a statistical difference between the two participants.</p>
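The paired t-test above compares per-trial looking times within an infant (first four vs. last four habituation trials, hence df = 3). A minimal stdlib-only sketch of the computation; the function name is hypothetical and the trial values in the usage note are illustrative, not the study&#8217;s data:

```python
import math

def paired_t(sample_a, sample_b):
    """Paired t statistic: mean of the pairwise differences divided by
    its standard error. Returns (t, degrees of freedom)."""
    assert len(sample_a) == len(sample_b)
    diffs = [a - b for a, b in zip(sample_a, sample_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # sample variance of the differences (n - 1 denominator)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n), n - 1
```

For instance, pairing four first-trial looking times [11, 12, 13, 14] with four last-trial times [10, 10, 10, 10] gives t &#8776; 3.87 with df = 3; the p-value is then read from the t distribution with 3 degrees of freedom.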
<p>In the test phase, although this could not be confirmed with statistical tests, both infants showed similar looking times to the switch and same test trials, suggesting an absence of discrimination (accumulated looking time, hearing: switch = 6.5s, same = 7.1s; deaf: switch = 4.1s, same = 5.3s).</p>
<p>Even though the overall looking time was not informative, the eye-tracking methodology allows us to assess infants&#8217; looking patterns. Specifically, we defined two dynamic AOIs: the head and the body. The head AOI comprises visual prosodic cues and thus would indicate infants&#8217; interest in these nonmanual cues. The body AOI captures all hand movements of the signer during the video passages, and would indicate infants&#8217; interest in the manual cues. We analyzed the proportion of looking time to the two AOIs in both the habituation and test phases. In the habituation phase, we observed an interesting pattern (<xref ref-type="fig" rid="F3">Figure 3</xref>): the deaf infant looked more at the head (i.e., the nonmanuals; <italic>M</italic> = .78) than at the manuals (<italic>M</italic> = .21), whereas the hearing infant showed the opposite pattern (nonmanuals <italic>M</italic> = .33, manuals <italic>M</italic> = .66).</p>
<fig id="F3">
<label>Figure 3</label>
<caption>
<p>Proportion of looking time to the head versus manual AOIs during the habituation phase.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="jpl-6-22921-g3.png"/>
</fig>
<p>In the test phase, we observed a similar pattern in both test trials. For the deaf infant, looking time was longer to the head (same = 0.70; switch = 0.72) than to the manuals (same = 0.29; switch = 0.27), whereas for the hearing infant attention was greater to the manuals (same = 0.77; switch = 0.70) than to the head (same = 0.22; switch = 0.29).</p>
</sec>
<sec>
<title>4. Discussion</title>
<p>In this study, we described what we believe to be the first adaptation of the habituation paradigm to study early discrimination of intonation contrasts in a sign language (a modification of <xref ref-type="bibr" rid="B17">Frota et al.&#8217;s 2014</xref> procedure). The pilot study indicates that the design may be suitable for investigating early perception of visual prosody contrasts, both in deaf and in hearing infants. This study is part of a larger ongoing project and data collection with a larger sample is currently in progress.</p>
<p>To develop our experimental approach to the study of early perception of sign language intonation contrasts by deaf and hearing infants, a number of methodological adaptations were introduced, from the creation of one-pseudosign-intonational phrases to the production of video trials used in a visual habituation design implemented with eye-tracking.</p>
<p>In our pilot experiment, we examined the perception of one deaf infant (with limited access to LGP, i.e., one month prior to the time of data collection, and to spoken language, i.e., 15 days after cochlear implant) and one monolingual hearing infant (never exposed to a sign language). Both infants completed the experiment without difficulty. Besides testing our experimental design, the pilot experiment suggested some interesting similarities and differences in the gaze patterns of the two participants. Although still very preliminary, our results suggest that both infants were interested in the sign language input, despite the late and limited exposure to LGP in the case of the deaf infant, and the lack of prior exposure to a sign language in the case of the hearing infant. This reinforces the view that infants have an early sensitivity to the visual cues exploited in linguistic contrasts in sign languages (<xref ref-type="bibr" rid="B39">Stone et al., 2017</xref>). The fact that the hearing infant, exposed to spoken EP, was attentive to the sign language productions even though sign language is not their mother tongue, as also found in recent studies with ASL input, agrees with the hypothesis that infants in general are receptive to visual language cues (<xref ref-type="bibr" rid="B4">Bosworth et al., 2022</xref>; <xref ref-type="bibr" rid="B39">Stone et al., 2017</xref>).</p>
<p>In our pilot study, we found no noticeable differences in looking times between <italic>same</italic> and <italic>switch</italic> trials in either the deaf or the hearing infant. In particular, we did not find longer looking times in the switch condition, which would indicate discrimination under the habituation paradigm. This may suggest that, despite being attracted to movement and gestures, neither infant discriminated the LGP intonational contrast. In our view, this is likely because neither of the infants tested was exposed to (sufficient and early) LGP input. In the case of our deaf participant, exposure to LGP was limited and delayed, starting only one month before data collection. We must note, nonetheless, that some manual parameters have been shown to be discriminated early on by hearing infants not exposed to a sign language (<xref ref-type="bibr" rid="B21">Lillo-Martin &amp; Henner, 2021</xref>; <xref ref-type="bibr" rid="B22">Lutzenberger et al., 2024</xref>; <xref ref-type="bibr" rid="B43">Wilbourn &amp; Casasola, 2007</xref>). Our results may therefore suggest that (these) nonmanual cues in sign language are less accessible to hearing infants than manual gestures.</p>
<p>However, beyond the absence of discrimination, interesting differences in looking patterns emerged between the deaf and the hearing infant. The deaf infant attended more to the head than to the manuals, while the opposite pattern was found in the hearing infant. It is known that nonmanuals, mostly conveyed in the head region, are the typical carriers of prosodic information in sign languages (e.g., <xref ref-type="bibr" rid="B12">Cruz et al., 2019</xref>; <xref ref-type="bibr" rid="B14">Dachkovsky &amp; Sandler, 2009</xref>; <xref ref-type="bibr" rid="B27">Nespor &amp; Sandler, 1999</xref>). The preference for the head region manifested by the deaf infant may thus be related to an emerging sensitivity to the LGP prosodic cues, though not yet sufficiently developed to allow discrimination of the prosodic contrast between statement intonation (signaled by neutral expressions) and yes/no question intonation (marked with eyebrow lowering and head movement). In addition, the deaf infant comes from a hearing family and has been exposed since birth to visual input associated with speech production, which includes the mouth area, as well as to the visual prosody that characterizes EP, conveyed by cues in the face region (<xref ref-type="bibr" rid="B11">Cruz et al., 2017</xref>). In other words, for the deaf infant the face region is highly informative, for both vocabulary-related segmental information and prosody, which may account for the clear preference for the face area.</p>
<p>The strong preference for the body area by the hearing infant, in turn, seems to suggest a bias towards the articulatory area of the signing space. If that is the case, less attention to nonmanuals might be a feature characterizing hearing infants who are only exposed to spoken languages, but not deaf infants. Another possibility is that the hearing infant, unlike the deaf infant, found the gestures from the articulatory area of the signing space unusual and new, and was more attracted to them than to the gestures from the head/face region, since eyebrow, eye, and head movements are used in both spoken and sign languages to convey prosody (<xref ref-type="bibr" rid="B10">Cruz &amp; Frota, 2025</xref>).</p>
<p>Beyond proposing an experimental paradigm suitable for investigating early perception of visual prosody, and in particular the discrimination of prosodic contrasts, in both deaf and hearing infants, the very preliminary nature of the results from our pilot experiment calls for further research. Such research should expand the pool of participants and their profiles to include adequate sample sizes across groups of unimodal and bimodal deaf infants (including deaf infants acquiring LGP in LGP homes), as well as monolingual and bilingual hearing infants, from younger and older age groups in the first year of life. Moreover, the advantages of eye tracking should be further explored to examine different areas of interest within the face, namely the eyes and mouth regions, given that attention to the eyes and mouth has been related to language development in hearing infants (e.g., <xref ref-type="bibr" rid="B9">Cruz et al., 2020</xref>; <xref ref-type="bibr" rid="B20">Lewkowicz &amp; Hansen-Tift, 2012</xref>; <xref ref-type="bibr" rid="B30">Pejovic et al., 2021</xref>; <xref ref-type="bibr" rid="B34">Pons et al., 2019</xref>).</p>
<p>In conclusion, this study put forward an experimental design to investigate, for the first time, the early perception of intonation contrasts in a sign language, and presented preliminary findings suggesting similarities and differences in deaf and hearing infants&#8217; sensitivity to the LGP statement/yes-no question prosodic contrast. Using the experimental method described here, in future work we expect to be able to unravel the developmental path of visual prosody discrimination in LGP and determine whether the native language modality affects early discrimination abilities of language-related visual cues, adding to the understanding of prosodic development in general, and across language modalities.</p>
</sec>
</body>
<back>
<fn-group>
<fn id="n1"><p><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.portugal.gov.pt/pt/gc23/comunicacao/noticia?i=maos-que-falam-hoje-e-dia-nacional-da-lingua-gestual-portuguesa">https://www.portugal.gov.pt/pt/gc23/comunicacao/noticia?i=maos-que-falam-hoje-e-dia-nacional-da-lingua-gestual-portuguesa</ext-link> (accessed February 28, 2025).</p></fn>
<fn id="n2"><p>Portuguese Constitution, revised 1997 (article 74, no. 2, h) &#8211; cf. <italic>Constitui&#231;&#227;o da Rep&#250;blica Portuguesa</italic>. Lisboa: Texto, 2016.</p></fn>
<fn id="n3"><p>A second hearing infant (female, 6 months of age) was tested, but the data were excluded due to sleepiness and data loss.</p></fn>
</fn-group>
<sec>
<title>Acknowledgements</title>
<p>We thank the infants and caregivers for participating in this study, Agrupamento de Escolas Quinta de Marrocos for help with the deaf participant recruitment, as well as Helena Carmo, for producing the visual stimuli videorecorded for our experiment.</p>
<p>This research was supported by the Portuguese Foundation for Science and Technology (UID/2014/2020, UID/00214: Center of Linguistics of the University of Lisbon).</p>
</sec>
<sec>
<title>Competing Interests</title>
<p>The authors have no competing interests to declare.</p>
</sec>
<ref-list>
<ref id="B1"><mixed-citation publication-type="book"><string-name><surname>Amaral</surname>, <given-names>M. A.</given-names></string-name>, <string-name><surname>Coutinho</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Martins</surname>, <given-names>M. R. D.</given-names></string-name> (<year>1994</year>). <source>Para uma Gram&#225;tica da L&#237;ngua Gestual Portuguesa</source> [Towards a grammar of Portuguese Sign Language]. <publisher-name>Caminho</publisher-name>.</mixed-citation></ref>
<ref id="B2"><mixed-citation publication-type="journal"><string-name><surname>Anderson</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Reilly</surname>, <given-names>J. S.</given-names></string-name> (<year>2002</year>). <article-title>The MacArthur Communicative Development Inventory: Normative data for American Sign Language</article-title>. <source>Journal of Deaf Studies and Deaf Education</source>, <volume>7</volume>(<issue>2</issue>), <fpage>83</fpage>&#8211;<lpage>106</lpage>. <pub-id pub-id-type="doi">10.1093/deafed/7.2.83</pub-id></mixed-citation></ref>
<ref id="B3"><mixed-citation publication-type="thesis"><string-name><surname>Bettencourt</surname>, <given-names>M. F.</given-names></string-name> (<year>2015</year>). <source>A ordem de palavras na L&#237;ngua Gestual Portuguesa: Breve estudo comparativo com o Portugu&#234;s e outras L&#237;nguas Gestuais</source> [Word order in Portuguese Sign Language: a short comparative study between Portuguese and other Sign Languages]. Unpublished MA dissertation, <publisher-name>Universidade do Porto</publisher-name>. U.Porto repository. <uri>https://sigarra.up.pt/faup/pt/pub_geral.pub_view?pi_pub_base_id=37071</uri></mixed-citation></ref>
<ref id="B4"><mixed-citation publication-type="journal"><string-name><surname>Bosworth</surname>, <given-names>R. G.</given-names></string-name>, <string-name><surname>Hwang</surname>, <given-names>S. O.</given-names></string-name>, &amp; <string-name><surname>Corina</surname>, <given-names>D. P.</given-names></string-name> (<year>2022</year>). <article-title>Visual attention for linguistic and non-linguistic body actions in non-signing and native signing children</article-title>. <source>Frontiers in Psychology</source>, <volume>13</volume>, <elocation-id>951057</elocation-id>. <pub-id pub-id-type="doi">10.3389/fpsyg.2022.951057</pub-id></mixed-citation></ref>
<ref id="B5"><mixed-citation publication-type="journal"><string-name><surname>Brentari</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Crossley</surname>, <given-names>L.</given-names></string-name> (<year>2002</year>). <article-title>Prosody on the hands and face: Evidence from American Sign Language</article-title>. <source>Sign Language &amp; Linguistics</source>, <volume>5</volume>(<issue>2</issue>), <fpage>105</fpage>&#8211;<lpage>130</lpage>. <pub-id pub-id-type="doi">10.1075/sll.5.2.03bre</pub-id></mixed-citation></ref>
<ref id="B6"><mixed-citation publication-type="journal"><string-name><surname>Brentari</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Falk</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Wolford</surname>, <given-names>G.</given-names></string-name> (<year>2015</year>). <article-title>The acquisition of American Sign Language prosody</article-title>. <source>Language</source>, <volume>91</volume>(<issue>1</issue>), <fpage>144</fpage>&#8211;<lpage>168</lpage>. <pub-id pub-id-type="doi">10.1353/lan.2015.0042</pub-id></mixed-citation></ref>
<ref id="B7"><mixed-citation publication-type="book"><string-name><surname>Carmo</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Martins</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Morgado</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Estanqueiro</surname>, <given-names>P.</given-names></string-name> (<year>2007</year>). <source>Programa curricular de L&#237;ngua Gestual Portuguesa: Educa&#231;&#227;o pr&#233;-escolar e ensino b&#225;sico</source> [<italic>Syllabus of Portuguese Sign Language: preschool and primary school education</italic>]. <publisher-name>Minist&#233;rio da Educa&#231;&#227;o/Dire&#231;&#227;o Geral da Inova&#231;&#227;o e de Desenvolvimento Curricular</publisher-name>.</mixed-citation></ref>
<ref id="B8"><mixed-citation publication-type="journal"><string-name><surname>Crasborn</surname>, <given-names>O.</given-names></string-name>, <string-name><surname>van der Kooij</surname>, <given-names>E.</given-names></string-name>, <string-name><surname>Waters</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Woll</surname>, <given-names>B.</given-names></string-name>, &amp; <string-name><surname>Mesch</surname>, <given-names>J.</given-names></string-name> (<year>2008</year>). <article-title>Frequency distribution and spreading behavior of different types of mouth actions in three sign languages</article-title>. <source>Sign Language &amp; Linguistics</source>, <volume>11</volume>, <fpage>45</fpage>&#8211;<lpage>67</lpage>. <pub-id pub-id-type="doi">10.1075/sll.11.1.04cra</pub-id></mixed-citation></ref>
<ref id="B9"><mixed-citation publication-type="journal"><string-name><surname>Cruz</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Butler</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Severino</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Filipe</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Frota</surname>, <given-names>S.</given-names></string-name> (<year>2020</year>). <article-title>Eyes or mouth? Exploring eye gaze patterns and their relation with early stress perception in European Portuguese</article-title>. <source>Journal of Portuguese Linguistics</source>, <volume>19</volume>(<issue>1</issue>), <elocation-id>4</elocation-id>. <pub-id pub-id-type="doi">10.5334/jpl.240</pub-id></mixed-citation></ref>
<ref id="B10"><mixed-citation publication-type="journal"><string-name><surname>Cruz</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Frota</surname>, <given-names>S.</given-names></string-name> (<year>2025</year>). <article-title>&#8220;Talking heads&#8221; in Portuguese sign and spoken languages</article-title>. <source>Language and Cognition</source>, <volume>17</volume>, <elocation-id>e18</elocation-id>. <pub-id pub-id-type="doi">10.1017/langcog.2024.63</pub-id></mixed-citation></ref>
<ref id="B11"><mixed-citation publication-type="journal"><string-name><surname>Cruz</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Swerts</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Frota</surname>, <given-names>S.</given-names></string-name> (<year>2017</year>). <article-title>The role of intonation and visual cues in the perception of sentence types: Evidence from European Portuguese varieties</article-title>. <source>Laboratory Phonology: Journal of the Association for Laboratory Phonology</source>, <volume>8</volume>(<issue>1</issue>), <elocation-id>23</elocation-id>. <pub-id pub-id-type="doi">10.5334/labphon.110</pub-id></mixed-citation></ref>
<ref id="B12"><mixed-citation publication-type="journal"><string-name><surname>Cruz</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Swerts</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Frota</surname>, <given-names>S.</given-names></string-name> (<year>2019</year>). <article-title>Do visual cues to interrogativity vary between language modalities? Evidence from spoken Portuguese and Portuguese Sign Language</article-title>. <source>Proceedings of the 15th International Conference on Auditory-Visual Speech Processing</source> (pp. <fpage>1</fpage>&#8211;<lpage>5</lpage>). <pub-id pub-id-type="doi">10.21437/AVSP.2019-1</pub-id></mixed-citation></ref>
<ref id="B13"><mixed-citation publication-type="journal"><string-name><surname>Culbertson</surname>, <given-names>S. R.</given-names></string-name>, <string-name><surname>Dillon</surname>, <given-names>M. T.</given-names></string-name>, <string-name><surname>Richter</surname>, <given-names>M. E.</given-names></string-name>, <string-name><surname>Brown</surname>, <given-names>K. D.</given-names></string-name>, <string-name><surname>Anderson</surname>, <given-names>M. R.</given-names></string-name>, <string-name><surname>Hancock</surname>, <given-names>S. L.</given-names></string-name>, &amp; <string-name><surname>Park</surname>, <given-names>L. R.</given-names></string-name> (<year>2022</year>). <article-title>Younger age at cochlear implant activation results in improved auditory skill development for children with congenital deafness</article-title>. <source>Journal of Speech, Language, and Hearing Research</source>, <volume>65</volume>(<issue>9</issue>), <fpage>3539</fpage>&#8211;<lpage>3547</lpage>. <pub-id pub-id-type="doi">10.1044/2022_JSLHR-22-00039</pub-id></mixed-citation></ref>
<ref id="B14"><mixed-citation publication-type="journal"><string-name><surname>Dachkovsky</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>Sandler</surname>, <given-names>W.</given-names></string-name> (<year>2009</year>). <article-title>Visual intonation in the prosody of a sign language</article-title>. <source>Language and Speech</source>, <volume>52</volume>(<issue>2&#8211;3</issue>), <fpage>287</fpage>&#8211;<lpage>314</lpage>. <pub-id pub-id-type="doi">10.1177/0023830909103175</pub-id></mixed-citation></ref>
<ref id="B15"><mixed-citation publication-type="journal"><string-name><surname>Elliott</surname>, <given-names>E. A.</given-names></string-name>, &amp; <string-name><surname>Jacobs</surname>, <given-names>A. M.</given-names></string-name> (<year>2013</year>). <article-title>Facial expressions, emotions, and sign languages</article-title>. <source>Frontiers in Psychology</source>, <volume>4</volume>, <elocation-id>115</elocation-id>. <pub-id pub-id-type="doi">10.3389/fpsyg.2013.00115</pub-id></mixed-citation></ref>
<ref id="B16"><mixed-citation publication-type="book"><string-name><surname>Fenlon</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Cormier</surname>, <given-names>K.</given-names></string-name>, &amp; <string-name><surname>Brentari</surname>, <given-names>D.</given-names></string-name> (<year>2018</year>). <chapter-title>The phonology of sign languages</chapter-title>. In <string-name><given-names>S. J.</given-names> <surname>Hannahs</surname></string-name> &amp; <string-name><given-names>A.</given-names> <surname>Bosch</surname></string-name> (Eds.), <source>The Routledge Handbook of Phonological Theory</source> (pp. <fpage>453</fpage>&#8211;<lpage>475</lpage>). <publisher-name>Routledge</publisher-name>. <pub-id pub-id-type="doi">10.4324/9781315675428-16</pub-id></mixed-citation></ref>
<ref id="B17"><mixed-citation publication-type="journal"><string-name><surname>Frota</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Butler</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Vig&#225;rio</surname>, <given-names>M.</given-names></string-name> (<year>2014</year>). <article-title>Infants&#8217; perception of intonation: Is it a statement or a question?</article-title>. <source>Infancy</source>, <volume>19</volume>(<issue>2</issue>), <fpage>194</fpage>&#8211;<lpage>213</lpage>. <pub-id pub-id-type="doi">10.1111/infa.12037</pub-id></mixed-citation></ref>
<ref id="B18"><mixed-citation publication-type="book"><string-name><surname>Gervain</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Christophe</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Mazuka</surname>, <given-names>R.</given-names></string-name> (<year>2020</year>). <chapter-title>Prosodic bootstrapping</chapter-title>. In <string-name><given-names>C.</given-names> <surname>Gussenhoven</surname></string-name> &amp; <string-name><given-names>A.</given-names> <surname>Chen</surname></string-name> (Eds.), <source>The Oxford Handbook of Prosody</source> (pp. <fpage>563</fpage>&#8211;<lpage>573</lpage>). <publisher-name>Oxford University Press</publisher-name>. <pub-id pub-id-type="doi">10.1093/oxfordhb/9780198832232.013.36</pub-id></mixed-citation></ref>
<ref id="B19"><mixed-citation publication-type="journal"><string-name><surname>Laing</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Bergelson</surname>, <given-names>E.</given-names></string-name> (<year>2020</year>). <article-title>From babble to words: Infants&#8217; early productions match words and objects in their environment</article-title>. <source>Cognitive Psychology</source>, <volume>122</volume>, <elocation-id>101308</elocation-id>. <pub-id pub-id-type="doi">10.1016/j.cogpsych.2020.101308</pub-id></mixed-citation></ref>
<ref id="B20"><mixed-citation publication-type="journal"><string-name><surname>Lewkowicz</surname>, <given-names>D. J.</given-names></string-name>, &amp; <string-name><surname>Hansen-Tift</surname>, <given-names>A. M.</given-names></string-name> (<year>2012</year>). <article-title>Infants deploy selective attention to the mouth of a talking face when learning speech</article-title>. <source>Proceedings of the National Academy of Sciences of the United States of America</source>, <volume>109</volume>(<issue>4</issue>), <fpage>1431</fpage>&#8211;<lpage>1436</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1114783109</pub-id></mixed-citation></ref>
<ref id="B21"><mixed-citation publication-type="journal"><string-name><surname>Lillo-Martin</surname>, <given-names>D.</given-names></string-name>, &amp; <string-name><surname>Henner</surname>, <given-names>J.</given-names></string-name> (<year>2021</year>). <article-title>Acquisition of sign languages</article-title>. <source>Annual Review of Linguistics</source>, <volume>7</volume>, <fpage>395</fpage>&#8211;<lpage>419</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-linguistics-043020-092357</pub-id></mixed-citation></ref>
<ref id="B22"><mixed-citation publication-type="journal"><string-name><surname>Lutzenberger</surname>, <given-names>H.</given-names></string-name>, <string-name><surname>Casillas</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Fikkert</surname>, <given-names>P.</given-names></string-name>, <string-name><surname>Crasborn</surname>, <given-names>O.</given-names></string-name>, &amp; <string-name><surname>de Vos</surname>, <given-names>C.</given-names></string-name> (<year>2024</year>). <article-title>More than looks: Exploring methods to test phonological discrimination in the sign language Kata Kolok</article-title>. <source>Language Learning and Development</source>, <volume>20</volume>(<issue>4</issue>), <fpage>297</fpage>&#8211;<lpage>323</lpage>. <pub-id pub-id-type="doi">10.1080/15475441.2023.2277472</pub-id></mixed-citation></ref>
<ref id="B23"><mixed-citation publication-type="journal"><string-name><surname>Lynce</surname>, <given-names>S.</given-names></string-name>, <string-name><surname>Moita</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Freitas</surname>, <given-names>M. J.</given-names></string-name>, <string-name><surname>Santos</surname>, <given-names>M. E.</given-names></string-name>, &amp; <string-name><surname>Mineiro</surname>, <given-names>A.</given-names></string-name> (<year>2019</year>). <article-title>Phonological development in Portuguese deaf children with cochlear implants: Preliminary study</article-title>. <source>Revista de Logopedia, Foniatr&#237;a y Audiolog&#237;a</source>, <volume>39</volume>(<issue>3</issue>), <fpage>115</fpage>&#8211;<lpage>128</lpage>. <pub-id pub-id-type="doi">10.1016/j.rlfa.2019.03.002</pub-id></mixed-citation></ref>
<ref id="B24"><mixed-citation publication-type="book"><string-name><surname>Meier</surname>, <given-names>R. P.</given-names></string-name> (<year>2016</year>). <chapter-title>Sign language acquisition</chapter-title>. In <source>Oxford Handbook Topics in Linguistics</source> (online edn.). <publisher-name>Oxford University Press</publisher-name>. <pub-id pub-id-type="doi">10.1093/oxfordhb/9780199935345.013.19</pub-id></mixed-citation></ref>
<ref id="B25"><mixed-citation publication-type="journal"><string-name><surname>Mitchell</surname>, <given-names>R. E.</given-names></string-name>, &amp; <string-name><surname>Karchmer</surname>, <given-names>M. A.</given-names></string-name> (<year>2004</year>). <article-title>Chasing the mythical ten percent: Parental hearing status of deaf and hard of hearing students in the United States</article-title>. <source>Sign Language Studies</source>, <volume>4</volume>(<issue>2</issue>), <fpage>138</fpage>&#8211;<lpage>163</lpage>. <pub-id pub-id-type="doi">10.1353/sls.2004.0005</pub-id></mixed-citation></ref>
<ref id="B26"><mixed-citation publication-type="thesis"><string-name><surname>Moita</surname>, <given-names>M.</given-names></string-name> (<year>2022</year>). <source>A aquisi&#231;&#227;o de depend&#234;ncias sint&#225;ticas com movimento em crian&#231;as surdas com implante coclear: Um d&#233;fice de movimento?</source>. [The acquisition of syntactic dependencies with movement in hearing-impaired children with cochlear implant: A deficit on movement?] Unpublished PhD dissertation, <publisher-name>Universidade Nova de Lisboa</publisher-name>.</mixed-citation></ref>
<ref id="B27"><mixed-citation publication-type="journal"><string-name><surname>Nespor</surname>, <given-names>M.</given-names></string-name>, &amp; <string-name><surname>Sandler</surname>, <given-names>W.</given-names></string-name> (<year>1999</year>). <article-title>Prosody in Israeli Sign Language</article-title>. <source>Language and Speech</source>, <volume>42</volume>, <fpage>143</fpage>&#8211;<lpage>176</lpage>. <pub-id pub-id-type="doi">10.1177/00238309990420020201</pub-id></mixed-citation></ref>
<ref id="B28"><mixed-citation publication-type="journal"><string-name><surname>Nicholas</surname>, <given-names>J. G.</given-names></string-name>, &amp; <string-name><surname>Geers</surname>, <given-names>A. E.</given-names></string-name> (<year>2006</year>). <article-title>Effects of early auditory experience on the spoken language of deaf children at 3 years of age</article-title>. <source>Ear and Hearing</source>, <volume>27</volume>(<issue>3</issue>), <fpage>286</fpage>&#8211;<lpage>298</lpage>. <pub-id pub-id-type="doi">10.1097/01.aud.0000215973.76912.c6</pub-id></mixed-citation></ref>
<ref id="B29"><mixed-citation publication-type="journal"><string-name><surname>Oliveira</surname>, <given-names>C.</given-names></string-name>, <string-name><surname>Machado</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Zenha</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Azevedo</surname>, <given-names>L.</given-names></string-name>, <string-name><surname>Monteiro</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Bicho</surname>, <given-names>A.</given-names></string-name> (<year>2019</year>). <article-title>Congenital or early acquired deafness: An overview of the Portuguese situation, from diagnosis to follow-up</article-title>. <source>Acta M&#233;dica Portuguesa</source>, <volume>32</volume>(<issue>12</issue>), <fpage>767</fpage>&#8211;<lpage>775</lpage>. <uri>https://www.actamedicaportuguesa.com/revista/index.php/amp/article/view/11880</uri></mixed-citation></ref>
<ref id="B30"><mixed-citation publication-type="journal"><string-name><surname>Pejovic</surname>, <given-names>J.</given-names></string-name>, <string-name><surname>Cruz</surname>, <given-names>M.</given-names></string-name>, <string-name><surname>Severino</surname>, <given-names>C.</given-names></string-name>, &amp; <string-name><surname>Frota</surname>, <given-names>S.</given-names></string-name> (<year>2021</year>). <article-title>Early visual attention abilities and audiovisual speech processing in 5&#8211;7 month-old Down syndrome and typically developing infants</article-title>. <source>Brain Sciences</source>, <volume>11</volume>(<issue>7</issue>), <elocation-id>939</elocation-id>, <italic>Special Issue Down Syndrome: Neuropsychological Phenotype across the Lifespan</italic>. <pub-id pub-id-type="doi">10.3390/brainsci11070939</pub-id></mixed-citation></ref>
<ref id="B31"><mixed-citation publication-type="journal"><string-name><surname>Petitto</surname>, <given-names>L. A.</given-names></string-name>, &amp; <string-name><surname>Marentette</surname>, <given-names>P. F.</given-names></string-name> (<year>1991</year>). <article-title>Babbling in the manual mode: Evidence for the ontogeny of language</article-title>. <source>Science</source>, <volume>251</volume>(<issue>5000</issue>), <fpage>1493</fpage>&#8211;<lpage>1496</lpage>. <pub-id pub-id-type="doi">10.1126/science.2006424</pub-id></mixed-citation></ref>
<ref id="B32"><mixed-citation publication-type="book"><string-name><surname>Pfau</surname>, <given-names>R.</given-names></string-name>, &amp; <string-name><surname>Quer</surname>, <given-names>J.</given-names></string-name> (<year>2010</year>). <chapter-title>Nonmanuals: Their grammatical and prosodic roles</chapter-title>. In <string-name><given-names>D.</given-names> <surname>Brentari</surname></string-name> (Ed.), <source>Sign Languages</source> (pp. <fpage>381</fpage>&#8211;<lpage>402</lpage>). <publisher-name>Cambridge University Press</publisher-name>. <pub-id pub-id-type="doi">10.1017/CBO9780511712203.018</pub-id></mixed-citation></ref>
<ref id="B33"><mixed-citation publication-type="book"><string-name><surname>Pichler</surname>, <given-names>D. C.</given-names></string-name> (<year>2012</year>). <chapter-title>Acquisition</chapter-title>. In <string-name><given-names>R.</given-names> <surname>Pfau</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Steinbach</surname></string-name> &amp; <string-name><given-names>B.</given-names> <surname>Woll</surname></string-name> (Eds.), <source>Sign Language: An International Handbook</source> (pp. <fpage>647</fpage>&#8211;<lpage>686</lpage>). <publisher-name>De Gruyter Mouton</publisher-name>. <pub-id pub-id-type="doi">10.1515/9783110261325.647</pub-id></mixed-citation></ref>
<ref id="B34"><mixed-citation publication-type="journal"><string-name><surname>Pons</surname>, <given-names>F.</given-names></string-name>, <string-name><surname>Bosch</surname>, <given-names>L.</given-names></string-name>, &amp; <string-name><surname>Lewkowicz</surname>, <given-names>D. J.</given-names></string-name> (<year>2019</year>). <article-title>Twelve-month-old infants&#8217; attention to the eyes of a talking face is associated with communication and social skills</article-title>. <source>Infant Behavior and Development</source>, <volume>54</volume>, <fpage>80</fpage>&#8211;<lpage>84</lpage>. <pub-id pub-id-type="doi">10.1016/j.infbeh.2018.12.003</pub-id></mixed-citation></ref>
<ref id="B35"><mixed-citation publication-type="journal"><string-name><surname>Quer</surname>, <given-names>J.</given-names></string-name>, &amp; <string-name><surname>Steinbach</surname>, <given-names>M.</given-names></string-name> (<year>2019</year>). <article-title>Handling sign language: The impact of modality</article-title>. <source>Frontiers in Psychology</source>, <volume>10</volume>, <elocation-id>483</elocation-id>. <pub-id pub-id-type="doi">10.3389/fpsyg.2019.00483</pub-id></mixed-citation></ref>
<ref id="B36"><mixed-citation publication-type="journal"><string-name><surname>Sandler</surname>, <given-names>W.</given-names></string-name> (<year>2010</year>). <article-title>Prosody and syntax in sign languages</article-title>. <source>Transactions of the Philological Society</source>, <volume>108</volume>(<issue>3</issue>), <fpage>298</fpage>&#8211;<lpage>328</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-968X.2010.01242.x</pub-id></mixed-citation></ref>
<ref id="B37"><mixed-citation publication-type="book"><string-name><surname>Sandler</surname>, <given-names>W.</given-names></string-name>, <string-name><surname>Lillo-Martin</surname>, <given-names>D.</given-names></string-name>, <string-name><surname>Dachkovsky</surname>, <given-names>S.</given-names></string-name>, &amp; <string-name><surname>M&#252;ller de Quadros</surname>, <given-names>R.</given-names></string-name> (<year>2020</year>). <chapter-title>Sign language prosody</chapter-title>. In <string-name><given-names>C.</given-names> <surname>Gussenhoven</surname></string-name> &amp; <string-name><given-names>A.</given-names> <surname>Chen</surname></string-name> (Eds.), <source>The Oxford Handbook of Language Prosody</source> (pp. <fpage>104</fpage>&#8211;<lpage>122</lpage>). <publisher-name>Oxford University Press</publisher-name>. <pub-id pub-id-type="doi">10.1093/oxfordhb/9780198832232.013.44</pub-id></mixed-citation></ref>
<ref id="B38"><mixed-citation publication-type="journal"><string-name><surname>Stone</surname>, <given-names>A.</given-names></string-name>, &amp; <string-name><surname>Bosworth</surname>, <given-names>R. G.</given-names></string-name> (<year>2019</year>). <article-title>Exploring infant sensitivity to visual language using eye tracking and the preferential looking paradigm</article-title>. <source>Journal of Visualized Experiments</source>, <volume>147</volume>, <elocation-id>e59581</elocation-id>. <pub-id pub-id-type="doi">10.3791/59581</pub-id></mixed-citation></ref>
<ref id="B39"><mixed-citation publication-type="journal"><string-name><surname>Stone</surname>, <given-names>A.</given-names></string-name>, <string-name><surname>Petitto</surname>, <given-names>L. A.</given-names></string-name>, &amp; <string-name><surname>Bosworth</surname>, <given-names>R.</given-names></string-name> (<year>2017</year>). <article-title>Visual sonority modulates infants&#8217; attraction to sign language</article-title>. <source>Language Learning and Development</source>, <volume>14</volume>(<issue>2</issue>), <fpage>130</fpage>&#8211;<lpage>148</lpage>. <pub-id pub-id-type="doi">10.1080/15475441.2017.1404468</pub-id></mixed-citation></ref>
<ref id="B40"><mixed-citation publication-type="book"><string-name><surname>Van der Hulst</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>van der Kooij</surname>, <given-names>E.</given-names></string-name> (<year>2021</year>). <chapter-title>Sign language phonology: Theoretical perspectives</chapter-title>. In <source>The Routledge Handbook of Theoretical and Experimental Sign Language Research</source> (pp. <fpage>1</fpage>&#8211;<lpage>32</lpage>). <publisher-name>Routledge</publisher-name>. <pub-id pub-id-type="doi">10.4324/9781315754499-1</pub-id></mixed-citation></ref>
<ref id="B41"><mixed-citation publication-type="journal"><string-name><surname>Vihman</surname>, <given-names>M. M.</given-names></string-name>, <string-name><surname>Macken</surname>, <given-names>M. A.</given-names></string-name>, <string-name><surname>Miller</surname>, <given-names>R.</given-names></string-name>, <string-name><surname>Simmons</surname>, <given-names>H.</given-names></string-name>, &amp; <string-name><surname>Miller</surname>, <given-names>J.</given-names></string-name> (<year>1985</year>). <article-title>From babbling to speech: A re-assessment of the continuity issue</article-title>. <source>Language</source>, <volume>61</volume>(<issue>2</issue>), <fpage>397</fpage>&#8211;<lpage>445</lpage>. <pub-id pub-id-type="doi">10.2307/414151</pub-id></mixed-citation></ref>
<ref id="B42"><mixed-citation publication-type="journal"><string-name><surname>Werker</surname>, <given-names>J.</given-names></string-name> (<year>2024</year>). <article-title>Phonetic perceptual reorganization across the first year of life: Looking back</article-title>. <source>Infant Behavior and Development</source>, <volume>75</volume>, <elocation-id>101935</elocation-id>. <pub-id pub-id-type="doi">10.1016/j.infbeh.2024.101935</pub-id></mixed-citation></ref>
<ref id="B43"><mixed-citation publication-type="journal"><string-name><surname>Wilbourn</surname>, <given-names>M. P.</given-names></string-name>, &amp; <string-name><surname>Casasola</surname>, <given-names>M.</given-names></string-name> (<year>2007</year>). <article-title>Discriminating signs: Perceptual precursors to acquiring a visual-gestural language</article-title>. <source>Infant Behavior and Development</source>, <volume>30</volume>(<issue>1</issue>), <fpage>153</fpage>&#8211;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.1016/j.infbeh.2006.08.006</pub-id></mixed-citation></ref>
</ref-list>
</back>
</article>