Papers by Nathaniel Anderson
Implicit learning of distributional patterns in linguistic and non-linguistic sequence production

Topics in Cognitive Science, 2019
This editors' introduction provides the background to the special issue. We first outline the rationale for bringing together, in a single volume, leading researchers from two distinct, yet related research strands, implicit learning and statistical learning. The aim of the special issue is to facilitate the development of a shared understanding of research questions and methodologies, to provide a platform for discussing similarities and differences between the two strands, and to encourage the formulation of joint research agendas. We then introduce the new contributions solicited for this special issue and provide our perspective on the agenda setting that results from combining these two approaches.

Proceedings of the National Academy of Sciences, 2018
Speakers implicitly learn novel phonotactic patterns by producing strings of syllables. The learning is revealed in their speech errors. First-order patterns, such as “/f/ must be a syllable onset,” can be distinguished from contingent, or second-order, patterns, such as “/f/ must be an onset if the vowel is /a/, but a coda if the vowel is /o/.” A meta-analysis of 19 experiments clearly demonstrated that first-order patterns affect speech errors to a very great extent in a single experimental session, but second-order vowel-contingent patterns only affect errors on the second day of testing, suggesting the need for a consolidation period. Two experiments tested an analogue to these studies involving sequences of button pushes, with fingers as “consonants” and thumbs as “vowels.” The button-push errors revealed two of the key speech-error findings: first-order patterns are learned quickly, but second-order thumb-contingent patterns are only strongly revealed in the errors on the second day of testing. The influence of computational complexity on the implicit learning of phonotactic patterns in speech production may be a general feature of sequence production.
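As an illustrative aside (not part of the PNAS study), the distinction between the two pattern types can be made concrete in a few lines of Python: a first-order constraint restricts where /f/ may appear regardless of context, while a second-order constraint makes its legal position contingent on the vowel. The test syllables and single-letter coding below are hypothetical.

```python
# Illustrative sketch (not the authors' materials): checking a syllable
# against a first-order vs. a second-order (vowel-contingent) constraint.

def violates_first_order(syllable: str) -> bool:
    """First-order rule: /f/ may only occur as a syllable onset."""
    return "f" in syllable and not syllable.startswith("f")

def violates_second_order(syllable: str) -> bool:
    """Second-order rule: /f/ must be an onset if the vowel is /a/,
    but a coda if the vowel is /o/."""
    if "f" not in syllable:
        return False
    if "a" in syllable:
        return not syllable.startswith("f")
    if "o" in syllable:
        return not syllable.endswith("f")
    return False

for s in ["fan", "naf", "fon", "nof"]:   # hypothetical test syllables
    print(s, violates_first_order(s), violates_second_order(s))
```
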
Reconsidering the role of temporal order in spoken word recognition
Psychonomic Bulletin & Review, 2013
Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.
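A toy illustration (not from the paper) of why anadromes are diagnostic: under a strict-sequence representation, sub and bus mismatch everywhere except the shared vowel position, whereas under an unordered-set representation they are identical; a vowel-sharing word like sun falls in between. The letter-for-phoneme coding below is a simplifying assumption.

```python
# Toy sketch: position-by-position (sequence) vs. unordered-set overlap.
# Letters stand in for phonemes, which is a simplification.

def sequence_overlap(a: str, b: str) -> float:
    """Proportion of positions with matching phonemes."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def set_overlap(a: str, b: str) -> float:
    """Jaccard overlap of the unordered phoneme sets."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

target = "bus"
for competitor in ["sub", "sun", "well"]:
    print(competitor,
          round(sequence_overlap(target, competitor), 2),
          round(set_overlap(target, competitor), 2))
```
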

Brain and Language
Recent work has sought to describe the time-course of spoken word recognition, from initial acoustic cue encoding through lexical activation, and identify cortical areas involved in each stage of analysis. However, existing methods are limited in either temporal or spatial resolution, and as a result, have only provided partial answers to the question of how listeners encode acoustic information in speech. We present data from an experiment using a novel neuroimaging method, fast optical imaging, to directly assess the time-course of speech perception, providing noninvasive measurement of speech sound representations, localized to specific cortical areas. We find that listeners encode speech in terms of continuous acoustic cues at early stages of processing (ca. 96 ms post-stimulus onset), and begin activating phonological category representations rapidly (ca. 144 ms post-stimulus). Moreover, cue-based representations are widespread in the brain and overlap in time with graded category-based representations, suggesting that spoken word recognition involves simultaneous activation of both continuous acoustic cues and phonological categories.
Models of Language Production in Aphasia
In B. MacWhinney & W. O'Grady (Eds.), The Handbook of Language Emergence, 2015

Gradient coding of voice onset time in posterior temporal cortex
J. Acoust. Soc. Am., 2014
The issue of whether early stages of speech processing are influenced by category has been central to work in speech perception for decades. We present the results of an experiment using fast diffusive optical neuroimaging (Gratton and Fabiani, 2001, Int. J. Psychophysiol.) to address this question directly by measuring neural responses to speech with high temporo-spatial resolution. We found that changes in voice onset time (VOT) along a /b/-/p/ continuum evoked linear changes in neural responses in posterior superior temporal gyrus (pSTG) 100 ms after stimulus onset. This is the first non-invasive observation of such responses in humans. It is consistent with results from recent event-related potential (Toscano et al., 2010, Psychol. Sci.) and fMRI (Blumstein et al., 2005, J. Cognit. Neurosci.) studies, and provides evidence that those results reflect listeners' early encoding of speech sounds in pSTG, independently of phonological categories. Thus, the results provide evidence that speech perception is based on continuous cues rather than discrete categories. We discuss these results in light of recent intra-cranial EEG studies reporting either categorical effects in pSTG (Chang et al., 2010, Nature Neurosci.) or evidence that pSTG maintains fine-grained detail in the signal (Pasley et al., 2012, PLoS Biol.).
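The gradient-versus-categorical contrast can be framed as a simple model comparison: does response amplitude track VOT linearly, or does it step at the category boundary? The simulated amplitudes, the VOT steps, and the 30 ms /b/-/p/ boundary below are assumptions for illustration only, not values from the study.

```python
# Illustrative sketch (simulated data, not the study's measurements):
# compare a linear fit against a categorical (step) fit to responses
# measured along a /b/-/p/ VOT continuum.
import numpy as np

rng = np.random.default_rng(0)
vot = np.array([0, 10, 20, 30, 40, 50], dtype=float)    # ms, hypothetical steps
response = 0.02 * vot + rng.normal(0, 0.05, vot.size)   # gradient-coded toy data

# Linear model: response ~ a * VOT + b
lin_coef = np.polyfit(vot, response, 1)
lin_pred = np.polyval(lin_coef, vot)

# Categorical model: one mean per side of an assumed 30 ms /b/-/p/ boundary
is_p = vot >= 30
cat_pred = np.where(is_p, response[is_p].mean(), response[~is_p].mean())

def sse(pred):
    """Sum of squared errors of a model's predictions."""
    return float(np.sum((response - pred) ** 2))

print("linear SSE:", round(sse(lin_pred), 4))
print("categorical SSE:", round(sse(cat_pred), 4))
```
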

Conference Presentations by Nathaniel Anderson

In explicit discrimination and categorization tasks, learning a rule takes longer than learning a subsequent reversal of that rule. This savings is often attributed to nonassociative components of the task, such as learning what dimensions to pay attention to. An experiment was carried out to investigate whether this pattern also appears in the implicit learning of phonotactic constraints in speech production. Subjects produced lists of nonsense syllables which followed novel phonotactic rules (for example, /f/ might occur only at word onset). Subjects implicitly learned these rules very quickly; within only a few trials, most accidental productions of the constrained phonemes followed the rules (i.e., if the subject produced /f/ at the wrong time, it showed up at the beginning of a syllable >80% of the time). When the rules were reversed (such that, e.g., /f/ now showed up only at the end of syllables), learning was significantly slower. The results support a purely associative account of this learning, such that the initial bias (/f/ begins syllables) must be unlearned before the opposite bias (/f/ ends syllables) can be learned. These results are simulated with a connectionist model of syllable production.
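A minimal illustration of the associative account described above (assuming a single association strength, a fixed learning rate, and simple delta-rule updates; this is not the connectionist model actually reported): the association starts at zero when the initial rule is learned, but must be driven back through zero when the rule reverses, so reversal takes more trials.

```python
# Minimal associative sketch (not the reported connectionist model):
# a single association strength w encodes "/f/ goes to syllable onset"
# (target = +1) vs. "/f/ goes to coda" (target = -1), updated by a delta rule.

def trials_to_criterion(w: float, target: float, rate: float = 0.2,
                        tolerance: float = 0.2) -> int:
    """Count delta-rule updates until w is within `tolerance` of the target."""
    trials = 0
    while abs(target - w) > tolerance:
        w += rate * (target - w)   # delta-rule update toward the target
        trials += 1
    return trials

initial = trials_to_criterion(w=0.0, target=+1.0)    # learn "/f/ = onset"
reversal = trials_to_criterion(w=+1.0, target=-1.0)  # unlearn, then learn "/f/ = coda"
print("trials to learn initial rule:", initial)
print("trials to learn reversed rule:", reversal)
```

Because the reversal starts from the fully learned opposite bias, the same update rule needs more trials to reach criterion, which is the slower reversal pattern the abstract attributes to a purely associative mechanism.
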
Anderson, Toscano, Garnsey, Fabiani, and Gratton - Graded representations of voice onset time: evidence from fast optical imaging