Some thoughts on Healey et al's (2014) failure to find syntactic priming in conversational speech


In a recent PLoS ONE article, Healey, Purver, and Howes (2014) investigate syntactic priming in conversational speech, both within speakers and across speakers. Healey and colleagues follow Reitter et al (2006) in taking a broad-coverage approach to the corpus-based study of priming. Rather than focusing on one or a few specific structures, Healey and colleagues assess lexical and structural similarity within and across speakers. The paper concludes with the interesting claim that there is no evidence for syntactic priming within speakers and that alignment across speakers is actually less than expected by chance once lexical overlap is controlled for. Given more than 30 years of research on syntactic priming, this is a rather interesting claim. As some folks have Twitter-bugged me (much appreciated!), I wanted to summarize some quick thoughts here. Apologies in advance for the somewhat HLP-lab centric view. If you know of additional studies that seem relevant, please join the discussion and post. Of course, Healey and colleagues are more than welcome to respond and correct me, too.

First, the claim by Healey and colleagues that “previous work has not tested for general syntactic repetition effects in ordinary conversation independently of lexical repetition” (Healey et al 2014, abstract) isn’t quite accurate.

The authors discuss a few studies that speak to this issue without quite doing what the authors would like to do (e.g., Reitter et al., 2006, Szmrecsanyi, 2004, 2005). Crucially, there are also some studies that have done what the authors claim hasn’t been done. For example (and I suspect there might be more):

  1. Jaeger and Snider (2013-Cognition) and Recchia (2007) investigate syntactic priming in ditransitives in conversational speech, while controlling for verb overlap. Unlike in Healey et al, significant priming is observed, independent of lexical overlap. As a matter of fact, lexical overlap did not affect prime strength much (Study 1).
  2. Snider (2009-Thesis) investigates syntactic priming in ditransitives in conversational speech, while controlling for lexical overlap, including both verbs and other parts of the sentence. Some of these analyses are presented in Jaeger and Snider (2007) and Jaeger and Snider (2008), along with other relevant investigations of that-omission priming in conversational speech within and across speakers (based on my 2006 CUNY presentation).

None of these studies are cited by the authors, so I suspect they weren't aware of them (which has happened to me – so no blame). How do these studies compare to Healey et al (2014)? Unlike Healey et al, the studies I listed above focus on a specific structure. In my view, some advantages of this are:

  • Meaning differences between the alternative structures are held more constant
  • A better understanding of what one is studying
  • Comparability with psycholinguistic experiments, all of which have focused on specific structures (e.g., in order to avoid confounds due to differences in meaning)

But there are also disadvantages to this focus on a specific structure. Put differently, there are advantages of the approach taken by Healey et al (2014; pioneered as far as I know by Reitter et al., 2006, though please correct me if I am wrong):

  • Broad coverage ensures that one isn’t studying properties that only apply to some subset of the grammatical structures of a language

Of these four differences between the broad-coverage and the structure-specific approach, I’d consider the most crucial issue to be that the broad-coverage approach (as it has been applied so far, including by Healey and colleagues) does not control for (near) meaning-equivalence of the structural choices. Syntactic priming is about the choice of one structure over another under meaning equivalence or near-meaning equivalence. If a structure is chosen or not chosen because another meaning is conveyed, that wouldn’t standardly be considered syntactic priming (for good reasons). It is not clear to me how (near) meaning-equivalence is guaranteed in the broad-coverage approach employed in, for example, Healey et al (2014) and Reitter et al (2006). For example, if an NP node is expanded into NP –> DT N, it’s not clear that it could have been expanded into NP –> DT ADJ N or even NP –> N CP while still referring to the same entity without leading to semantic, pragmatic, and discourse structural violations. [this paragraph has been edited in response to a request for clarification by Patrick Healey]
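To make this worry concrete, here is a minimal sketch of the kind of broad-coverage similarity measure at issue (a toy illustration in Python, with an assumed tuple-based tree format; this is not the code of any of the papers discussed). Structural similarity is computed over CFG expansion rules, so two turns can count as perfectly aligned despite zero lexical overlap, and nothing in the measure itself checks whether either turn could have expressed the other's meaning:

```python
from collections import Counter

def expansion_rules(tree):
    """Recursively collect CFG expansion rules (e.g. 'NP -> DT N') from a
    parse tree given as (label, [children]) tuples; leaves are strings."""
    label, children = tree
    rules = []
    if children and isinstance(children[0], tuple):
        rhs = " ".join(child[0] for child in children)
        rules.append(f"{label} -> {rhs}")
        for child in children:
            rules.extend(expansion_rules(child))
    return rules

def rule_overlap(turn_a, turn_b):
    """Proportion of rule tokens in turn_b that also occur in turn_a --
    one simple broad-coverage structural similarity measure."""
    a, b = Counter(expansion_rules(turn_a)), Counter(expansion_rules(turn_b))
    shared = sum(min(a[r], b[r]) for r in b)
    return shared / max(sum(b.values()), 1)

# Two hypothetical adjacent turns: "the dog ran" / "a cat slept"
t1 = ("S", [("NP", [("DT", ["the"]), ("N", ["dog"])]), ("VP", [("V", ["ran"])])])
t2 = ("S", [("NP", [("DT", ["a"]), ("N", ["cat"])]), ("VP", [("V", ["slept"])])])
print(rule_overlap(t1, t2))  # full rule overlap despite zero lexical overlap
```

Note what the sketch leaves out: the two turns here happen to be near-parallel in meaning, but the measure would score them identically if they weren't, which is exactly the meaning-equivalence problem raised above.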

This is critical because we already know that what people say changes throughout discourse. This is even reflected in the entropy profiles throughout discourse (Genzel and Charniak, 2002, 2003; Qian and Jaeger, 2012), including entropy profiles derived by parsers (Keller, 2004). This means that syntactic distributions are expected to change throughout conversations, and, critically, one very plausible cause for this is that what changes throughout discourse is what messages (in the psycholinguistic sense) we encode. Since syntactic priming is about how (the same) message gets encoded, this constitutes a rather glaring potential confound for the study presented by Healey and colleagues.

The syntactic priming studies cited above (Jaeger and Snider, 2007, 2008, 2013; Snider, 2009) also differ from Healey et al in that they used exclusively hand-annotated syntactic corpora. Having worked with automatically parsed corpora (like the BNC, employed in Healey et al's study), I'm quite aware of the many pitfalls of relying on automatic parses (for discussion, see Jaeger, 2011-Chapter). I wish Healey et al had discussed this issue in more depth, along with the steps they took to address problems such as false positives and negatives, which can be very substantial (for an example, see Jaeger, 2011). As far as I can tell, over 93% of the data analyzed by Healey et al comes from automatic parses of the BNC.


It seems critical to resolve these conflicting results. While there seems to be broad agreement between the different approaches that syntactic priming effects in conversational speech are weaker across than within speakers, the more controlled studies have found a) significant priming within speakers and b) no evidence for anti-priming across speakers, contrary to Healey et al (2014). As much as I like the broad-coverage approach taken by Healey and colleagues, there are considerable potential confounds (see above). Without further evidence, I would thus not be convinced that the results of Healey et al hold.

Let me conclude with a link to another line of work that is of interest in this context. Several studies have investigated how attitude and perceived group membership can affect alignment across speakers. Most of this research has focused on phonetic alignment (e.g., Babel, 2010, 2011, and references therein), but there is some work on socially mediated effects on syntactic alignment. In a recent web-based study, a joint collaboration between OSU and my lab (Weatherholtz et al, in press) found that syntactic alignment was socially mediated: the strength of syntactic priming depended on the speakers' willingness to compromise as well as the perceived social distance to a previously encountered speaker. This goes rather well with some of the discussion in Healey et al (2014). Crucially, however, Weatherholtz and colleagues directly tested whether social mediation ever led to anti-priming, rather than just reduced priming. These two potential outcomes would argue for rather different cognitive architectures (e.g., regarding the automaticity of the mechanisms underlying syntactic priming; for further discussion, see Weatherholtz et al., in press). At least in our study (which, however, did not involve direct communication with an interlocutor and thus won't be the ultimate word on this issue), no anti-priming was observed (though social distance did cause speakers to not prime). I think further weaving these different lines of research together will be an interesting step for future work.

PS: One additional potential difference is that the syntactic priming studies by Neal Snider and me focus on corpora that exclusively contain conversational speech. I couldn't tell from the description in Healey et al (2014) whether all types of speech in the 10 million word spoken BNC were used or whether only unscripted conversations were extracted (the spoken BNC contains many monologues and scripted speech, such as parliamentary addresses). I suspect and hope it is the latter, given the authors' repeated focus on unscripted conversational speech throughout the paper.


Babel, M. (2010). Dialect divergence and convergence in New Zealand English. Language in Society, 39, 437–456. doi:10.1017/S0047404510000400

Babel, M. (2011). Evidence for phonetic and social selectivity in spontaneous phonetic imitation. Journal of Phonetics, 40(1), 177–189.

Genzel, D., & Charniak, E. (2002). Entropy Rate Constancy in Text. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL) (pp. 199–206).

Genzel, D., & Charniak, E. (2003). Variation of entropy and parse trees of sentences as a function of the sentence number. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (pp. 65–72). doi:10.3115/1119355.1119364

Jaeger, T. F. (2011). Corpus-based Research on Language Production: Information Density and Reducible Subject Relatives. In E. M. Bender & J. E. Arnold (Eds.), Language from a Cognitive Perspective: Grammar, Usage, and Processing. Studies in honor of Tom Wasow (pp. 161–197). Stanford: CSLI Publications.

Jaeger, T. F., & Snider, N. (2007). Implicit Learning and Syntactic Persistence: Surprisal and Cumulativity. University of Rochester Working Papers in the Language Sciences, 3(1), 26–44.

Jaeger, T. F., & Snider, N. (2008). Implicit learning and syntactic persistence: Surprisal and cumulativity. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Meeting of the Cognitive Science Society (CogSci08) (pp. 1061–1066). Austin, TX: Cognitive Science Society.

Jaeger, T. F., & Snider, N. E. (2013). Alignment as a consequence of expectation adaptation: Syntactic priming is affected by the prime’s prediction error given both prior and recent experience. Cognition, 127(1), 57–83. doi:10.1016/j.cognition.2012.10.013

Keller, F. (2004). The Entropy Rate Principle as a Predictor of Processing Effort: An Evaluation against Eye-tracking Data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (pp. 1–8).

Qian, T., & Jaeger, T. F. (2012). Cue effectiveness in communicatively efficient discourse production. Cognitive Science, 36(7), 1312–1336. doi:10.1111/j.1551-6709.2012.01256.x

Reitter, D., Moore, J. D., & Keller, F. (2006). Priming of Syntactic Rules in Task-Oriented Dialogue and Spontaneous Conversation. In Proceedings of the 28th Annual Conference of the Cognitive Science Society (pp. 1–6).

Szmrecsanyi, B. (2004). Persistence phenomena in the grammar of spoken English. Doctoral dissertation, Albert-Ludwigs-Universität Freiburg.

Szmrecsanyi, B. (2005). Language users as creatures of habit: A corpus-based analysis of persistence in spoken English. Corpus Linguistics and Linguistic Theory, 1(1), 113–150. doi:10.1515/cllt.2005.1.1.113

Weatherholtz, K., Campbell-Kibler, K., & Jaeger, T. F. (in press). Socially-mediated syntactic alignment. Language Variation and Change.



8 thoughts on "Some thoughts on Healey et al's (2014) failure to find syntactic priming in conversational speech"

    tiflo responded:
    August 24, 2014 at 12:49 pm

    Ah, the Twitter-verse has already produced several additional relevant references.

    Reitter and Moore (2014) come to the opposite conclusion as Healey et al (2014), though I am not sure whether they control for lexical overlap (which is Healey and colleagues' point), as I can't access the paper from my current location. Anyone?

    Support for Healey et al (2014) comes from Fernandez and Grimm (2014), who also find less syntactic priming across turns than expected by chance in adults, though their measure of syntactic alignment is based on POS-ngrams. Additionally, their approach would be subject to the same problems that I raised above.

    Thanks to Tal Linzen and Riccardo Fusaroli for these references.


    Riccardo Fusaroli said:
    August 24, 2014 at 1:02 pm

    Reitter does control for lexical overlap, though not for surrogate pairs, and the methods have some differences, as you point out. If only I had more time I would love to write a script comparing them.


      tiflo responded:
      August 24, 2014 at 1:57 pm

      Thanks, Riccardo, this is helpful. I emailed David so that he can comment, too, if he wants to. (and I emailed Patrick Healey).


    Pat Healey said:
    August 27, 2014 at 5:16 am

    It is good to have the opportunity to respond to the concerns raised by Jaeger. Jaeger's comment is titled 'failure to find syntactic priming', but it's worth emphasising that we aren't reporting a null result; we are reporting the opposite finding: in ordinary conversation people systematically diverge from one another in their syntactic choices in the next turn.

    We believe our claim that "previous work has not tested for general syntactic repetition effects in ordinary conversation independently of lexical repetition" is accurate. Our aim is to test the claim that priming "underpins all successful human interaction", i.e. not just specific constructions but all constructions and indeed all communicative signals, and the claim that this effect will be strongest "in face-to-face spontaneous dyadic conversation between equals with short contributions" (Pickering and Garrod, 2004). Because of this we are interested in a) general repetition effects across all constructions, b) in ordinary conversation, i.e. from spontaneous not task-oriented dialogue, and c) controlled for lexical repetition.

    This focus means, we think, that Jaeger and Snider (2013) isn't directly relevant since it considers only the dative alternation (as Jaeger notes). Also, we understand the thrust of Jaeger (2010), which we do cite, and Jaeger and Snider (2008) to be that syntactic priming effects depend to a significant degree on how unusual the prime is. So, our assumption (incorrect?) is that Jaeger and colleagues are not committed to the general claim that all syntactic structures should prime both within and across speakers in dialogue. As we note in the paper, our findings do not rule out the possibility that some specific or unusual structures do have a priming effect; however, they are inconsistent with a general priming mechanism.

    The point Jaeger makes about meaning equivalence is key. Meaning-equivalent syntactic alternatives of the kind created in experimental priming tasks are -as we suggest in the paper- very unlikely to occur in ordinary spontaneous dialogue; people do indeed normally change what they say throughout dialogue. If, as Jaeger seems to suggest, syntactic priming can only operate in situations when the same message gets (re)encoded then it isn’t going to provide a useful mechanism for explaining language processing in dialogue. This isn’t a confound, it’s our conclusion: the demands of engaging with an active conversational partner to move a conversation forward appear to overwhelm the (well-attested) lab-based priming effects. If we accept that we first encounter, learn and deploy language in dialogue this would appear to be a significant limitation.

    Some comments on methods issues:

    Our analysis uses corpora with syntactic parse trees produced by both manual annotation (DCPSE) and machine parsing (BNC), analysing them separately. We get the same results for both, so it is unlikely to be an effect of the different kinds of parse. However, this concern doesn't really impact on our argument because we are comparing real and control ('fake') dialogues with exactly the same parses, so whatever mess/noise there is in the real dialogues is also present in the controls.

    We are indeed using only the informal conversations from the DCPSE and the informal conversations from the ‘demographic’ portion of the BNC. This should have been stated more clearly in the paper and is critical to our argument!

    The structure-specific approach favoured by Jaeger is vulnerable to problems with generalisation. The difficulty of finding syntactic alternatives that are meaning equivalent even in lab-based studies is considerable and this has led to a substantial bias in the literature toward the PO-DO alternation and a limited number of target verbs. This is a problem because there is already considerable evidence that different verbs and different constructions have different effects. Including Jaeger and Snider of course but also e.g., Gries (2005) Dubey, Sturt and Keller (2005) and others.

    Reitter's broad-coverage work is indeed closest to ours and, as we note in our paper, the analysis most similar to the ones we report in fact shows the same effect for cross-person priming: "speakers try to avoid repeating their interlocutor's sentence structure" (Reitter, Moore and Keller, 2006). Nonetheless, the Switchboard corpus Reitter et al. use for their sample of non-task-oriented conversation consists of telephone conversations on pre-defined topics. This is different from the spontaneous, face-to-face, short-turn dialogue of the kind found in the BNC and DCPSE (e.g. Switchboard has much longer turns). That said, we do not think there is any substantial inconsistency between our results and Reitter et al.'s. The clear difference with their findings is for experimental task-oriented dialogues, which reinforces our point.

    In answer to Fusaroli: as far as we are aware, Reitter doesn't systematically discount lexical similarity as a covariate in his analyses; rather, he excludes verbatim repetitions from the analysis (are we wrong about this?). This mitigates the problem we are concerned with but doesn't remove it.


      tiflo responded:
      August 27, 2014 at 3:29 pm

      Hi Patrick,

      thank you for your thorough reply. I very much appreciate that you're taking the time to respond. Your response clarifies some of my concerns and helps me to understand what you intended to emphasize in your paper. I would, however, hold that both in your paper and your response there's a bit of a conflation of research on syntactic priming more generally and the interactive alignment model, which is one account of syntactic priming. Actually, it seems to me that the claim that Healey et al (2014) address is a very strong (but perhaps defensible) interpretation of Pickering and Garrod (2004), namely the idea that alignment (incl. syntactic priming) is the fundamental mechanism by which efficient communication is achieved. From this interpretation one might derive that it should therefore be observed no matter what.

      That is, however, neither a property of the specific account of syntactic priming that Pickering proposes (the activation-decay type account), nor is it at all a claim of many other syntactic priming accounts (specifically, learning accounts, see below) –contrary to what Healey et al –perhaps unintentionally—seem to suggest. This has consequences, because the idea that priming should show up even when meaning isn’t controlled for (see below for an explanation and examples of what I mean by that) definitely does not apply to these alternative models. This also means that the test provided in Healey et al, which does not control for effects of meaning, is severely confounded with regard to the question of whether syntactic priming is observed in conversational speech. That in turn means that the anti-alignment Healey et al observe across speakers is not unlikely to be due to this confound.

      Let me elaborate. So that we can continue to work towards the critical issues, I’m going to try to number your points/my concerns.

      1) In response to the title of my blog post ("… failure to find syntactic priming"), you say that Healey et al don't find a null effect, but rather find anti-priming. Well, both are true. Healey et al do find a null effect for within-speaker priming and anti-priming for across-speaker priming (as I wrote in the blog post).

      2) Establishing the novel finding of Healey et al (2014)? Part of my comment was that others before you have found that across-speaker priming is significantly weaker than within-speaker priming in conversational speech (e.g., Reitter et al, 2006; Reitter and Moore, 2014; Jaeger and Snider 2007, 2008). This includes findings of no syntactic priming *across* speakers. Some previous work also had directly investigated whether syntactic priming was stronger in task-oriented dialogue and weaker in less task-oriented dialogue (we mentioned Reitter et al., 2006; see also Carbary, 2011; Carbary et al., 2010, who found the same).

      So the novel finding in Healey et al (2014) —and, for that matter, previous work by Fernandez and Grimm (2014) cited in my previous comment— is that syntactic priming (measured the way you do) is further shifted down. You find null effects where others have found effects (within speakers) and anti-priming where others have found weak or no priming (across speakers). I think this is critical, as it relates to the methodological problem that I raised in the blog post and will return to below.
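      For readers who want the measurement logic spelled out, the real-versus-chance comparison at issue can be sketched roughly like this (my own toy illustration of a permutation-style baseline, not the actual procedure or code of Healey et al): adjacent-turn similarity in real dialogues is compared against the same statistic computed over 'fake' dialogues assembled from shuffled turns. 'Priming' then means scoring above that baseline, and 'anti-priming' below it.

```python
import random

def adjacent_similarity(dialogue, sim):
    """Mean similarity between each turn and the next within one dialogue."""
    scores = [sim(a, b) for a, b in zip(dialogue, dialogue[1:])]
    return sum(scores) / len(scores)

def chance_baseline(dialogues, sim, n_fake=1000, seed=0):
    """Build 'fake' dialogues by shuffling turns across the corpus (keeping
    each dialogue's length) and return the mean adjacent similarity over them."""
    rng = random.Random(seed)
    all_turns = [turn for d in dialogues for turn in d]
    totals = []
    for _ in range(n_fake):
        rng.shuffle(all_turns)
        i, fakes = 0, []
        for d in dialogues:
            fakes.append(all_turns[i:i + len(d)])
            i += len(d)
        totals.append(sum(adjacent_similarity(f, sim) for f in fakes) / len(fakes))
    return sum(totals) / len(totals)
```

Here `sim` stands in for whatever turn-to-turn similarity measure one adopts (structural, lexical, or a residualized combination); the confound discussed above enters through that choice, not through the baseline logic itself.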

      3) Do we always prime, and who says we do? You point out that one of the conclusions of your paper (or at least what you meant to say) is that syntactic priming is not omni-present and can easily be swamped by other factors. I wholeheartedly agree. In fact, others have made this point before. For example, Carbary and colleagues (2010; and her thesis in 2011) provide evidence from speakers' choices in referential expression production (think "the cat", "the striped cat", "the cat with stripes") that referential considerations *swamp* syntactic alignment effects. You might see this as a problem for the interactive alignment model by Pickering and Garrod (2004). This is indeed also what Carbary and colleagues argue.

      Whatever you conclude about the interactive alignment model, there are other theoretical models of syntactic priming that do not predict syntactic priming to be always present and strong. These alternatives, which you do not seem to consider much, are implicit learning models (Bock and Griffin, 2000; Chang et al., 2006; Dell and Chang 2013; Jaeger and Snider, 2013; and —he might object, though I hope not— Reitter et al., 2011). So, in reply to one of your questions: yes, I don't assume that we always align (for multiple reasons, likely including attention and social effects). For what it's worth, learning models of syntactic priming also do not predict that all structures prime equally strongly (neither do more advanced implementations of the interactive activation model, cf. Malhotra, 2009; Reitter et al., 2011). Specifically, the framework laid out in Jaeger and Snider (2013) predicts that priming is stronger the more surprising the prime — in this framework, syntactic priming facilitates the reduction of expectation violation over time.
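      To illustrate what that kind of framework predicts, here is a toy error-driven update (my own simplification for exposition, not the actual model of Jaeger and Snider, 2013, nor of Chang et al., 2006): the boost a structure receives after serving as a prime scales with its surprisal under the speaker's current expectations, so rare primes shift expectations more than frequent ones.

```python
import math

def prime_update(probs, observed, rate=0.1):
    """Boost the observed structure in proportion to its surprisal (-log p),
    then renormalize; surprising primes shift the distribution more."""
    boost = rate * -math.log(probs[observed])
    new = dict(probs)
    new[observed] += boost
    total = sum(new.values())
    return {k: v / total for k, v in new.items()}

# Hypothetical baseline preferences for two ditransitive variants
probs = {"DO": 0.8, "PO": 0.2}
after_common = prime_update(probs, "DO")  # expected, low-surprisal prime
after_rare = prime_update(probs, "PO")    # unexpected, high-surprisal prime
# The rare PO prime moves its own probability more than the common DO prime does
```

On this view, a null or weak aggregate priming effect over mostly high-frequency structures is entirely compatible with robust priming of unusual ones.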

      All of the above discussion would seem rather critical to a discussion of the findings in Healey et al. Yet none of it is discussed. This is particularly problematic because -to the best of my knowledge- there isn’t a single working implementation of syntactic priming models that does not involve some sort of (at least implicitly) frequency-sensitive learning such as the surprisal-based priming proposed in Jaeger and Snider 2013 (see Chang et al., 2013; Reitter et al., 2011 and follow-up work). Critically, as you pointed out in your reply (but not in Healey et al., 2014), such models do not actually make the prediction that priming should always be observed. In this regard, Healey et al (2014) fails to connect with the theoretical progress that has been made over the last 10 years.

      This brings me to the anti-alignment effect observed across speakers. In my view, this is the novel effect of theoretical interest since neither the interactive alignment model NOR the learning models would be able to account for it without further assumptions. I first (again) discuss some literature that has more directly looked for the existence of such anti-alignment and then return to the methodological questions I had.

      4) Anti-alignment? This brings me to research on socially mediated syntactic priming effects. I'll focus here on our Weatherholtz et al (in press) paper simply because I know it best, but there is a large literature on this question (you cite some; for those interested in this question, Weatherholtz et al provide many additional references).

      In the Weatherholtz et al (in press) paper, we explicitly set out to test whether syntactic priming is a) automatic in that it is always present and b) if it is something that social alignment can mediate to such an extent that we actually anti-align (e.g., with people that we really dislike). So, can social effects a) mediate syntactic priming or b) even reverse it? We found evidence for a) but not for b). That is, we didn’t find anti-alignment, unlike you —despite the fact that we (successfully) got participants to really dislike the partner that prompted the priming (see Weatherholtz et al for details).

      So, it is remarkable that you find anti-alignment effects even in the absence of such strong dislike (I think it is rather unlikely that the BNC dialogues you selected all involved as much dislike as evoked by the politically-charged diatribes in Weatherholtz et al., in press). With all of this in mind, let me now turn to my concerns about the methodology employed in Healey et al (2014). (Actually, it's not the methodology alone that I am concerned about, but rather the conclusions you draw from it.)

      5) Methodological concern 1 – the use of automatically parsed corpora: I was concerned that your results involve an automatically parsed version of the BNC. You pointed out that you also use a manually annotated corpus and find the same result. Fair point and that addresses this concern.

      6) Methodological concern 2 – lack of control for meaning difference: This is the crucial and primary concern I had and have. Unlike the structure-specific approach taken in most lab-based experiments and some corpus-based research on conversational speech (e.g., work on ditransitive alternations, active/passive choice, etc.), Healey et al (2014) take a broad-coverage approach (cf. Reitter et al 2006). This approach does not control for the effect of meaning on grammatical encoding. Let's take a relatively recent model of sentence production, Chang et al (2006), though others would do for my point. In this model, roughly two things constrain the sequence of words we end up producing. One constraint comes from a sequencing (ordering) component that puts words into order. This is the component subject to syntactic priming. Another (typically much, much stronger) constraint comes from the meaning we wish to convey. This makes sure that I am exceedingly unlikely to get (detectable) interference from a recently produced ditransitive structure if all I'd like to say is "apple" or "Let's go". This constraining role of what I'd like to say extends to more subtle cases. Consider the sentence "He gave me a banana, but I didn't give a rat's ass". Let's imagine that we typically are capable of producing the second part of the sentence correctly without producing a ditransitive structure (indeed, syntactic blends are rather infrequent). I mention this example because the verb "give" is used twice and heads a VP both times, but the second mention isn't aligned with the first. Still, I think it's fair to say that this wouldn't be considered a counter-example to the automaticity of syntactic priming or even as evidence for anti-alignment. The second part of the sentence is using a sense of "give" that is not the one compatible with a ditransitive (I can't say without loss of meaning "I didn't give a rat's ass to him"). Yet that is, I believe, how this or similar examples would be counted in the broad-coverage approach taken in Healey et al (2014) (please correct me if I'm being completely wrong here and problems like this could not arise). Similarly, the broad-coverage approach will treat any NP, regardless of what it refers to and what other pragmatic constraints constrain its form, as a site where priming from previous occurrences of NP could manifest. But we already know that this isn't the case (in case it's not immediately intuitive, see Carbary et al 2010 for evidence that pragmatic constraints easily out-compete syntactic priming, even when everything else is held constant).

      Now, if Healey et al had concluded that syntactic priming is weaker than meaning constraints, there wouldn’t be a problem here (and neither would most people in the field be surprised!). Even a much bolder claim that any factors affecting how we linguistically encode a message will have less of an effect on the distribution of forms and structures than what meaning we want to convey would likely be rather broadly accepted.

      But that is not (all) that Healey et al (2014) conclude [indexing added]:

      “Our results show that [a] in ordinary dialogue people systematically diverge from one another in their use of syntactic structures in adjacent turns. This is [b] incompatible with a structural priming account of syntactic co-ordination in dialogue and [c] challenges the more general claim that automatic resource free priming provides the basic mechanism underpinning successful human communication.”

      Claim [a] is clearly supported by Healey et al (2014). The conclusions drawn from it though are what is problematic (Claims [b] and [c]).

      If Claim [b] and [c] are meant to argue against the strong interpretation of Pickering and Garrod’s (2004) interactive alignment model I described above, I might agree (though I’m not sure that this is what Pickering and Garrod had in mind). Clearly, syntactic priming is not omni-present, always strong, unmediated, etc. (see in particular, Carbary et al 2010; but also Chang et al., 2006; Jaeger and Snider, 2013).

      From this it doesn't follow, however, that alignment is not one of the basic mechanisms "underpinning successful human communication". For example, implicit learning accounts hold that syntactic priming reduces future expectation violations (cf. Chang et al., 2006; Dell and Chang, 2013; Fine et al., 2013; Jaeger and Snider 2013), but there is no claim that the preference to align on how we communicate something outranks the effects of what we want to communicate (quite to the contrary, as I laid out above).

      Similarly, it doesn't follow (Claim [b]) that there is no structural coordination in conversation. Healey et al (2014) do not provide any evidence that directly addresses whether there is structural coordination conditional on what one wants to say (which is the claim —I would submit— the majority of the field has in mind when they talk about structural coordination). Ok, on to one final point I'd like to make.

      7) One can't just ignore structure-specific research. All of this then also affects the question of whether your claim (that "previous work has not tested for general syntactic repetition effects in ordinary conversation independently of lexical repetition") is accurate or not. Your response was that you're interested in the claim that priming "underpins all successful human interaction", and you continue to say that "This focus means, we think, that Jaeger and Snider (2013) isn't directly relevant since it considers only the dative alternation (as Jaeger notes)."

      I disagree. First, even if you think that the ditransitive structure, complement clauses, and relative clauses —all of which have been investigated in conversational speech while controlling for lexical overlap— are all weird and those findings wouldn’t generalize, you’d still need a story for why these studies find alignment and Healey et al (2014) don’t. So, at the very least, this finding, which directly conflicts with yours, would deserve discussion! Additionally, the outright dismissal of structure-specific research as irrelevant to the question of whether syntactic priming is “automatic [and] resource free” (see conclusions of Healey et al, 2014) flies in the face of all the research on other specific structures that has found syntactic priming in conversational speech (incl. actives/passives, particle verb constructions, binomials, comparatives, etc.).

      Second, see my point 6) above.

      Concluding blabla
      Let me (again) stress that, despite the problems I see with the broad-coverage approach as applied in your paper, I think it’s important that we continue to push and develop this approach, as it is a critical complement to structure-specific research on syntactic priming. That said, I think that the critical novel finding of Healey et al —anti-alignment across speakers— is confounded … unless the intended conclusion is that what we want to say is a stronger determinant of the structures and word sequences we end up producing than syntactic priming is. This latter conclusion is, however, considerably weaker than what Healey et al (2014, abstract) suggest:

      Our results show that when lexical repetition is taken into account there is no general tendency for people to repeat their own syntactic constructions. More importantly, people repeat each other’s syntactic constructions less than would be expected by chance; i.e., people systematically diverge from one another in their use of syntactic constructions.

      Or, if you find this a more agreeable way to put it, the above statement is only true if we simultaneously acknowledge that by “chance” we mean “while ignoring the most important factor known to constrain linguistic form” (meaning).

      Anyway, I hope you’ll continue this line of research further and that perhaps some of the thoughts written down here will help to shape future work on this question. Fwiw, I agree that too little is known about the strength and pervasiveness of alignment effects in conversational speech and the factors that mediate them.

      Additional references
      Bock, K., & Griffin, Z. M. (2000). The Persistence of Structural Priming: Transient Activation or Implicit Learning? Journal of Experimental Psychology: General, 129(2), 177–192. doi:10.1037/0096-3445.129.2.177

      Carbary, K. M. (2011). Syntactic priming, message formation, and successful communication in unscripted dialogue. Doctoral dissertation, University of Rochester.

      Carbary, K. M., Frohning, E. E., & Tanenhaus, M. K. (2010). Context, Syntactic Priming, and Referential Form in an Interactive Dialogue Task: Implications for Models of Alignment. In Proceedings of the 32nd Annual Conference of the Cognitive Science Society.

      Malhotra, G. (2009). Dynamics of structural priming. Doctoral dissertation, University of Edinburgh.
      Malhotra, G., Pickering, M., Branigan, H., & Bednar, J. A. (2008). On the Persistence of Structural Priming: Mechanisms of Decay and Influence of Word-Forms. In Proceedings of the 30th annual conference of the cognitive science society (pp. 657–662).

      Reitter, D., Keller, F., & Moore, J. D. (2011). A computational cognitive model of syntactic priming. Cognitive Science, 35(4), 587–637. doi:10.1111/j.1551-6709.2010.01165.x


    David Reitter said:
    August 31, 2014 at 5:58 pm

    This is a useful conversation. I’d like to clarify a few things, mostly with respect to some of our work on the Switchboard and Map Task corpora. A number of technical points are made. In a nutshell, I point out that we have studied generalized syntactic repetition, with some control for lexical repetition, and that it is important in this context to control for frequency effects. The magnitude of priming can differ with genre, and conversational devices may override the general tendency.

    1. Generalized syntactic repetition

    One of the core points of Reitter, Moore & Keller (2006), Reitter (2008), and Reitter & Moore (2014) is that generalized syntactic adaptation (priming, alignment) is studied, as opposed to the adaptation of selected syntactic constructions.

    We generally have done this with phrase-structure rules, but also with CCG categories (Reitter, Hockenmaier, & Keller, 2006) and part-of-speech bigrams (Reitter & Keller, 2007).
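    To make the broad-coverage idea concrete, here is a toy sketch (not the actual measure from any of the papers above, which use regression models over repetition distance): an utterance is represented by the phrase-structure rules used to generate it, and repetition is the fraction of the target's rule tokens that also occurred in the prime. The rule strings and utterances are invented for illustration.

```python
def rule_overlap(prime_rules, target_rules):
    """Fraction of rule tokens in the target utterance that also
    occurred in the prime utterance. Rules are strings like "NP -> DT NN".
    """
    if not target_rules:
        return 0.0
    prime_set = set(prime_rules)
    repeated = sum(1 for rule in target_rules if rule in prime_set)
    return repeated / len(target_rules)

# Toy example: two utterances sharing the rules "S -> NP VP" and "NP -> DT NN".
prime = ["S -> NP VP", "NP -> DT NN", "VP -> V NP", "NP -> PRP"]
target = ["S -> NP VP", "NP -> DT NN", "VP -> V"]
print(rule_overlap(prime, target))  # 2 of 3 target rules repeat -> 0.666...
```

    A real broad-coverage analysis would of course also exclude verbatim lexical repetition and model the decay of repetition probability with distance, as described above.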

    Healey et al’s (2014) results do deserve some reconciliation work, particularly because the adaptation effects in question are otherwise pervasive in corpora. Jaeger’s work, and that of Gries and Szmrecsanyi, are some examples. In more recent work (Wang, Reitter, & Yen, 2014), we also found both lexical and syntactic adaptation at a larger time-scale in a big dataset of asynchronous, written internet forum conversations.

    2. Controlling for lexical repetition and evidence from lexical boost effects

    Reitter et al. (2006), as well as most of our follow-up empirical work, concerned syntactic priming occurring independently of lexical repetition. We systematically excluded material repeated verbatim from the analysis.

    So, one might ask, could the effects have been influenced by the repetition of at least some lexical material between prime and target?

    First, as Jaeger (above) points out, structure-specific experimental studies have very carefully controlled for lexical overlap.

    Second, it is instructive to look at generalized lexical repetition effects. Classically, lexical boost effects were found in studies that repeated key material (such as the head verb) (Pickering and Branigan, 1998).

    In Reitter, Keller & Moore (2011), we tested a hypothesis derived from the ACT-R model: that the repetition of general lexical (semantic) material boosts priming, rather than just the repetition of heads (Experiment 1). This is indeed what we found. We wrote that this result was compatible with previous results such as Raffray & Scheepers (2009) and Snider (2008, 2009). Fig. 7 is instructive for the discussion here: reduced lexical repetition leads to much-reduced syntactic repetition, although the effect disappears after a few seconds.

    I presented this effect earlier at CUNY 2008 (“The repetition of general lexical material boosts structural priming in language production.”) and in my thesis: Reitter (2008).

    None of these results are incompatible with the observation that dialogue partners may avoid each other’s syntactic constructions. My 2011 ACT-R model basically suggests that syntactic priming is the result of two overlapping memory effects that also apply to other cognitive tasks. The first effect is implicit learning, for which ACT-R provides a logarithmic decay curve, additive activation after repeated presentation, and dependency on prior activation (i.e., frequency of the syntactic rule). The second effect is due to cue-based memory retrieval, where a syntactic-semantic association is learned. This effect persists as long as the semantic material is retained in working memory, which means that it can be rather short-lived, and that syntactic priming interacts with the dialogue genre (e.g., task-oriented or not). Thus, one can see the model as a hybrid between the commonly opposed models of priming, that is, residual activation and implicit learning.

    The model makes a number of predictions and offers explanations. In line with our empirical observations, we can state that the strength of priming depends on the dialogue genre and on other parameters, such as lexical repetition and rule frequency. I’ll discuss those two in the following sections.

    3. Controlling for frequency effects

    The models we built always included an interaction of the crucial decay variable (“Distance”) and the corpus frequency of the syntactic rule or CCG category.

    Invariably, we see increased priming for rare syntactic rules. That is not surprising from a memory standpoint, if one looks at e.g., Hebbian learning, surprisal (Hale 2001), or ACT-R’s memory (e.g., Anderson 1993). I also was not the first to point out the frequency effect for syntactic priming (e.g., Scheepers, 2003).
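    The memory-based intuition can be illustrated with ACT-R's base-level learning equation, B = ln(Σ t_j^(−d)), where t_j is the time since the j-th presentation and d is the decay rate (conventionally 0.5). The sketch below (my own illustration, with invented presentation times, not code from any of the cited models) shows why a fresh presentation boosts the activation of a rare rule much more than that of a frequent one: the frequent rule's activation is already high, so one more presentation changes the log-sum only slightly.

```python
import math

def base_level_activation(presentation_times, now, d=0.5):
    """ACT-R base-level activation: B = ln(sum of t**(-d) over presentations),
    where t is the time elapsed since each presentation and d is the decay rate."""
    return math.log(sum((now - t) ** (-d) for t in presentation_times if t < now))

now = 100.0
rare = [10.0]                                    # one old presentation
frequent = [float(t) for t in range(0, 100, 5)]  # many presentations

# Activation gain from one fresh presentation (1 s ago) for each rule:
boost_rare = base_level_activation(rare + [99.0], now) - base_level_activation(rare, now)
boost_freq = base_level_activation(frequent + [99.0], now) - base_level_activation(frequent, now)
print(boost_rare > boost_freq)  # -> True: the rare rule shows the larger boost
```

    On this view, the inverse-frequency effect for syntactic priming falls out of ordinary memory dynamics rather than requiring a priming-specific mechanism.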

    Did Healey and colleagues control for frequency effects?

    Healey’s general argument does not require doing that. If we weight each occurrence of a syntactic rule equally, then the more frequent rules, which show less priming, will dominate. Highly frequent and thus predictable material will, of course, be less informative, and thus less likely to influence the course of the dialogue.

    4. Magnitude of syntactic priming in the lab vs. in naturalistic dialogue

    Healey et al. (2014) point out that alignment effects in their data are much lower, and even reversed, compared to structural priming found in classical lab experiments.

    Comparing effect magnitudes between controlled lab experiments and corpus studies that use complex regression models is not as straightforward as one might think. However, I do not disagree that naturalistic dialogue will show less adaptation. The 2011 ACT-R model predicts as much: as real-world interlocutors attend to more semantic artifacts due to the dialogue situation and topic, cue-based memory retrieval will have fewer cues to repeat syntactic choices.

    I would agree with Healey et al. that any claim that “conversation is ‘extremely repetitive'” is overstated.

    5. Interactive alignment and required effect magnitudes

    I would not interpret Pickering and Garrod’s work as a complete theory of dialogic interaction. Interactive alignment proposes, as a default case, a mechanism that produces a trend towards mutual understanding at multiple levels of linguistic representation. The underlying cognitive mechanism is cheap, but it in no way prevents speakers from engaging in more explicit grounding or from avoiding repetition as part of a conversational device.

    As a side-note, I remember talking to Amit Dubey quite a bit back in 2005. He looked at within-sentence repetition effects in corpora of written monologue (Dubey, Keller, & Sturt, 2008; Sturt, Keller, & Dubey, 2010). He observed an initial dip in syntactic repetition, followed by a [priming] boost and decay. I do not have a reference documenting this, but generally, a convention or a conversational device mandating repetition avoidance does not seem to be out of the question.

    6. Corpus annotation – in response to TFJ’s note

    Automatic corpus annotation comes with caveats, indeed — as does manual corpus annotation. Keep in mind that human annotators are prone to memory effects, which will be a confound for priming studies if sentences are annotated in sequence.

    The Penn Treebank annotations we used were, to my knowledge, automatically drafted and then hand-corrected. Annotation accuracy is verified by quantifying inter-annotator agreement.

    [Please contact me if you find any of my references ambiguous.]


    Pat Healey said:
    September 29, 2014 at 5:58 am

    Thanks to Jaeger and Reitter for their further comments and clarifications. There are a few further clarifications that might be worth making.

    It’s worth repeating that we are primarily concerned with models of dialogue and not the more general literature on priming, which mostly focuses on language processing in contexts other than dialogue. The specific focus of our paper is the Interactive Alignment Model’s (IAM) predictions relating to dialogue.

    The claims of the IAM about dialogue seem clear, e.g., “… our analysis of dialogue demonstrates that priming is the central mechanism in the process of alignment and mutual understanding” (Pickering and Garrod, 2004, p. 9). It identifies automatic, resource-free priming as the primary mechanism operating across several levels of representation, with repair and higher-order reasoning invoked as auxiliary co-ordination mechanisms. This drives the prediction of a strong general priming effect.

    We think there are important differences between unstructured everyday dialogue of the kind found in the BNC and DCPSE and more task-oriented / experimental cases (even including the Switchboard corpus), but this can be a matter of perspective. We weren’t previously familiar with Carbary’s interesting work, but it focuses on task-oriented dialogue and a specific subset of constructions, and it provides evidence of (some) priming effects, which is rather different from the work we report. We agree with Carbary’s concerns, though, and are encouraged by convergent findings such as Reitter’s and Fernandez’s for the Switchboard corpus.

    If, as we show, people systematically diverge in their choice of syntactic structures in conversation then the explanation for this phenomenon is not likely to be found in priming mechanisms that are primarily used to explain repetition. We note in the paper that selective repetition in particular cases is compatible with a general pattern of structural divergence but doesn’t contribute to explaining it.

    For what it’s worth, we have looked at the ditransitive alternation in natural dialogue in Howes et al. (2010) but found no syntactic repetition effect in the informal dialogue portion of the DCPSE (this was a re-analysis of Gries’s 2005 data, which also included non-dialogue samples). These constructions are relatively rare in natural dialogue (e.g., around 1% of the DCPSE sample we analysed) and, like Gries, our analysis suggests that the repetition effect is sensitive to the specific verbs used.

    Selectively excluding different senses of the same verb combined with the same syntactic structure (as suggested in part of the commentary) would be controversial since some might well argue that these are nonetheless genuine cases of priming. It would also bias our analysis against finding instances of syntactic repetition which would beg the question when we are presenting evidence of divergence / mismatches.

    Our references to “chance” are about the specific contrast we make between real conversations in which the meaning of one utterance influences what is produced next and the control (‘chance’) conversations we construct in which meaning cannot have any influence. The divergence effect is only present in the real conversations suggesting that the demands of meaningful engagement with a conversational partner overwhelm general priming effects.


      tiflo responded:
      October 5, 2014 at 2:00 pm

      Dear Pat,

      thank you for replying to the concerns raised by David and me. I think we are somewhat talking past each other. I hear your point about the IAM and agree with it: at least under the (reasonable) interpretation of the IAM that alignment is the primary mechanism by which efficient communication is achieved, the IAM overstates the importance of syntactic priming.

      However, the problems that David and I raised with your paper are also real. One of my points was that your paper (unintentionally) comes across as misunderstanding some of the claims in the syntactic alignment literature, which is much richer, more nuanced, and more careful than the quote you provide from the interactive alignment model. In particular, the paper does not provide evidence for anti-alignment, which is, however, how many readers will interpret your paper.

      As I also pointed out, you end up making what I would consider overstated claims about previous literature (there is more directly relevant work that speaks to your results and puts them into perspective than your paper suggests). Additionally, there is a missed opportunity here –both in the paper and in this discussion so far– to integrate the existing literature into your discussion (this includes the work on socially-mediated effects on syntactic alignment).

      The fact that your procedure doesn’t find a syntactic alignment effect in conversational speech for ditransitives confirms to me that it’s a problematic procedure that we still need to understand better: other studies that employed more controlled approaches have found these effects in conversational speech (see references in my original post). These previous studies also found that cross-interlocutor priming is much reduced (though not negative).

      Additionally, your point that these structures are rare misses the fact that syntactic priming effects have also been found for frequent alternations in conversational speech (such as the active-passive alternation, cf. Jaeger and Snider, 2007, Study 2, among others). What distinguishes these studies from yours is that they avoid some of the potential confounds that are inherent to the approach you have taken.

      So, let me restate what I would consider the main take-home points of what I’ve been trying to communicate: (1) yes, a strong interpretation of the IAM is wrong in that it overstates the importance of syntactic alignment compared to other discourse-related factors in determining speakers’ decisions during linguistic encoding; (2) there was already evidence for that (Carbary et al., 2010; Carbary, 2011), though it’s nice to see it confirmed; and, last but not least, (3) future work on this question will benefit from a more informed and nuanced discussion of the relevant issues that goes beyond providing further evidence that the IAM –or rather the strong interpretation that alignment is everything– is overstated (which, as noted in (2), had already been shown).

      I appreciate all the time and patience you and your co-authors have put into replying to this blog post and David’s points!


Questions? Thoughts?
