There’s an interesting phenomenon in language comprehension called the ‘garden path effect’. Proposed by Frazier and Fodor (1979), it basically means that when you are reading or hearing a sentence, you split it up into chunks (you parse it), and due to something called ‘late closure’ you keep adding as many words as possible to the chunk you’re currently working on. This works quite well to illustrate the way in which we comprehend language.
Take a look at these examples:
1. (Since Jay always jogs a mile) he is very fit.
2. (Since Jay always jogs a mile) seems like a short distance.
3. (Since Jay always jogs) a mile seems like a short distance.
In (1), keeping the chunk as long as possible works quite nicely. In (2), it fails miserably: with the chunk positioned as shown, the sentence doesn’t make any sense. Instead, you have to backtrack and reposition the chunk as shown in (3). Now, this all sounds a bit woolly until you realise that there’s plenty of evidence for it from latency measurements and eye movement studies. The latter in particular are neat – you can watch people reading a sentence come to a screeching halt as the garden path model fails them and then regress to the nearest noun.
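The late closure strategy – attach greedily, backtrack on failure – can be caricatured in a few lines of code. This is purely a toy sketch, not anyone’s actual model: the `makes_sense` oracle below is a made-up stand-in for the grammar, which here just lists the grammatical continuations for these two example sentences.

```python
def late_closure_parse(words, makes_sense):
    """Late closure, cartoon version: try the longest possible first
    chunk, and backtrack one word at a time if the resulting split
    doesn't make sense."""
    for size in range(len(words) - 1, 0, -1):  # longest chunk first
        chunk, rest = words[:size], words[size:]
        if makes_sense(chunk, rest):
            return chunk, rest
    return None

def makes_sense(chunk, rest):
    # Stand-in for a real grammar (an assumption for illustration):
    # we simply hard-code the grammatical main clauses for the
    # Jay sentences above.
    return " ".join(rest) in {"he is very fit",
                              "a mile seems like a short distance"}

# Sentence (2)/(3): the greedy chunk "...jogs a mile" fails, so the
# parser backtracks until the chunk is "since Jay always jogs".
sentence = "since Jay always jogs a mile seems like a short distance".split()
chunk, rest = late_closure_parse(sentence, makes_sense)
# chunk -> ['since', 'Jay', 'always', 'jogs']
```

On sentence (1) the greedy strategy succeeds immediately, which is the whole point of the model: most of the time late closure is right, and the garden path only shows up when it isn’t.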
There’s another possible method of parsing sentences, and that’s by looking at their semantics – their meaning – and choosing the parse that makes the most sense.
1. (The defendant examined) by the lawyer turned out to be unreliable.
2. (The evidence examined) by the lawyer turned out to be unreliable.
In (1), there are two possible places to position the first chunk. The defendant could be doing the examining (wrong, as shown above) or the defendant could be the one being examined (right, not shown). In (2), there’s only one reading – since evidence can’t do any examining (it’s not alive, is it?), it must be the thing being examined. So according to semantic parsing, you’d expect people to have fewer problems reading (2) than (1). And that’s true, according to eye fixation studies. By the way, in both the sentences above, the parsing has been done ‘wrong’ – it has been done according to the garden path model with late closure.
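The semantic cue here boils down to animacy: only an animate noun can be the agent of ‘examined’, so inanimate nouns get just one reading. Here’s a toy sketch of that idea – the `ANIMATE` set and the `readings` function are invented for illustration, not part of any real parsing model.

```python
# Nouns assumed (for this toy example only) to be capable of examining.
ANIMATE = {"defendant", "lawyer", "witness"}

def readings(noun, verb="examined"):
    """Return the readings available for 'The <noun> <verb> ...'."""
    # The reduced-relative reading (noun is being examined) is
    # always available.
    options = [f"the {noun} was {verb} by someone"]
    # The main-verb reading (noun does the examining) needs an
    # animate noun.
    if noun in ANIMATE:
        options.append(f"the {noun} {verb} something")
    return options

len(readings("defendant"))  # 2 competing readings -> garden path risk
len(readings("evidence"))   # 1 reading -> easier to parse
```

The point of the sketch: ‘evidence’ never creates ambiguity, so a semantics-driven parser has nothing to backtrack from, matching the eye fixation results.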
As usual in these things, the two models (semantic and garden path) have been put together in a new ‘connectionist’ approach which takes the best bits from both. But that’s not what I’m interested in. What I’m interested in are the implications of altering the meanings of a word. Imagine if you had this sentence:
The AI examined by the lawyer turned out to be unreliable.
Now, imagine we go back far enough that it is inconceivable that AIs are intelligent enough to do any examining. That would mean there is only one way to parse the first chunk: the AI is being examined. But go forward x decades, to when it is conceivable that AIs could be examining something, and you’ve just created an alternative parse for the sentence. How does the brain cope with this? Is there a gradual alteration of the semantic structure whose effects slowly filter down through the language systems, or is the change sudden?
(I just realised that I may have misunderstood the exact mechanisms of parsing and where to put the brackets, but the general concepts still hold. I think I’m right, anyway…)