The first in the series I described below comes from semantics, the field that deals with meaning, one rarely covered by popular works. A separate post below will provide a formalization for the math nerds. Following convention, an asterisk marks an ill-formed sentence. Now, suppose you're a 1980s pop-culture prof. I ask you, "How many students showed up for the Who's the Boss marathon?" (which everyone skipped) and "How many came to Weird Science?" (which some attended). Your respective answers sound good in 1i) but bad in 1ii):
1i) No students showed up at all.
1ii) * Some students showed up at all.
Curious as to why no one showed up for the WTB marathon, you ask a student why they ditched WTB but caught WS. 2i) sounds good, but 2ii) sounds bad:
2i) I don't give a flying fuck about Tony Danza sitcoms.
2ii) * I give a flying fuck about John Hughes movies.
What do 1i) and 2i) share that distinguishes them from 1ii) and 2ii)? Think for a sec.
- - - - - - - - - -
The good sentences have a negative like "no" or "don't." Stubborn items like "at all" and "give a flying fuck" are happy only at the negative pole, hence the name negative polarity items (NPIs). Others include "ever," "any," and phrases like "drink a drop," "lift a finger," etc. Convince yourself by trying them in negative & positive sentences. (Yes, linguists really do study "give a flying fuck" -- Paul Postal has an interesting essay on vulgar NPIs here, Chap. 5.) But wait, where's the negative element in the next sentence?
3i) Every student who liked '80s movies at all showed up for WS.
Plainly, there is no negative in 3i). Worse, why does a recast of 3i) sound bad here:
3ii) * Every student who likes '80s movies gives a flying fuck about WS.
Take another look if you want at 1i), 2i), and 3i) vs. 1ii), 2ii), and 3ii), but be warned: you probably won't see it. Lucky for us, Bill Ladusaw, who broke a lot of ground on NPIs, went into linguistics rather than something else. Before we get to the punchline, let's look at a related property of the positive, negative, and "every" sentences.

We were all taught that each sentence has a subject and a predicate: "Plants" is the subject and "love sunshine" the predicate in the sentence that joins them in that order. Simplifying somewhat, we can model the meaning of the subject "plants" as just a set, namely the set of all plant-like things (tulips, grass, etc.), and the meaning of the predicate "love sunshine" also as a set, namely the set of all sunshine-loving things (plants, beach babes, etc.). The sentence "Plants love sunshine" then means that the plant set is a subset of the sunshine-loving set.
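This toy model is concrete enough to run. Here's a minimal sketch in Python; the mini-world and its denotations are invented for illustration, not part of the argument:

```python
# Each subject or predicate denotes a set of individuals in a toy world.
plants = {"tulip", "grass", "fern"}
loves_sunshine = {"tulip", "grass", "fern", "beach babe"}

def is_true(subject, predicate):
    """'Subject predicate' is true iff the subject's set is a
    subset of the predicate's set."""
    return subject <= predicate

# "Plants love sunshine": every plant-like thing is a sunshine-lover.
print(is_true(plants, loves_sunshine))  # True
# The reverse ("Sunshine-lovers are plants") fails: the beach babe isn't a plant.
print(is_true(loves_sunshine, plants))  # False
```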
Our NPI sentences also have the bits "no," "some," or "every" in front. Let's see how substituting arbitrary subsets or supersets of the subject and the predicate affects the truth conditions -- that is, does our initial sentence imply the new one? Let's start with "no" and alter the set denoted by the subject:
4i) No students liked Who's the Boss.
4ii) No male students liked Who's the Boss.
4iii) No people liked Who's the Boss.
The 1st implies the 2nd (subset), but not the 3rd (superset). Let's change the predicate's set:
5i) No students loved Who's the Boss.
5ii) No students saw Who's the Boss.
Again, 4i) implies 5i), whose predicate denotes a subset (whoever loved it liked it), but not 5ii), whose predicate denotes a superset (plenty of people who saw it didn't like it). So substituting a subset works for both the subject and the predicate of a "no" sentence. For brevity, I'll let you convince yourself that starting the sentence with "some" yields the opposite: a "some" sentence implies the sentences that substitute a superset of the subject or predicate. The key is the "every" sentence:
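These entailments can be checked mechanically. A sketch, assuming "no" means the two sets don't overlap and "some" means they do; the toy denotations are invented to mirror 4i)-4iii):

```python
def no_q(subj, pred):
    """'No S P' is true iff the sets don't overlap."""
    return not (subj & pred)

def some_q(subj, pred):
    """'Some S P' is true iff the sets overlap."""
    return bool(subj & pred)

# Invented denotations mirroring 4i)-4iii).
students      = {"ann", "bo", "cy"}
male_students = {"bo", "cy"}                # subset of students
people        = {"ann", "bo", "cy", "dee"}  # superset of students
liked_wtb     = {"dee"}                     # only a non-student liked it

# 4i) is true, and shrinking the subject (4ii) preserves truth...
print(no_q(students, liked_wtb), no_q(male_students, liked_wtb))  # True True
# ...but growing it to "people" (4iii) does not.
print(no_q(people, liked_wtb))  # False

# "Some" runs the other way: a true "some" sentence survives a superset.
saw_ws = {"ann"}
print(some_q(students, saw_ws), some_q(people, saw_ws))  # True True
```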
6i) Every student liked WS.
6ii) Every male student liked WS.
6iii) Every person liked WS.
6iv) Every student loved WS.
6v) Every student saw WS.
The 1st implies the 2nd (subset of subject) but not the 3rd (superset of subject), as well as the 5th (superset of predicate) but not the 4th (subset of predicate). So "every" sentences behave like "no" sentences w.r.t. the subject (subset works) but like "some" sentences w.r.t. the predicate (superset works). Now look back at where the NPIs were allowed. Notice anything?
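Rather than trusting a handful of hand-picked sentences, we can brute-force these monotonicity facts over every subject/predicate pair in a tiny universe. A sketch (the three-element universe is arbitrary; any small one gives the same verdicts):

```python
from itertools import combinations

def subsets(xs):
    """All subsets of xs, as sets."""
    xs = sorted(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

UNIVERSE = {"a", "b", "c"}
ALL = subsets(UNIVERSE)

def no_q(s, p):    return not (s & p)  # "No S P"
def some_q(s, p):  return bool(s & p)  # "Some S P"
def every_q(s, p): return s <= p       # "Every S P"

def downward_in_subject(q):
    """q(S, P) implies q(S2, P) for every subset S2 of S."""
    return all(not q(s, p) or q(s2, p)
               for s in ALL for p in ALL for s2 in subsets(s))

def downward_in_predicate(q):
    """q(S, P) implies q(S, P2) for every subset P2 of P."""
    return all(not q(s, p) or q(s, p2)
               for s in ALL for p in ALL for p2 in subsets(p))

for name, q in [("no", no_q), ("some", some_q), ("every", every_q)]:
    print(name, downward_in_subject(q), downward_in_predicate(q))
# no True True / some False False / every True False
```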
- - - - - - - -
NPIs are allowed where substituting a subset is implied by the original sentence, dubbed "downward-entailing" environments: both parts of a "no" sentence, neither part of a "some" sentence, and the subject but not the predicate of an "every" sentence.
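Ladusaw's generalization then reduces to a lookup table. A sketch encoding exactly the pattern just stated (the names and table are my own framing):

```python
# Downward-entailing argument positions for each quantifier,
# straight from the entailment facts above.
DE_POSITIONS = {
    "no":    {"subject", "predicate"},
    "some":  set(),
    "every": {"subject"},
}

def licenses_npi(quantifier, position):
    """An NPI is happy only in a downward-entailing position."""
    return position in DE_POSITIONS[quantifier]

print(licenses_npi("no", "predicate"))     # True:  "No students showed up at all." (1i)
print(licenses_npi("every", "subject"))    # True:  "...who liked '80s movies at all..." (3i)
print(licenses_npi("every", "predicate"))  # False: *"...gives a flying fuck about WS." (3ii)
```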
So, in just this one nook of the linguistic world we've discovered a bit about how human language works. The meaning of a garden-variety subject or predicate can be modeled as a set of things. We're unconsciously aware of whether swapping that set for a subset or superset is entailed by the original sentence, and we use this to decide whether a quirky class of items sounds good there or not. Given that these data all come from casual speech, they show how complex the ordinary really is, and how prosaic an intimidating language like first-order logic or C++ is by comparison.

Moreover, this sort of knowledge is incredibly flexible. If a new NPI entered the language, say, "dial a digit," used in negative sentences to swear to your mate that you haven't been calling other girls (or guys) -- e.g., your partner asks whether you were chatting last night with so-and-so, and you respond:
7i) Honey I swear, I didn't dial a (single) digit!
-- then we would immediately know it wouldn't work in a positive sentence, say, when your partner tells you to make an appointment with your doctor:
7ii) * OK, I'll dial a digit and book it for Friday.
Because our unconscious knowledge of language is so abstract and sophisticated, we'd have no trouble classifying "dial a digit" as an NPI given the context we heard it in, probably after just one exposure; and we'd be able to do all the typical NPI things with it, despite a lack of overt instruction. Try building such quirks into an AI language faculty -- I'm not denying the computational theory of mind, just showing how primitive really existing anthropic AI is compared to the supposed sluggard shlepping away between our ears.