May 16, 2011

The decline in violence has weakened Americans' tough-on-crime stance

It's time for a major correction of what American attitudes about crime look like. Everyone knows that no amount of criminal blood could ever satisfy our lust for executions. Everyone knows that no level of public spending could ever make us feel safe enough to say "hold on, that's enough." Everyone knows that no punishment could ever be so draconian that we would ease off of our push toward harsher sentences. And everyone knows that the way our views on crime are headed, it's only a matter of time before we find ourselves in the same world as the theocratic regimes that we today condemn as barbaric.

Well then, everyone knows dick about American attitudes on crime.

In a post down below, I showed that people's fear of walking the streets at night just about perfectly tracks the homicide rate. This shows that ordinary people can figure out for themselves when the world is getting safer or more dangerous -- the average person is not following the Bureau of Justice Statistics' reports, but drawing on their own experience, the word-of-mouth information from others in their social networks, and the sensational and therefore unforgettable stories on the evening news.

Given that, it's possible that their attitudes toward fighting crime would change as the crime rate itself changed. It's not necessary -- they could maintain the same war-on-crime stance no matter what, the way that they might have their children vaccinated against diseases that have become rare. That's certainly the mainstream view of the mainstream view on crime.

The General Social Survey asks respondents several questions about fighting crime. Three of these questions have been asked long enough -- since the early-to-mid 1970s -- for us to see how, if at all, they reflect the change up and down in the homicide rate. Here is the percent of people holding tough-on-crime responses (in blue), along with the homicide rate (in red):


As with people's fear of walking the streets at night, so do their attitudes on fixing the problem of crime respond almost perfectly to changes up or down in the homicide rate. When it goes up, people want more spending to halt it, harsher treatment of criminals by the courts, and more executions for murderers. This even carries over a couple of years after the peak in the homicide rate, as though people are not going to get soft on crime right away -- they wait for a pronounced steady decline "just to be sure."

Over the past 15 years, though, Americans have lost their formerly high level of bloodthirstiness when it comes to dealing with criminals, although a majority still support these policies. And who knows what will happen as the homicide rate falls even further? Support for these views is already nearing 60%, or soon will be -- once it falls below that, I don't think it would be too hard for government to ban the death penalty, give up the War on Crime mindset, and return to the pity-the-poor-criminal treatment of earlier and safer times. This will start another round of rising crime all over again.

How can we get a better feel for just how great the softer-on-crime shift has been? We will consider support for the tough-on-crime views to be a threshold along a continuum of beliefs, with the left side being soft on crime and the right side being tough on crime. We then pick a threshold -- support for the death penalty, say -- and ask what percent of people from one time period were at least that tough-on-crime, and what percent from another time were. We can convert these percentages into standard deviations and compare how far apart the average person was in one time vs. another time. (This is La Griffe du Lion's "method of thresholds.")

It's like using the percent of people who can dunk on a basketball court to tell how much taller or shorter people have gotten over time. So to make the changes vivid, I'll talk about them as though the average American has gotten shorter since the mid-'90s -- by how much, and in how brief a time, is the question.

Tough-on-crime views ran at fairly steady and high levels from circa 1980 through 1994, and they're at a low point now, so we'll compare 1994 to 2010. For support for the death penalty, the average American became 1.3 inches shorter (0.43 s.d.). For believing the government is spending too little to halt rising crime rates, the average American became 1.6 inches shorter (0.52 s.d.). And for believing the courts are not harsh enough in dealing with criminals, the average American became 2.4 inches shorter (0.79 s.d.). Again, imagine these changes in average height taking place in only 16 years -- not even a full generation.
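For anyone who wants to run the percent-to-s.d. conversion themselves, the method of thresholds takes only a few lines of Python using the standard library's inverse normal CDF. The support levels plugged in below are hypothetical placeholders, not the actual GSS percentages, and the 3-inches-per-s.d. height conversion is the rough figure used for the analogy.

```python
from statistics import NormalDist

def sd_shift(pct_then, pct_now):
    """Method of thresholds: treat holding the tough-on-crime view as
    clearing a fixed threshold on a latent normal scale, and return how
    many standard deviations the population mean dropped between the
    two periods."""
    z = NormalDist().inv_cdf
    # When a fraction p of people clear the threshold, the mean sits
    # z(p) standard deviations above it; the drop is the difference.
    return z(pct_then) - z(pct_now)

# Hypothetical support levels (not the actual GSS figures):
drop = sd_shift(0.80, 0.65)
print(round(drop, 2))        # shift in s.d. units
print(round(drop * 3, 1))    # in "inches," at roughly 3 in. per s.d. of height
```

Note that the threshold's exact location never matters -- only the fraction of people on the tough side of it in each period, which is exactly why a survey item works as a threshold.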

In fact, the change is so dramatic that there are some amusing -- or perhaps depressing -- reversals of who we think of as screaming for blood vs. who wants to treat criminals with kid gloves. At any given time, men are more tough on crime than women, conservatives more so than liberals, high school-educated more so than college-educated, and middle-aged more so than youngsters. In the graphs below on support for the death penalty, this is shown by the red line being at or above the blue line in any given time period (click to enlarge).


However, throughout the heyday of feeling like striking back at criminals, the normally weak-on-crime groups were clamoring equally or even more loudly for the sick fucks to be crucified. Look at the blue line between roughly 1980 and 1995 -- it's either as high as, or higher than, the red line was in the earlier '70s or in the later period of the 2000s. In other words, women of the '80s and early '90s were more bloodthirsty than men have been in the past decade. Remember Ellen Ripley and Pat Benatar? What so-called male actor or singer of recent years could match them, let alone top them? And similarly for the other contrasts. (Here "liberals" and "conservatives" are the moderate or near-moderate members of each camp, not the extremes.)

These graphs also make it clear that differences between demographic groups are smaller than differences between people from one time period vs. another. Compare the distance between the red and blue lines -- it's not as large as the swings up and down of either line over time. Back in the '80s, even self-described liberals were out for criminal blood -- they couldn't ignore the problem by that point. And over the past 15 years, even self-described conservatives have abandoned their guard posts in the war on crime, believing that since the problem seems to be solving itself, why stand watch to the same degree as before?

Because differences in the zeitgeist between two periods are so much more powerful at explaining differences in attitudes than demographic factors are, social scientists need to pay attention to when a study was done. Lumping studies from different periods together will erase what could be substantial variation across the cycle of violence. That might explain why one study's results are inconsistent with those of another -- one might have been from rising-crime times, and the other from falling-crime times. That difference induces a strong shift in our psychology, so it must be taken into account.

It also shows why people form stronger and longer-lasting communities based on nostalgia than on political views, sex, etc. What time period you prefer creates much deeper cuts between Us and Them, as opposed to men vs. women or liberals vs. conservatives. For a Reagan-sympathizing conservative male of today, a mainstream liberal chick who time-traveled from 1985 would be great fun to hang out with and a lot easier to identify with than his fellow conservative males from 2011. Partisan splits that would have seemed unbridgeable at the time now look puny compared to the gulf separating people of one era from those of another.

GSS variables used: COURTS, NATCRIME, CAPPUN, AGE, SEX, EDUC, POLVIEWS

May 15, 2011

Americans softening up on crime, a hint from current horror movies

We never stop hearing about how Americans are so bloodthirsty when it comes to crime policy, how black-and-white our moral view of crime is, and in short how we're all one step away from an Iranian criminal justice system. I've gone through the General Social Survey data and found, not surprisingly, that in reality those hardline attitudes track the homicide rate and so have been plummeting since the mid-1990s.

But before looking at charts based on social science data, which will be up tomorrow, let's consider a more qualitative sign from popular culture that attitudes have shifted strongly away from the war-on-crime mindset of more violent times. This way we can spot a similar change in attitudes in other times and places, where quantitative survey data are not available.

During the golden age of horror movies (the mid-'70s through the late '80s or early '90s), the killers are not given any "backstory" at all. They come from who knows where, and God knows how they got the way they are. Any attempt to flash back to a troubled childhood or implicate a rough labor market (or other set of "social forces") would have been read by audiences as a callous rationalization of their murderous behavior.

This uncertainty about the causal process that gave us the maniac killer adds to the overall sense of dread and humility about frightening problems that may lie beyond the powers of human beings to end. You give it your best shot and hope for the best -- though more often than not the killer gets up again, murders some more people, and returns to wreak further havoc in the sequel.

Earlier, when the society was still relatively naive about crazy and evil people, the consensus among social scientists was that crime was a social dysfunction, or perhaps the result of childhood psychological trauma. This supposedly complex view of the world yielded the moronically simplistic cure of the Great Society era -- throw a bunch of money at poor people, and they'll stop going crazy and killing each other. Or if they had the wrong kind of parenting, hand them over to a shrink to be fixed up.

This social determinism view was not confined to crimes that could at least plausibly be explained this way, such as the proverbial thief who steals a loaf of bread to feed his starving family. Even the most sick and twisted criminals such as serial killers were portrayed as monsters who we should feel sorry for -- just look at how they had to grow up.

The most infamous example of this is the end of the movie Psycho, where the full backstory of this demented monster is explained at length, and with plenty of Freudian psychobabble laid on extra thick. And with all of the shots of Mrs. Bates shouting at her timid little son, they might as well say it straight up: you'd turn out that way, too, if you had that domineering old bitch for a mother. He wasn't an evil person -- just the product of a noxious upbringing. You should imagine yourself in his shoes and feel sorry for him.

After the crime rate peaked in 1992, we lost our memories of how horrific killers can be, and this opened the way for sympathetic portrayals of serial killers. Documentaries of Ed Gein, David Berkowitz, etc., now routinely focus on their stressful childhoods in order to soften the audience for the heavy news that, by the way, this guy hunted people down and carved them up like game animals.

Somewhere around the early 2000s, whatever residual memories we had of serial killers had vanished. It became de rigueur in horror movie remakes to provide the humanizing backstory of the psychotic killers, including the Texas Chainsaw Massacre, Halloween, Friday the 13th, and A Nightmare on Elm Street, in total contrast to the originals where the killers come from outside the human race, and so whose behavior defies explanation. Even brand-new series, like the Saw movies, have labored to explain how the sick fucks got that way. I mean, you aren't going to hold it against that bitter Jigsaw nerd, are you? -- after all, he was a cancer victim, and besides he's only trying to teach the sheeple to unplug themselves from the Matrix and appreciate life, those ungrateful non-cancer-having bunch of shits.

Sociopaths rely on the gullibility of their would-be victims in order to thrive, and it's clear from horror movies of the past 10 years that we have been a lot more willing to listen to the story of where they're coming from, instead of not giving a damn and just trying to waste them before they kill one of us again.

Like I said, the social survey data support this conclusion, but this one may be more worth bearing in mind since it is easier to spot in other settings where opinion surveys are lacking. Once this pattern becomes common, it won't be too long before the society is hit by another crime wave: it's a clear signal that they've let their defenses down around exploiters.

May 14, 2011

Music changes

A helpful way to see how much things change is to look for a pattern reversal. For example, even though in any given time period Germanic people are taller than Mediterranean people, today's Italians are taller than medieval Germanics -- a testament to just how much health has improved with the cheap availability of animal foods after the industrial revolution and the development of antibiotics.

Likewise, even though in any given time period straight males tend to make more powerful music anthems than gays, no hetero musician of the past 20 years has even come within orbit of Freddie Mercury and Rob Halford -- a testament to just how drained of testosterone the current period has been.

And of course the same goes for women vs. men in making great music, where even one of the old school of female rock singers (from circa 1975 to 1989) could wipe the floor with the whiny dork squad of current male band-leaders or rap singers who can only front about how big their balls are.

What about music aimed at the unsophisticated, like kids? I gave another listen to some of the songs from Jem, an adventure / soap opera cartoon made for elementary school girls in the mid-1980s, and they're not only more catchy but more adult in theme and sound, compared to the drowsy and overly cutesy junk made for today's 20-somethings:

Theme song - The deep, breathy background vocals give way to the abandon of the possessed lead singer. As with many of the other songs, a syncopated bass line pulls you along rhythmically but not in simple marching-step time, heightening the sense of the controlled loss of control.

"Time Is Runnin' Out" - What a good "girl power" song would sound like, stressing the coming together of a team (rather than the ego-inflated individual) and accomplishing something of group-level importance (rather than self-promotion or "let's show the boys"). You wonder if today's parental censors would approve of the line "Come on baby, let's go for it!"

"Too Close For Comfort" - More Pat Benatar for the kiddies.

"Who Is He Kissing?" - Jem leads a double life as a straight-laced manager of a record label, as well as the lead singer babe of its glam rock band. In both of her roles, she plays the girlfriend of the same guy, who is unaware of the identity of the two girls but is happy to be seeing them both. In a song that has more psychological conflict than anything out recently, Jem wonders "Who is he kissing -- Is it me? Or is he making love to a fantasy?" Once more the singer's voice sounds thrillingly possessed, not in-control like The Spice Girls or Fergie.

"When It's Only Me And The Music" - As I've discussed many times before, the vanishing of slow dance music is one of the more palpable symptoms of how moribund the sex lives of young people have been, since this genre of music lowers inhibitions like no other. In wilder times, even 7 year-old girls wanted to get comfortable with that sound and feeling in preparation for the real thing, whereas the average college girl today has slow-danced no more than twice in her entire life. (Get a load of all that hair.)

May 11, 2011

Can people tell what's going on?

There's a question in the post below about whether the crowd's perception of trends is reliable or not. The evolutionary take is that people will be good at detecting trends that are highly important to their survival and reproduction, that are persistent, and that would have varied in our ancestral environment (if some variable were constant, there would be no need to pay attention to its rate of change).

The homicide rate fits these criteria. Your survival depends on knowing how much violence is going on; at least where we have data, the homicide rate can swing up and down over the course of decades; and its level is not a given -- you have to learn it while growing up, and pay attention to whether it changes direction even after that. Even in very violent societies like the Yanomamo, there are cycles of greater and lesser warfare and raiding, so there's no constant level of homicide in pre-state societies.

The General Social Survey asks people a question (called FEAR) about whether or not they're afraid to walk the streets of their neighborhood at night. This graph plots the percent of respondents who say they are afraid (in blue), together with the homicide rate (in red). The fear data are from 1973 to 2010, about every other year; the homicide data are annual from 1970 to 2009.


People not only have a good perception of how dangerous and scary their world is at the bird's-eye level, picking up the high level of violence of the 1970s and '80s and then becoming less worried during the '90s and 2000s. They are sensitive even to temporary fluctuations, such as becoming somewhat less afraid when there was a respite in the early '80s, then ramping up their fear again when the homicide rate resumed its high level for the rest of the high-crime times. Because these two series track each other so closely, almost year for year and zigging and zagging in sync, we can be sure that they aren't spuriously correlated, as you might suspect when two variables merely trend steadily upward throughout the whole period.

It's not clear whether people's perceptions respond right away or with a lag of a year or two, as they wait for more than a single year's experiences to confirm their hunch. The key point is that people's perception of changes in the violence level is amazingly accurate. For perceptions that are shared by a broad cross-section of the population, that last for a long time, and that are related to areas of life central to Darwinian fitness, our first assumption should be that the mob is right, and that the priestly-bureaucratic class that denounces them as fear-blinded animals is trying to keep them from believing their own lying eyes.

Why it's adaptive to focus on the sensational and "over-react"

Horrific events, like the one mentioned in a post below where an 8 year-old girl was abducted, raped, and murdered on her way home from school, happen in real life. They're not a media fabrication. When people perceive their prevalence to be rising, as they did sometime in the later 1970s and in full force during the '80s, they begin to panic -- what is the world coming to?

Nerdy skeptics at the time, and eventually everyone once they have the benefit of hindsight, charge the panickers with over-reacting to a sensational event that is being blown out of proportion. Meanwhile, the panickers look back at the non-believers like they're at best clueless, and at worst complicit in allowing the problem to grow even more out of control.

Since the panickers' reaction to a perceived growing threat is so widespread across human groups and back through time, it is probably the correct reaction -- otherwise the individuals and/or groups who adopted it would have been weeded out long ago. Yet the educated classes always charge them with blowing things out of proportion, being driven by rare (and, so it is implied, irrelevant) sensationalistic stories. Let us see why the mob is right.

The elite mistakenly believe that the average panicker is trying to do epidemiological research and estimate the probability that a bad event will happen, for various levels of "bad" -- assault, burglary, rape, murder, and so on. So when one of the panickers' representatives starts to spread a suspicious estimate -- say, that 2 million children are abducted every year -- the nerd, who has been waiting to pounce, sends off a variety of reasons why this estimate is likely bogus, and so why no reasonable person should pay any attention to the panickers, if this moron's estimate is any guide to their collective brainpower.

But, strange as it may seem, real people making real decisions in real life are not trying to ape academics, who live in a sheltered and fake world. They are not trying to estimate how widespread a problem is at an isolated point in time -- rather, they are always talking about how some pattern of behavior is growing more common or dying out during one period compared to an earlier period, i.e. the rate of change or in what direction society appears to be moving.

Well, OK, the educated skeptic says, sure the rate of child abduction (or rape or murder) may have been increasing back when everyone was panicking about it. Still, because the hysterics had so exaggerated how common it was at the time, they wound up worrying about an extremely rare problem. There are problems far more common, and only somewhat less devastating, that it would have made more sense to panic about, like a parent who pushes their kid down the stairs or smacks them across the face for changing the channel without asking, as opposed to the far rarer cases of rape, murder, etc.

However, those less sensational cases are necessarily harder (more costly) to get good information about -- you'd have to do extensive interviews with a large chunk of the neighborhood to find out. Paying attention to the sensational solves that problem by pushing us to acquire only very cheap information -- when a little girl is abducted, raped, and killed, everyone will know very soon, with no research on their part required.

Ah, but the skeptic interrupts, that sensational info may be cheaper but it's of lower quality -- what does it really tell us about violence in general? It only tells us about some incredibly rare act of violence that, while lamentable, is too uncommon to be worth losing sleep for days over.

And yet the sensational does tell us about the overall distribution of violence. Such extreme cases serve as a convenient point along the continuum of violent acts for us to apply La Griffe du Lion's "method of thresholds." In brief, to see how two distributions differ on average, find a threshold value for the trait in question. Consider the example of trying to find out whether people in the neighborhood are getting taller, shorter, or staying the same height. You could look at how many of them can dunk on the basketball court. We don't know exactly how tall you need to be in order to dunk -- we just know that it's tall enough that we won't mis-classify anyone (midgets cannot dunk, and very tall people can).

That, by the way, is the danger in the supposedly sophisticated approach of trying to measure the prevalence of less sensational cases. The kid who shows up to school with a bruised eye -- did he fall down, was it just rough-housing with his brother, did he miss a fly ball, did his stepdad beat him, or what? The more sensational the threshold, the less difficulty there is in classifying people correctly. If he shows up with bruises all over his body, he got beaten up by someone -- not ordinary rough-housing, not a dropped fly ball. If he's not only bruised up but also dead, then we're certain he was the victim of violence.

Back on the basketball court, say that we -- meaning lots of us -- notice that there seem to be a lot more people playing ball who can dunk, compared to when we were little kids ourselves. Not being interested at all in epidemiology, we don't bother estimating the probability that a kid can dunk now, and the probability that a kid could have dunked when we were little, as though this calculation were the basis for our hunch. We simply notice that, despite the population of kids staying more or less the same in size, a lot more of them are tall enough to dunk these days. We also hear complaints about kids having to duck to fit under doorways, that beds are no longer long enough in hotels, and so on. (This all happened for real in Holland as the Dutch got dramatically taller in recent decades.)

When a greater fraction of the population meets or exceeds some threshold value, that either means that the average has increased -- i.e., that the distribution has shifted toward the right -- or that the variance has increased -- i.e., that the distribution has stretched out fatter in both directions, while staying the same on average. Here a quick glance at the other extreme tells us which reason it is -- do we also notice lots more midgets running around these days, in addition to all those dunkers on the basketball court? If so, then people are probably the same height on average as before, but for whatever reason the distribution is spreading out and there are more people at both extremes. If not, then that means people are getting taller on average (as in the Dutch case).
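This tail-reading logic is easy to verify with a quick simulation. Everything below is made up for illustration -- the mean heights, the standard deviations, and the dunker/midget cutoffs are arbitrary -- but the qualitative pattern is what the argument predicts: a mean shift inflates only one tail, while a variance increase inflates both.

```python
import random

random.seed(1)

def tail_shares(heights, dunk=78.0, midget=58.0):
    """Return the fraction above the 'dunker' cutoff and the fraction
    below the 'midget' cutoff (both cutoffs are illustration values)."""
    n = len(heights)
    return (sum(h > dunk for h in heights) / n,
            sum(h < midget for h in heights) / n)

baseline = [random.gauss(69, 3) for _ in range(100_000)]
shifted  = [random.gauss(72, 3) for _ in range(100_000)]  # mean up, spread same
spread   = [random.gauss(69, 6) for _ in range(100_000)]  # mean same, spread up

# The mean shift swells the dunker tail but not the midget tail;
# the variance increase swells both tails at once.
print(tail_shares(baseline))
print(tail_shares(shifted))
print(tail_shares(spread))
```

So a glance at the opposite extreme really is enough to tell the two stories apart, which is all the casual observer needs.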

Returning to the distribution of violence, we began hearing lots more reports of strangers doing sick things to kids in the later '70s and '80s. Did we also hear reports of a growing trend of strangers passing out scholarship money on the playground, or other Mother Teresa behavior? Maybe a bit more than earlier, but not really -- and certainly not to the extent that there was a trend toward greater harm of children. Therefore, using the threshold of "kids are getting abducted (or raped or killed)," we infer that the entire distribution of violence was shifting in a more violent direction.

And that is what people panicked about -- not that the probability of some particular crime went from X to Y, but that the whole distribution of events was moving in a more violent direction. Sure, murder of children may have still been "rare" in some undefined and ultimately baseless sense, but it was more common -- and that was just the tip of the iceberg, so to speak. Every type of harm to kids had become more prevalent. But since the less extreme forms are harder for us to see without doing extensive investigation, it was the increase in more sensational forms that tipped us off to the overall rise in violence against children.

Without measuring it directly, we found out that the average encounter with a stranger had shifted in a more violent direction, just as ordinary observers lacking any statistical training or instruments outside their own mind deduced that Dutch people were getting taller on average, from paying attention to the sensational cases of beds, doorways, and seats that needed to be overhauled because a growing number complained that these things were too short.

So we see that our "obsession with the sensational" is highly efficient -- the information you get is very cheaply obtained, and it buys you a lot of additional highly reliable information about the rest of the distribution that you did not measure, which is what you really care about.

And because it can't be repeated enough times, the elite's whole notion of "over-reacting" to rare yet sensational events is bogus. The panickers are not worried over just that tiny slice of the spectrum of harm, but rather their correct appraisal that the entire distribution of people's behavior has shifted in a more dangerous direction. So even the less sensational, quotidian harms have become more common. Perhaps this shift will stop, maybe even reverse, but then again it may get worse. No one knows, so it's better to prepare for the worst: complacency here can kill (asymmetric costs of type I vs. type II error). Hence the atmosphere of panic.

Ultimately the charge of "over"-reaction can only be granted if the elite give us a solidly reasoned list of what the optimal range should be for various anxieties, like the ranges you see on your blood test. In this case, given the true prevalence of abduction, rape, murder, etc., of children, how many column inches should have been devoted to the topic in the local newspaper, how many minutes of local news coverage, how many minutes of word-of-mouth discussion, and so on?

Certainly a small, unrepresentative group that lasts for a fleeting moment could get the impression of reality wrong. But when a good chunk from a broad cross-section of the community has a persistent picture of things getting worse, we should trust their "over"-reaction more than the tongue-clucking of distant academics who pretend to know what the optimal levels of all sorts of behavior are.

May 10, 2011

During falling-crime times, people are more dismissive of the pleas of others

The fable about the boy who cried wolf has come a long way from its origins as a story about the just comeuppance of some rascally nomadic herder who had been tormenting the sedentary farmers with his lies. The original thus belongs to the long history of farmer vs. herder conflict (the Mongols vs. the Chinese, Cain vs. Abel, etc.).

Now it is usually told about a boy who loses the trust of someone who actually cares about him in the beginning (unlike the ambivalence or hatred that farmers have for herders) -- a blood parent, a kindhearted teacher, or someone like that. This changes the story from one that issues a warning only to the would-be liar to quite a different one that also warns the grown-up guardian that sometimes the little liars get into some serious shit and truly need your help, or else they're goners, so don't be so cold-hearted -- even granting that they do tend to exaggerate or cry wolf a lot or most of the time.

I gave one reason here why people in rising-crime times bear less of a grudge, which I'll copy at the end of this post. * But another one just struck me, and can be cast in terms of believing or ignoring a cry of "wolf!"

All else equal, when some behavior gives a person less benefits, they tend to do less of it. So let's look at the net benefits you get from making the leap of faith and coming to the aid of someone who cries wolf. It's an act of trust or faith because you have no way to tell whether they really mean it. In words:

Net benefit = saving them from danger (weighted by how likely this is), minus wasted effort (weighted by how likely this is)

Or in symbols:

Net = S*p - W*(1-p)

Net is again the net benefit, S is how large the effect of saving them is, p is the probability that the person is telling the truth (that is, how likely it is that you're saving them, rather than wasting your effort), W is how large your wasted effort is if they're lying, which happens with probability (1-p) since probabilities add up to 1.

We're more likely to go along with their story when Net is larger -- when on average it will accomplish more. So whatever makes the first chunk, S*p, bigger, and whatever makes the second chunk, W*(1-p), smaller, will make the overall Net larger, and thus make us more likely to indulge the person's plea.
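The trade-off can be written as a tiny function. The magnitudes plugged in below are hypothetical, chosen only to show the direction of the tilt: rising-crime times raise both the stakes S and the odds p that the plea is genuine, while W stays fixed.

```python
def net_benefit(S, W, p):
    """Expected payoff of believing the cry of 'wolf!': a save worth S
    with probability p, minus wasted effort W with probability 1 - p."""
    return S * p - W * (1 - p)

# Hypothetical magnitudes (illustration only):
falling_times = net_benefit(S=1, W=1, p=0.10)   # low stakes, pleas rarely genuine
rising_times  = net_benefit(S=10, W=1, p=0.20)  # high stakes, pleas genuine more often
print(falling_times, rising_times)
```

With these toy numbers the payoff flips from negative to positive, so indulging the plea goes from a losing bet to a winning one -- which is the whole argument in one sign change.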

Look at what happens when the crime rate starts soaring, after a period when it had been falling or bottoming out. There are now a lot more bad guys out there, so it's more likely that this person has truly run into a big problem. So p gets larger, while (1-p) therefore gets smaller. Because small probabilities are impossible for a person to measure, I doubt that people have a keen sense of just how much more likely it is that the person is telling the truth when there are more bad guys out there. This change will tilt them in the more credulous direction, but I think it's of secondary importance.

I'm assuming that W, the amount of effort you waste in listening to and believing their story, stays the same whether times are getting more or less dangerous. It's like running into the room when the kid shouts "fire!" and it turns out they're just joking -- the wasted effort is your getting anxious, moving your muscles, and feeling embarrassed when they point and laugh at the sucker. So this has no effect on tilting you toward or away from helping out in falling vs. rising-crime times.

The main factor is what happens to S, the effect of you believing their story and saving them. In these crying wolf situations, saving them amounts to avoiding some danger, so S measures how big that danger is that you're saving them from. In much safer times, the typical bad problem, or even variety of bad problems, that the person could have gotten themselves into is not so terrifying. For example, since the crime rate peaked in 1992, serial killers have been virtually unknown, aside from the single exception of the Beltway Sniper. The last guys like that were Jeffrey Dahmer and Buffalo Bill from Silence of the Lambs -- and before them the Hillside Stranglers, Leatherface, the Manson cult, Son of Sam, Freddy Krueger, the men behind the Atlanta child murders, and the rest of them.

I don't know exactly how much greater S becomes in rising-crime times, but it's the difference between the child's waking nightmare being a bully punching him at school or some creepy fag following him around the supermarket, as opposed to a serial killer who abducts, kills, and maybe even dismembers the body, or a child molester who doesn't just follow but actually violates him, or a drug dealer who gives him his first free hit off a crack pipe to get him hooked. Go back and watch Taxi Driver, Dirty Harry, The Terminator, or whatever movie will remind you of how sick and twisted the monsters among us used to be. Then compare that to school bullies, hackers stealing his identity, and the other less vile stuff that the bad guys have been up to these days. It seems like it's at least a 10-fold difference when you save them from the boogeyman in the former than in the latter times.

Like I said, small probabilities are impossible for a person to guess on the spot, but looking back, how much more likely was a kid to be abducted, raped, beaten, killed, etc., before the violence level came crashing down? In the Finkelhor article linked to in the post below on kidnappers, the decline in various forms of child abuse ranges from about 50-80%, so let's just be conservative and say 50%. That means that finding themselves in deep shit was twice as likely for children during the '80s and early '90s, when the violence level was plateauing at a high level, as during most of the '90s and the 2000s.

So by my completely ballpark estimate, you're getting 20 times the benefit by going along with their plea when you're in rising-crime times. But there's so much guesswork that a person does on-the-spot that they might easily perceive it to be 100 or 1000 times more effective to help out in rising-crime times. Who really knows?

This resolves a paradox that I don't think has gotten much attention yet: while we always hear about how bad parents were in rising-crime times -- divorce rates were higher, whippings more common, and daycare / television / latchkey children more prevalent -- they trusted us a hell of a lot more than the helicopter parents of the past 15 to 20 years do. They had more faith in us. And they really treasured us, rather than treating us like pets who they get to dress up to impress their limpdick and saggy-titted peers.

It may not sound pleasant, but the truth is that dangerous times cause parents and children to bond more closely in the face of a common wave of violence. There's no bonding without trust or faith between the two parties.

And of course this generalizes to all other cases where someone makes an earnest plea to someone in a higher social position who can help them (or possibly get suckered) -- blacks reaching out to whites, women to men, younger to older, third-worlders to first-worlders, minority to majority political parties, one sub-culture to another. Although the spirit lingered for several years afterward, the most visible peak of this, partly a spreading of goodwill and charity and partly a carnivalesque uniting against a common threat, was Live Aid in 1985.

During the past 20 years of falling violence levels, there has been an attendant plummet in the probability that someone stretching out their hand has gotten into some really heavy shit, as well as in the size of the danger that we would help them avoid by believing their story. As a result, trust and faith in the pleas of others have mostly evaporated. By losing the ties that would otherwise have bound us to those who we went along with and helped out, our social worlds have collapsed into tiny bubbles that include only those who we're absolutely, positively sure would not try to sucker us -- ourselves, some close blood relatives, and a friend only if we've sworn a blood oath, most friendships today being rather superficial, at-arm's-length, and easily dissolvable.

* People measure grudges as a fraction of their lifetimes, not on an absolute scale, so when life appears shorter, you hold a grudge for a shorter length of time. Plus there's the looming threat that you have to attend to that gives you an extra incentive to reconcile your differences and get back to protecting yourselves and kicking the bad guys' asses.

May 9, 2011

Stranger danger -- real or fake?

One day after school in I can't remember what grade (it seems like 2nd), I was standing on the sidewalk out front waiting to be picked up, when a dark sedan crept slowly toward me and stopped. The passenger-side window rolled down, and some strange middle-aged woman, whose eyes I could not even see behind her sunglasses, told me that my mom would be staying late at work, and that she was there to take me home -- so c'mon, get in the car.

If I had had a weapon, I would have been more brave, but she was in a car and I was not, so I bolted like a bat out of hell, screaming "Heeeeelp! Heeeelp! Kidnapper! It's a kidnapperrrr!!!" I doubt I've ever run so fast in all my life before or since, not even to impress a girl. This twisted bitch had picked the wrong target -- I was going to find the nearest grown-up and have her ass busted. She'd have no time to race back to her house and hide the bodies of all the other kids she'd abducted, and then the cops would have all the evidence they needed to slam her in jail for life, hopefully send her to the chair.

I couldn't believe that she wouldn't give up and try to get lost in traffic before I told someone. She just kept cruising along at my running speed, repeating herself over and over. She'd have to be awfully stupid to risk getting caught after I'd already broken off in flight. Then she tried to convince me that she knew my mother by telling me her name, where she went to college, and other personal information like that. I think she also told me things about my dad, my brothers, where I lived, and so on. Well, either she had really done her homework -- perhaps obtaining this information in the course of killing off the rest of my family earlier in the day -- or maybe she did know my mom after all.

I stopped and let her repeat herself all over again, checking her face and tone-of-voice for signs of sketchiness, and she passed. She seemed flustered and a little embarrassed rather than angry and demanding that I do as told. So I walked over, got in the car, and the nice lady drove me home without binding my hands or duct-taping my mouth shut. When my mother got home that night, she and I had a good long laugh about the whole thing.

Why would a second-grader have been so quick to assume that such a person was a kidnapper? Because I learned what to do from numerous safety lessons in school, and every day during the public service announcements that interrupted my stream of afternoon cartoons. Here are some typical ones from McGruff the Crime Dog, G.I. Joe, and Jem. Young people, and even some old people, today look back on those PSAs and snicker at how fear-mongering and propagandistic the campaign was. It's not hard to find condescending and sarcastic videos like this one on YouTube about how silly the idea of "stranger danger" is. Yeah right, like you can really get kidnapped.

And yet, back before the decline in child abuse that's been going on for 15 to 20 years, it was real. Sure, my case turned out to be nothing serious, and so it looks like the preparation and the act itself of sprinting away from a nice lady was just a waste. Still, add up the costs and benefits from all instances where a kid burst off in that situation -- there were many false positives, where the kid decided that the person was a kidnapper when they really weren't, a cost to themselves and to their ride. But this minimizes false negatives, where the kid decides the person is OK despite their truly being a kidnapper. The avoided costs of false negatives are immensely greater than the costs incurred from false positives: while a large number of kids blow a harmless event out of proportion, a small number of kids evade a rapist and live to see another day.
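That asymmetry can be put in expected-cost terms. Here's a rough sketch; every number is hypothetical, chosen only to show why "always bolt" beats "never bolt" when one kind of mistake is vastly costlier than the other:

```python
# Two policies for a kid approached by a stranger, compared by expected cost.
# All values are made up for illustration -- the point is the asymmetry.
P_THREAT = 0.01               # assumed chance the stranger is a genuine threat
COST_FALSE_POSITIVE = 1       # bolting from a harmless adult: embarrassment, a late ride
COST_FALSE_NEGATIVE = 10_000  # trusting a real kidnapper: catastrophe

always_bolt = (1 - P_THREAT) * COST_FALSE_POSITIVE  # pay the small cost almost every time
never_bolt = P_THREAT * COST_FALSE_NEGATIVE         # rarely pay, but pay enormously

print(always_bolt)  # 0.99
print(never_bolt)   # 100.0 -- bolting is ~100x cheaper in expectation
```

Even with a threat probability as low as 1%, the catastrophic cost of the false negative dominates, which is exactly the logic behind those safety lessons.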

The smug attitude of those YouTube teenagers making fun of "stranger danger" is what will eventually drive up the crime rate. As the violence level begins to fall, it becomes easier to believe that kidnapping is not much of a problem, and so nothing to worry much about. In fact, anyone who tells you otherwise is just some scare-monger trying to restrict your freedom, so you might as well openly flout the so-called rules about being suspicious of creepy strangers. Once a critical mass of a local group has this unguarded attitude, the criminals will detect the shift and come out of hibernation, now that their prospects for exploiting others have suddenly shot up. And so the cycle of violence will go into its upward phase all over again.

Worse is the complacency that "it can't happen here," a belief that can only flourish in falling-crime times. I read this occasionally from people who like to emphasize that black and Hispanic neighborhoods have higher crime rates than white ones. True enough. But then their simple minds go further by believing that white areas are safe in some absolute sense (rather than in a relative one), while black and Hispanic ones are dangerous. Dismissing the threat of violence in their own neck of the woods -- since it only happens over there in the ghetto -- they have now set themselves up to be taken over by criminals from within their own boundaries. Their self-assured attitude goes so far that they won't even bother looking up crime statistics, which show that the last crime wave struck the country broadly -- including non-urbanized and uber-white states like Alaska, Utah, Vermont, and West Virginia, not just the Bronx and Compton. Or that it was indeed an international wave of violence, striking even the sparsely populated and lily-white country of Sweden.

Could it have happened in my mostly-white, middle-class suburb of Columbus, Ohio, where I fled (as we can only see in hindsight) for no good reason? Yes -- and it did. About five years before we moved there, a girl at a nearby elementary school had been taken out of plain sight, choked so hard that she lost consciousness and burst a blood vessel in her eye, and had her shirt taken off. Luckily something scared off her attacker, and she was not harmed further. However, a few weeks later a similar attacker (possibly the same one) followed an 8 year-old girl, Asenath Dukat, from another nearby elementary school on her way home, then abducted her, raped her, and killed her. The two main suspects were not drug-addled dark-skins invading from the ghetto, but young white loser kids from the very neighborhood where the crimes occurred.

Here is a brief local news report following up on the rape and murder 30 years afterward. And here is an online discussion forum for alumni of the nearby high school, where many of them independently began searching for information on the case more than 25 years later. It must have burned its shape forever onto the minds of every young person in the area. Some of them are even still trying to do research and detective work on this cold case, that's how real it was.

That thread is also worth reading for details of other aspects of life that haven't made it into the history books, and so ones that kids today don't appreciate -- a shady guy trailing elementary school kids home in a dirty car, a separate creepo handing out illegal porn to small children at the public pool (right next to an elementary school), and yet a third weirdo disguising himself as an Ohio State University scientist who waltzes into the school restroom and gets the little boys there to urinate into plastic bags to provide him with research samples. Then again, maybe these were the same deranged person.

Whatever the case may be, against that background, you bet your ass there's such a thing as stranger danger, and thank god the schools and our cartoons helped us prepare for the inevitable time when we'd find ourselves right in the middle of it and have to get out. They never told us to stay locked inside, though, or to trust no one -- that only came to pass during the '90s heyday of paranoia and the current reign of helicopter parents. The message was instead that we should have a life and be out and about, but to recognize a threat and deal with it when we saw one.

May 7, 2011

A new low: Smithsonian to exhibit "The Art of Video Games"

Here is the website. So now even video games count as "an artistic medium," eh? It really makes you long for the good old days when we were debating whether or not "art" included a picture of some fag with a bullwhip stuck up his butt.

What makes art art? Well that's an empirical question, not an a priori one. We just look at everything that people treat as art -- all over the world, and back to the beginning of our species' history -- and try to see what traits they all have in common. Otherwise it's a pointless exercise in arguing over what words mean.

There is great disagreement over this empirical question, mostly stemming from how common "common" must be across cultures, and how "timeless" timeless must be. Still, here are two traits that never make the list of "what makes art art" -- traits that, if you were to include them, would make everyone at every time say, "God no, that would make it not art!" Both relate to how much control the audience has over the experience.

1) The audience chooses the outcomes for at least one, maybe all, of the major branching points in the experience. E.g., what should happen to a character in a narrative, whether the next movement in a piece of music should be frenetic or calm, etc.

2) The audience is required to participate physically to move the experience along. E.g., having to jog in place to move the first act of a play along, then switching to squat-thrusts in place to move the play into the second act, or having to trace certain patterns with your finger over a page in order to move a print narrative along. Turning a page is not participation, and neither is keeping your eyeballs open, getting a source of light to see the page, etc.

Of course, both of these are essential to video games, so video games are not art -- no more than a crossword puzzle, a Mad Libs story, or a jigsaw puzzle (the puzzle and the final picture formed are not the same thing).

Academic morons aside, everyone everywhere believes that the artist and audience are distinct, however much they may or may not interact when they're face-to-face. So a work of art must be created by the creators (sounds obvious, but again those are the times we're living in). If a so-called artist kept asking us to make this and that decision about where the experience should go, we would conclude that he couldn't make up his mind and had given us a very incomplete work of art. Make every decision for us, and then it might be art.

As for physical participation that must be of a certain form at certain times along the experience, that puts the audience in control rather than the artist. We in the audience want to be taken on a journey, led by the expert storyteller, singer, movie-maker, or whoever they are. We might have a physical reaction to this guided tour through their imagination -- dancing along to music, stabbing the air with our finger when some character gets what's coming to them, and so on. But our actions are never required to move it along.

What popular, mass-market things would qualify as art, at least on these two requirements? Just to name some examples that are close to video games, and leaving aside obvious things like movies and TV shows, there are comic books, pro wrestling matches, and the dark rides at amusement parks. They offer narrative content and visual spectacle, like video games, but the distinction between creators and spectators is maintained. Based on the relative lack of interest in these art forms compared to movies, TV, books, etc., it doesn't look like comics, wrestling, and ghost train rides are the best vehicles for artistic creation. But they at least make the cut, while video games do not.

May 4, 2011

Romans during the Empire lost 1 inch in height; Celts and Germanics were taller

Here is a revealing article on well-being in pre-historic and historic Europe. It uses estimates of human height as a proxy for how well people were doing. The main reason for taller average height across these time periods is assumed to be better nutrition, lower infectious disease load, etc. For anyone who's ever had experience eating few animal products vs. a lot of them, it's clear that better nutrition isn't good only for the body but for the mind too. Same goes for being more free of pathogens.

It's an interesting read, but you can skip to page 42 of the PDF to see the average heights plotted over time, from the 8th century B.C. through the 18th century A.D. (i.e., before the industrial revolution, when we were still in a Malthusian world), and for three major regions of Europe -- Mediterranean, Central-Western, and North-Eastern.

During the Roman Kingdom and Republic, their people averaged about 5'6.3", and once the Empire's days were numbered, they were also about 5'6.5". However, at best during the Empire they were one inch shorter, about 5'5.6", and during the 1st century B.C. -- the time of Julius Caesar -- they were just 5'4.6". A good deal of this Imperial shrinkage appears to be caused by their curtailing of herding cattle, and thus losing meat and milk, in favor of cultivating wheat (see PDF page 40). This is just another example, albeit from historical times, of the devastation to our health when we give up animal products and make it fashionable to chew on grass instead.

The Celtic, Germanic, and Slavic tribes and their descendants weren't so decadent, so over the whole time period their average height is somewhere between 5'6.5" and 5'7" -- not as tall as today, when disease is less rampant, but still pretty good for pre-antibiotics people. How muscular they were does not show up in the height estimates, but we can infer from how much meat and milk they ate, as well as from first-hand descriptions, that the average young adult male would have looked like The Dying Gaul (complete with rich rock singer hair, in contrast to the bald Roman grain-munchers).

To get a better feel for just how terrified the invading Romans must have felt before the taller Barbarians, let's run through some numbers. In the 1st century B.C., the average Roman was 5'4.6", and the Barbarians about 5'6.3". What percent of each population would stand at least 6' tall? (I'm assuming a standard deviation of 3 inches that we see today.) Among the civilized Romans, only 0.68% -- not even 1% -- whereas among the Barbarians, 2.87% would have, or over 4 times as many per capita.

Now, 3% may not sound like much, but that's only assuming you picked Barbarians at random. Those who made it into the ranks of warriors were more elite. Let's say they were the top 10% of their group by height -- then their minimum height would be 5'10.1". Since the Roman invaders relied more on strength in numbers than on imposing individual stature, they probably weren't chosen to be so much taller than the average citizen -- maybe 5'7" on average? Then these Roman grunts of, let's say, 5'7" confront a warrior class who are all taller than 5'10" -- they must have needed nerves of steel to get through the Gallic Wars.
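If you want to check these percentages yourself, Python's standard library makes it a one-liner per figure. The means come from the article (converted to inches) and the 3-inch standard deviation is the assumption stated above:

```python
from statistics import NormalDist

# Means from the article, in inches; SD of 3" assumed throughout.
roman = NormalDist(mu=64.6, sigma=3)      # 5'4.6" average, 1st century B.C.
barbarian = NormalDist(mu=66.3, sigma=3)  # 5'6.3" average

# Fraction of each population standing at least 6' (72 inches) tall.
p_roman = 1 - roman.cdf(72)
p_barbarian = 1 - barbarian.cdf(72)
print(f"{p_roman:.2%}")                # ~0.68%
print(f"{p_barbarian:.2%}")            # ~2.87%
print(f"{p_barbarian / p_roman:.1f}x") # ~4.2x as many per capita

# Minimum height of the top 10% of Barbarians by stature.
print(f"{barbarian.inv_cdf(0.90):.1f} in")  # ~70.1 in, i.e. about 5'10.1"
```

The numbers reproduce the ones in the text: under 1% of Romans but nearly 3% of Barbarians clear six feet, and the top decile of Barbarians starts at about 5'10".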

So here's another example of how much better Barbarian life was compared to Imperial Roman life. They were taller, probably fitter, and certainly less neurotic since they weren't so stung by the pangs of hunger that a grain-heavy diet causes, not to mention breathing more freely (quite literally) on account of lower urbanization and thus lower rates of crowd diseases.