On Norah Jones as an example of the rise in fearful-avoidant women, Ziel left a comment about how painfully uncomfortable she is in this duet of "Love Hurts" with the admittedly oddball Keith Richards. She should've just gotten over it for a few minutes to give the crowd a good performance, but she totally freezes.
Compare that to the 1983 duet, "We've Got Tonight", by Kenny Rogers and Sheena Easton. Rogers was 44 and gray-haired, while Easton was 23 and a vulnerable little thing. She doesn't even look like she's just putting up with him for the sake of the crowd -- she's getting really into the song, letting him hold her hand and hug her. Today her counterpart's reaction would be omg creepy guy!!! where are you when i need you harry potter?!?!?! But back then people didn't find normal affection weird.
They can sound cheesy, but duets are not made with pop music perfection in mind. They're more of a demonstration to the audience that normal barriers can be let down -- for a little while, anyway -- so that two growing-distant groups can overcome their differences and be a little more team-minded in the future. If the duet partners can do it, why can't you?
It's rarely two groups who are bitter enemies for good reason, but more like those who are drifting out of touch and need to count on each other more. Men and women, younger and older, one sub-culture and another, or blacks and whites (NB: never Mexicans or East Asians).
Banding together, putting aside petty differences, the carnivalesque temporary weakening of barriers -- all themes that keep showing up in relation to the trend in the violence rate, so let's see if it's here too.
I looked at the songs of the Billboard Year-End Hot 100 (1959 was the earliest year with all songs), and found which ones had more than one artist listed -- this excludes songs by acts that normally record together, like Simon & Garfunkel. I'm looking just at those that represent two normally separate musicians coming together. (I also left out two songs from Disney movies, sticking just to pop music for teenagers and older.)
Then I checked if the Wikipedia page described the song as a duet (or if unsure, the Pandora page). This excludes songs where two stars come together just for hype value, but where there is no sense of the two interweaving their voices, interacting with each other, and if anything playing down their own egos. We all know what "duet" means.
I've weighted each song based on how high in the rankings it was [1], and added up these weighted values for each year. Here is how this index of popularity has changed over time:
It starts low but shows a jump in 1967, and continues to rise through the '70s, plateauing at a high throughout the '80s. Even 1990 is still pretty high. But 1991 brings zero duets for the first time in over two decades, and the downward trend after that is clear. There are still fluctuations up and down, but both the peaks and troughs get steadily lower in the post-'91 period.
So the link to the crime rate checks out. Still, it looks like there are only fits and starts during the '60s and early '70s; most of the action is in the mid-'70s through 1990. It seems like people get more into duet mode during the apocalyptic second half of a rising-crime period, while in the first half they're still hoping that the experts can work things out, so that banding together at the grassroots level won't be necessary.
Aside from serving as a model to the audience that they shouldn't keep their guard up and be so isolated all the time, some of these duets sound great on their own:
"Easy Lover" by Philip Bailey (of Earth, Wind & Fire) and Phil Collins (of Genesis). Collins has a few moments in the video where he overdoes the "goofy white guy" act, but overall there's no self-consciousness about a black guy and a white guy making a kickass rock song about heartbreaking women.
"What Have I Done to Deserve This?" by Pet Shop Boys and Dusty Springfield. In this different-age duet, it's the woman who's older, by 15 years. But this was back in the pre-cougar days when women still aged gracefully. Both made their careers during rising-crime times, so it was easy to fit together.
[1] For 100 songs, a good formula seems to be exp(-0.02*R), where R is the rank number.
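For anyone who wants to rebuild the index, here's a minimal sketch in Python of how the weighting and yearly summing work. The songs and ranks below are placeholders, not my actual chart data -- each entry would have to be hand-coded from the Year-End Hot 100 and checked against Wikipedia or Pandora as described above.

import math
from collections import defaultdict

# Placeholder entries: (year, year-end rank) for songs judged to be true duets.
duets = [
    (1983, 15),   # hypothetical rank for a duet like "We've Got Tonight"
    (1985, 20),   # hypothetical rank for a duet like "Easy Lover"
]

def weight(rank):
    # Rank 1 counts the most, rank 100 the least.
    return math.exp(-0.02 * rank)

index = defaultdict(float)
for year, rank in duets:
    index[year] += weight(rank)

for year in sorted(index):
    print(year, round(index[year], 3))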
January 31, 2012
January 30, 2012
What ever happened to making home movies?
Over Christmas vacation my family watched a bunch of old home movies from when my brothers and I were little, around 1982 to '85. We made movies after that as well, but they must be under a pile somewhere. The last I remember was probably Christmas of 1990 or '91 being taped, and anything after that would have been very sporadic.
My mother has a DVD compilation of all of her old 8mm home movies from the '60s and early '70s. The same kind of events are shown there as on ours -- every birthday, every Christmas, and random special occasions. (My mother even taped my brothers "performing" in the daycare center's 2 year-old Olympics during the summer of '84. In hindsight, it's something we all felt like fast-forwarding through after a few minutes, but moms were more motherly back then and recorded more rather than less.)
Again that seemed to fade out during the '90s, and not just because we were too old to be worth taping. My mother's home movies show her in high school, when her older brother had already become a father, etc.
By now it seems just about dead. My nephew is nearly 4 years old, and there isn't a single home movie of him, whether for a birthday, Christmas, or anything else. Sure, there are a handful of clips, but none is longer than about a minute, they are usually divorced from any context, and so there's no sense of recreating an experience. Also, we were with my cousin and his four kids (ages 6 to 16, I think) for Christmas, and there were no home movies being made. People still take pictures, but no movies.
The movies of me and my brothers go for about 10 minutes or more at a stretch, adding up to several hours per tape, and you get a good feel for what was going on, and how everyone was interacting with one another.
I tried to google around to see if there are any data to pin this feeling down, but I didn't find anything. Still, it was just so common to make home movies, and I haven't seen anything at all of my nephew. Some of my brother's friends have little kids too, and I think he would've mentioned it if they were very different in that respect. Like, "Yeah, they make lots of home movies of their kids, but as for us..."
Obviously it has nothing to do with economics, since camcorders only get cheaper over time, and we either stay the same or get richer. Not to mention easier to operate -- my dad had to hold that Betamovie camcorder with two hands and rest it on his shoulder, while wearing the giganto VCR itself (which held the blank tape) strapped over his shoulders, like a proton pack or something.
Technological change isn't it either, since people choose what to adopt, how often to use it, and in what ways. It's true that a smartphone makes a dumb camcorder, but nothing keeps people from still buying camcorders and using them the way they used to. "My iPhone wasn't built for that" is just a lame excuse.
Once again the changes seem to reflect the trend in the violence rate. Looking through YouTube for home movies from the '50s (i.e. before '58 or '59), I find they aren't as common as the ones from the '60s through the '80s. It's not just that the cameras were less common in the '50s than in the '60s. They also don't show very private or intimate events, whereas in the ones from the '60s the people are more open and not so concerned about being caught on film. Here's an example of Christmas from the mid-'60s. And then people in home movies only became less self-conscious through the '70s and '80s.
Nobody else was ever going to watch your home movies but you, so this isn't a difference between one time period being more private and another more exhibitionistic. It's more about how close you wanted to be to your family members. If you compare the '50s family sit-coms to the ones from the '80s, the family members are much more distant and neutral toward each other in the former, and more affectionate and looking out for one another in the latter. During the '90s and 2000s, they've returned to "it's nice that you're here, but I don't really need you."
Making home movies, then, was a way to preserve something that looked like it was being increasingly threatened. Not consciously, of course. You just care more about preserving something that feels transient, and take it for granted when it feels permanent.
That has lasting consequences, too. You can gather the family around and watch those old home movies as a rite of renewal for your togetherness. The Silent Generation never seemed to feel that way about their childhood as they aged, probably because they never got very close to each other in the first place. "Too mushy." I don't think most Millennials will either. Just as the Silent Generation mostly gets nostalgic about radio programs, the Millennials will get nostalgic mostly for TV and video games. Not so much the relationships they had.
This is further evidence not to believe the hype about the "family values revolution" since the mid-'90s. Parents are definitely hovering over their children all the time, but if they value their family so strongly, then why aren't they making home movies like they used to?
Is file-sharing the sole cause of the music industry's decline?
Here's the abstract from a newer paper by the always enlightening Stan Liebowitz (free PDF):
The file-sharing literature has focused mainly on whether file-sharing has decreased record sales, with less attention paid to the size of any decline. Although there is still some contention, most studies have concluded that file-sharing has decreased record sales. What has not been noted is that most estimates indicate that the file-sharing has caused the entire enormous decline in record sales that has occurred over the last decade. This heretofore hidden result is due to the lack of a consistent metric that would allow easy comparability across studies. The task of this paper is to provide such a metric, translate the results reported in the literature into that metric, and then summarizes the results from this exercise.
The studies that suggest that the whole decline in sales is due to file-sharing are earlier, back in the wild west days of the early 2000s. The more recent studies from 2008 and 2009 still put file-sharing's role as accounting for 65-75% of the declining sales.
Most of what the music industry puts out is junk, but sooner or later the zeitgeist will change and we'll get the next big thing. First it was jazz, then rock, and next who knows. But for that to happen, all of the infrastructure has to still be around. Good music, when people can create it again, won't record and distribute itself.
That's the worst part of this whole mess -- having to stick up for bloodsucking record companies. But what choice is there when the other side is file-sharing dorks who don't care if the basic infrastructure melts away, all so they can save a few bucks on their faggot album by Bruno Mars or Avenged Sevenfold?
January 29, 2012
Percent of teenagers with driver's license still plummeting
I think I've already got two other graphs like this on the blog somewhere, but the 2010 data are in.
The first drop-off takes place in 1990, actually a bit before the peak in the crime rate. This is another example of cocooning behavior slightly preceding a drop in violence rates. During the mid-2000s, it looked like the decline might have been slowing down, but it has only fallen faster within the past several years. The overall decline gets sharper the younger the age group you look at -- least steep for 19 year-olds, up to the most steep for 16 year-olds.
A driver's license used to be sought after as a rite of passage and a ticket to freedom, especially in high school when you don't have a college campus that's at least somewhat accessible by foot or school shuttles. By now, among high schoolers old enough to get one (16-17 year-olds), only 38% have followed through. Talk about delaying growing up.
Don't teenagers find it embarrassing anymore to have to get driven around by their parents, or burden their one friend who drives with ride requests? Truly a lazy and shameless generation.
It would be neat to find data on rates of attempting the test for the first time, passing rates, and re-try rates for those who failed. Like, are fewer of them trying to get a license at all? Even among those who try, if they fail, are they more likely to put off the second attempt until much later?
Due to a careless mistake, I flunked my first test -- and right on my 16th birthday! -- but I showed up the next weekend, or maybe the one after that, and passed it with no problems. Today I don't think a 16 year-old would have that minimal amount of perseverance. They'd need a year of self-esteem therapy before they'd feel comfortable giving it another go.
January 28, 2012
Similar music from similar environments
Stumbled upon this hit from the Jazz Age, where a clingy girl wonders why her crush doesn't make a move on her, even after sending him such forward signals:
"He's So Unusual" by Helen Kane, 1929
In tone it's not so different from this hit from the New Wave Age:
"Johnny Are You Queer" by Josie Cotton, 1982
One of the biggest changes that people undergo when violence rates start rising is to desire more social interaction and attachment. So even people with low self-esteem will still want to reach out and touch someone; they'll just come off as needy.
But I miss clingy-needy girls, now that I've seen the alternatives. When people no longer desire social contact, the high self-esteem ones become dismissive and avoidant, like Fergie and other attention whores, while the low self-esteem ones become fearful and avoidant, or mousy, kind of like Norah Jones. Neither type trusts others, and neither would feel comfortable letting their guard down to get close to someone. That really comes across in their singing, which has a very limited range of pitch and never gets very high.
That songbird type of inflection that says "hey, notice my voice and come over to talk to me" only comes from those who desire contact, whether they have lower self-esteem, like the needy ones, or higher self-esteem with a "secure" attachment style, like how Belinda Carlisle sounded.
"He's So Unusual" by Helen Kane, 1929
In tone it's not so different from this hit from the New Wave Age:
"Johnny Are You Queer" by Josie Cotton, 1982
One of the biggest changes that people undergo when violence rates start rising is to desire more social interaction and attachment. So even people with low self-esteem will still want to reach out and touch someone; they'll just come off as needy.
But I miss clingy-needy girls, now that I've seen the alternatives. When people no longer desire social contact, the high self-esteem ones become dismissive and avoidant, like Fergie and other attention whores, while the low self-esteem ones become fearful and avoidant, or mousy, kind of like Norah Jones. They both don't trust others, and wouldn't feel comfortable letting their guard down to get close to someone. That really comes across in their singing, which has a very limited range of pitch and never gets very high.
That songbird type of inflection that says "hey, notice my voice and come over to talk to me" only comes from those who desire contact, whether they have lower self-esteem like the needy ones or higher self-esteem, who show a "secure" attachment style, like how Belinda Carlisle sounded.
January 25, 2012
Did cigarette warning labels curb smoking rates?
That wasn't the question I had in mind when looking at these data, but one I realized I could take a crack at after picking apart something that I had felt like looking at for no reason at all. *
The General Social Survey asked two questions about regular smoking -- do you regularly smoke now (SMOKE), or if you don't now, did you ever regularly smoke in the past (EVSMOKE). Adding the "Yes" answers for both together, and taking them as a percent of everyone asked about smoking, we can find out who has ever been a regular smoker in their life -- past or present.
Here is the percent of ever-regular-smokers within 5-year birth cohorts, shown by the first year of their group. So "90" means 1890-94, "15" means 1915-1919, etc. I've split them into males and females because their history of adoption and abandoning of smoking is different (something I hadn't thought of at first).
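If you want to redo this with the GSS yourself, here's a rough sketch of the calculation, assuming an extract loaded into pandas with columns for birth year (COHORT), SEX, SMOKE, and EVSMOKE. The file name and the 1 = yes coding are assumptions, so check them against your own extract and the codebook.

import pandas as pd

gss = pd.read_csv("gss_extract.csv")        # hypothetical extract file
gss = gss.dropna(subset=["SMOKE"])          # keep only respondents asked about smoking

# Ever a regular smoker: smokes regularly now, or did at some point in the past.
gss["ever_smoker"] = (gss["SMOKE"] == 1) | (gss["EVSMOKE"] == 1)

# 5-year birth cohorts labeled by their first year (1890, 1895, 1900, ...).
gss["cohort5"] = (gss["COHORT"] // 5) * 5

pct = (gss.groupby(["SEX", "cohort5"])["ever_smoker"]
          .mean()
          .mul(100)
          .round(1))
print(pct)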
The first two male cohorts don't have huge sample sizes, so I wouldn't make much of the jump in the 1895-99 group. Males born around the turn of the 20th century already started with a high chance of ever taking up smoking during their lives, around 40%. That rose to a plateau of just under 50% for men born between 1925-39. Already by the cohort of the early '40s there is a decline, and it only plummets further in each cohort after.
Women, by contrast, started this period out very unlikely to ever get into smoking. That changed with the Jazz Age, which began around the mid-1910s, about the time that girls born around the turn of the century would be teenagers and be exposed to a whole new set of role models. Although it was born of that time, the glamor of female smoking outlived the Roaring Twenties. The cohort most likely to regularly smoke at some point was the 1940-44 group. After a half-century of steady growth, the first decline comes with the 1945-49 group, which only falls further with each one after. There is a jump in the late '50s cohort, though -- maybe when they were young teenagers during the Counter-Culture, smoking became briefly fashionable again for girls (not for boys, according to their graph).
Since the cohorts are fairly large, breaking them up into single-year groups still leaves sample sizes over 100. To pinpoint when the declines began, for males it looks like the 1941 cohort, and for females the 1947 cohort.
The first cigarette warning labels came in 1966 and have remained since. By that time, the 1941 cohort of males had already passed through their teens and early 20s, the window when most people who ever become regular smokers take it up, and had already begun turning away from smoking. I also doubt it was the earlier famous 1954 British Doctors Study linking smoking and lung cancer -- hardly any 13 year-old American boys would've heard about it.
For females, it's a little more plausible that the warning labels had an effect, since the 1947 cohort would've been 19 when they came out, still having some of their vulnerable period left. We can definitely rule out the academic studies from the '50s, since they would've been 7 when the famous one came out.
Still, we should prefer one cause to two. The labels can only possibly explain the female decline, requiring some separate cause for the male decline. The timing of the decline is so close in the two groups that we should just treat it as one change, just like when men and women both got scared of dietary fat and began eating more carbohydrates.
What that cause is, I don't know. It's not related to my usual stories about rising or falling crime rates. But it is worth noting that two "obvious" answers -- a huge study linking smoking and lung cancer, and dire warnings right there on the package -- are wrong. That doesn't surprise me, since those are both top-down influences, and those cannot change people's attitudes so dramatically and for so long. (And it's clear that this is an attitudinal change, not some lame economic change.)
In fact, since the warning labels came after the decline was already in progress, we see yet another case of the top-down solution being a mere delayed and symbolic reaction by the technocracy. The experts, whether in the government, academia, private sector, or whatever, just jump on the already-rolling bandwagon and shamelessly try to hog the credit for having set it in motion.
It could be as simple as a fashion cycle, where no external factors are needed to explain the rise and fall pattern. It would be just like an epidemic. That would be a local, bottom-up explanation, and those tend to pan out.
I don't like "fashion cycle" explanations when a bunch of cycles all fit on top of each other, like the ones I've been looking at for awhile with crime rates, etc. But if there isn't some other cycle to link it to, I'm OK with treating cigarette smoking as a fashion that came and went. There may well be some other collection of cycles that fit with the smoking cycle, and I'll keep my eyes peeled. For now, though, I have no clue. Someone else should look into it, though. But something that changed at the grassroots level, not some ineffectual policy or campaign by outsiders.
* As an aside, that's how real science gets done -- you're just playing with all kinds of different stuff, and some of it goes places you didn't intend. Blind variation, then selective retention. Starting with a question and collecting data that bear on it -- so-called "hypothesis testing" -- usually goes nowhere.
Some cool pattern could turn up in your data, but because you're in tunnel-vision mode when testing hypotheses, you'll miss what's right under your nose. You'll only think of it months or years later, when you're no longer thinking so narrowly about that project, just letting your mind wander. But why not start off that way, instead of wasting so much time with blinkers on?
January 23, 2012
Burger King home delivery and Starbucks pub
In a move that recalls the anti-social drive-in culture of the mid-century, Burger King has begun testing a home delivery service.
There's virtually nothing to be saved in terms of time or money. You have to live within 10 minutes of the restaurant, and they only aim to get your food to you within 30 minutes. So on your own, it's at most 20 minutes there and back, plus the time to get your food inside or via the drive-thru, and you get it faster. There's a $2 charge for the delivery, and you won't be spending that much on gas to do it yourself. What they're selling is the "luxury" to stay locked indoors all day.
You can't really fault the company, since they're only trying to keep up with consumer tastes, and people these days don't want to be out in the open. They've had to reverse their self-conception from a place that offers an experience special enough to leave the house for, to a stripped-down catering / outsource service.
I usually drop in a couple times a week to eat there, and it's not as enjoyable as it used to be because all of the windows are blotted out with ads. It's on a busy corner and would make a great spot for people-watching, but it's too inefficient to have the windows open up to the outside world for the handful of dining room customers, when they could be used to push the latest deals to the penny-pinching drive-thru majority.
One good thing, though, is the music. You can't expect good music in public, but Burger King's is usually listenable and unobtrusive. If they're not going to go for an exciting atmosphere inside, they should just play '80s adult contemporary hits. Something soothing and uplifting. I did hear "Mad About You" by Belinda Carlisle once -- I don't know who could hear that song in public and not break a smile.
Unfortunately you can't say the same about Starbucks music. It's always so cerebral (bebop) and self-conscious (folk) that it wakes you right up from the dream that you'd like to settle into when you're lounging around out of the house. It's not toe-curling, but it would be such an easy thing to fix to make the experience more satisfying.
Still it is my go-to hang-out since it's the closest thing to a neighborhood meeting place that you can find nowadays. They may begin to focus even more on that communal aspect by offering more food and booze. They're starting with beer and wine; hopefully they'll sell Irish coffee. In any case, it'll help mellow everyone out. I do like the direction the store's experience has taken, where it used to be a lek for attention-whoring professionals, and has slowly become a more chill, all-ages hang-out.
The one big misstep they've made is offering free WiFi. That just invites the parasites and cocooners. Reading newspapers, books, writing with your hand, etc., is all fine. But once you're hunched over a laptop, you might as well bring portable cubicle walls as well. It's off-putting and depressing to walk into a coffeehouse and behold a computer lab.
Once the society shifts away from cocooning, I'll bet it will start at a place like Starbucks, similar to the use of diners as young people came out of hibernation in the late '50s. They weren't as carnivalesque as the food court at a mall would later become, but they were at least keeping the embers of sociability burning in a withdrawn era, compared to the drive-ins and strip malls.
The last period of falling crime and cocooning lasted 25 years, and we're already about 20 years into this one. So hopefully around the end of the decade we'll begin to see signs of life again.
Do pastoralists make better taxi and truck drivers?
East Asians and Central Americans flock to all sorts of low-paying jobs, but you hardly ever see them driving taxis or trucks (long-distance, anyway).
The stereotypical cab driver is from north and eastern Africa, the Middle East, and the more northern and western parts of South Asia. Occasionally from Western Africa. The stereotypical truck driver is a restless Scotch-Irish hillbilly.
Being adapted to a more nomadic way of life certainly helps if you're going to be driving around all day. That weeds out the East Asians and some Mexicans, who are designed for sedentary life in large-scale intensive agricultural societies.
That still wouldn't weed out most sub-Saharan Africans and other Central Americans, whose horticulturalist ways also involve a good deal of moving about. They certainly enjoy going out cruising, but they wouldn't want to transport others for pay. They're too wary of strangers.
A cab ride is only a little different from hitch-hiking, so trust is crucial. Low trust will make the cabbie think that the customers are going to kill him, rob him, or skip out without paying. Low trust will also keep customers from getting in, thinking that they're just going to get ripped off.
It's a kind of guest-host relationship, where the taxi driver is the merciful host willing to help out the stranded guest, expecting a little something in return. Cultures of hospitality are the same as cultures of honor, since they're just two forms of an obsession with reciprocity. Hospitality is kindness repaying kindness, starting out generous; and honor is harm repaying harm, starting out threatening. And those both are found almost entirely in pastoralist societies.
The most helpful taxi driver I've known was from some herding region in East Africa. He was wiry, had a thinner and more pronounced nose than other black-skinned Africans, and spoke with pharyngeal consonants, unique to the Afroasiatic family, which includes Semitic and is mostly spoken by pastoralist groups. (It wasn't Arabic. Maybe Amharic.)
I wanted to make sure he got a tip, but the meter was running a little close to what I had on me, so I asked to be let out a couple blocks before the spot that I'd first said. He immediately sensed why I'd changed my mind, and said he'd turn off the meter and take me home, that I could pay with whatever I had -- "It's too cold for you to walk tonight." After I apologized that the tip wouldn't be that much for a late night trip, he brushed it off saying, "Oh no, that's too much anyway."
He may have been an extreme case, but you can't hold down a job like that if you don't have at least a milder level of that generosity. There will be too many complaints about how rude and inhospitable you are, and you'll get fired. Most times you may not even know that the person is generous because they won't be put to any test. You'll only find out when you're a little short. They're really one of the few groups of workers who are willing to cut you a break when you're in a bind.
Now, driving trucks long-distance doesn't tap into the guest-host psychology, but it does require drivers who can deal with and even enjoy long stretches of solitary, adventuresome activity. That's perfect for someone built to follow a herd of livestock, which isn't a big-group affair.
It's not hermetic either, though: bouts of close socializing punctuate the on-your-own flow of time. For herders, it's reuniting with kin, boisterous communal festivals, and the all-important guest-host relationships like stopping at a caravanserai to mingle and rest. For truck drivers, it may have been a honky tonk bar or a rest stop with a greasy spoon diner, places where you can interact face-to-face and even get worked up into a crowd-vibe with people trusting one another enough to cut loose and have some rowdy fun.
I wish we'd grown up living closer to my father's father, who was a truck driver. We got a good deal of exposure to footloose, unpretentious living through my mom's side (Appalachian hillbillies), but it would've been better if we'd gotten more from my dad's side too.
January 19, 2012
Civilization crumbles during Wikipedia blackout
"Imagine a world without free knowledge" -- if that means a world without Wikipedia, well, it would be like 2003. Even if it meant a world without the web or the internet itself, it would be like 1994.
We haven't gotten any richer, happier, or more productive since either date, so really who cares? You'd lose the minor buzz you feel from visiting your favorite sites, but they're more than replaceable in the real world.
The main reason people freak out so much when pondering the disappearance of the internet is that we live in an age of cocooning, and they're too dismissive or fearful of those real-world substitutes for farting around on the Facebook etc. They really would have nothing to do.
As for Wikipedia, it is incapable of increasing our wisdom, for technological reasons alone (forget who controls and edits it). It belongs with websites that only allow you to tunnel narrowly around an initial search, rather than browse broadly -- Amazon, Netflix, iTunes, Google, eBay, Pandora, and any other site that has so massive a scale of items on offer that you cannot hope to browse through them.
You must instead tell the search bar where you want to go, and trying to click away from that target will still keep you confined to a narrow range around where you started. Similar or related items, customers who like this also like that, also by this artist, and so on.
The best they can do to expose you to things you didn't even know about is to have a "featured item" or "new items". That's like the movie being played at a video rental store -- better than nothing, but not as good as browsing their selection.
A real encyclopedia allows browsing. In fact that's what you end up doing most of the time after the initial stage of "Oh I wonder what it says about this, Oh I wonder what it says about that!" That's true whether it's a general one or a subject-specific one. So browsing is fractal -- an encyclopedia only about animals still allows you to explore parts of the animal world you didn't even know about, and so could not have purposefully searched for.
In high school I had a Benet's Reader's Encyclopedia that I used purposefully as a reference for a little bit, then quickly switched to just flipping through pages at random and following through on the entries that sounded cool. Blind variation and selective retention -- the basic ingredients for evolution by natural selection. And you just can't get that first part with massive-scale sites based on a search bar.
It's not as though a person couldn't tunnel before if they wanted to; all of the specialized knowledge in Wikipedia is out there in books or databases. It's just faster and sometimes cheaper to access through Wikipedia. I use it for the #1 songs on the Billboard charts, to familiarize myself with the zeitgeist over time. But I could find that out some other way, though it might take a couple days instead of minutes, and maybe cost me something, though nothing prohibitive.
So in exchange for a tiny boost in convenience -- remember how well-run and productive society was before the web -- we've sacrificed the ability for our sight to wander into places we didn't even know were there. The search-and-tunnel websites make our view so hyper-specialized that we often cannot see what is right under our nose.
We try to make the best out of the internet, now that it's here, but it would probably have been better if we hadn't adopted it in the first place.
January 17, 2012
Getting a little banged up makes you stronger (historical data)
After listening to this EconTalk podcast with Nassim Taleb on how sometimes a little stress can make a system stronger, I remembered some data I looked at but never wrote up. Might as well now.
The property they talk about is hormesis, where some stress to a system, even a lot of it, can make the system stronger, although there is usually a limit that can be reached. For example, stressing your muscles by lifting heavy things makes them stronger over the long run because your body says "I just had to lift 100 pounds, and next time it may be worse," and beefs up in anticipation of that potentially greater stressor.
The medical literature shows this experimentally: a control group gets little or no stress, the experimental group gets lots of it, and the experimental group winds up stronger.
But what about changes over time in how much stress we're subjected to? Do we see more or less strength as people face more or less stress to their system?
I'm amazed at how uncoordinated and weak young people are today, but that's what you get when you never get thrown off your balance or have to lift anything heavy. It could be something as simple as running down a hill or crossing a stream over a thin log to improve your balance, or lifting and carrying stones to shove or throw into the creek to boost strength. Way more fun than atrophying in front of a video game, and you don't have to plunk down $60 for every spot you play around in -- the woods, a baseball diamond, or wherever.
A study by Khosla et al. (2003) looked at the rates of the most common type of fracture (distal forearm) in four time periods -- 1969-1971, 1979-1981, 1989-1991, and 1999-2001. The location was Minnesotan hospitals, and the population was people under 35.
Clearly the rate of fracturing a bone will be positively related to how often you put yourself in a dangerous situation. So along with the fracture rates from the study, I've plotted the violent crime rate (just for Minnesota, and using the average of the three-year period). That is a good enough proxy for how likely young people are to get into situations where they could fracture a bone -- not necessarily because they fracture their bone in a fight, but because young people are more rambunctious and rough-and-tumbling in a world of higher violent crime.
As expected, when people are more physical and risk-taking (as measured by the violent crime rate), there are more bone fractures. When crime rates fell during the '90s, so did the rate of fractures.
But that doesn't tell us how strong people's bones were in different times, since they were put to very different levels of stress. A young person who boxes professionally will get more fractures than another who has been strapped in bed all their life. To see how strong people were, we need to standardize the fracture rate by some measure of physical stress -- again I'll just use the violent crime rate. The ratio of fracture rates to violent crime rates says something like, "For a certain number of violent crimes in the population, how many bone fractures are there?" Higher values mean weaker bones.
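Here's a minimal sketch of that standardization in Python. The numbers below are placeholders, not the actual figures from Khosla et al. (2003) or the Minnesota crime statistics -- they're only there to show how the ratio is computed and read:

# Hypothetical rates per 100,000 -- NOT the study's real numbers.
fracture_rate = {"1969-71": 400, "1979-81": 430, "1989-91": 450, "1999-2001": 410}
violent_crime_rate = {"1969-71": 250, "1979-81": 330, "1989-91": 400, "1999-2001": 280}

for period in fracture_rate:
    ratio = fracture_rate[period] / violent_crime_rate[period]
    # Higher ratio = more fractures per unit of physical danger = weaker bones.
    print(period, round(ratio, 2))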
Now we see something very different. As crime rates rose from 1970 through 1990, young people's bones got stronger -- for a given level of physical danger and stress, they fractured fewer and fewer bones. Using this ratio as a measure of strength, they got 25% stronger over those 20 years. However, as crime rates fell during the '90s, their bones got weaker -- there's an increase in the ratio by 2000. *
The 1970 ratio is pretty high, even though that was a rising-crime period. I think that's because the study looked at anyone under 35, so a decent share of the 1970 group was later Silent Generation people who spent most of their childhood locked indoors reading horror comic books and listening to radio programs, rather than roaming around the woods and playing sports like the Baby Boomers did.
Likewise, the 2000 ratio is an uptick, but not a leap upward. Again in that group there are generational differences in childhood wildness -- it includes a lot of Generation X and Gen Y people, along with the more sheltered and weaker Millennials. The strongest group is the one from 1990, when it was made up mostly of Gen X and Gen Y, whose bones developed in childhood during the rough-and-tumbling '70s and '80s. (Green punch buggy, no hits back!)
If peak bone density is reached by age 25, and if growing up in a rising-crime environment makes bones stronger (as shown above), then the strongest cohorts would be those who were born in and reached 25 entirely during rising-crime times. That includes people born from 1959 (the first year of the crime wave) through 1967 (who would reach 25 when the crime rate peaked in '92). Obviously we'll have to wait until each cohort reaches elderly age, and look at rates of osteoporosis, fractures, etc., controlling for other health-related factors that show generational differences (like smoking).
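As a quick check on that arithmetic, a one-off sketch (the cutoff years are just the ones named above):

CRIME_WAVE_START = 1959      # first year of the crime wave
CRIME_WAVE_PEAK = 1992       # year the crime rate peaked
PEAK_BONE_DENSITY_AGE = 25

strongest_cohorts = [year for year in range(1950, 1980)
                     if year >= CRIME_WAVE_START
                     and year + PEAK_BONE_DENSITY_AGE <= CRIME_WAVE_PEAK]
print(strongest_cohorts)     # 1959 through 1967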
Finally, this shows the importance of looking at the long-term effects of stressing your bones while growing up. Sure, the Millennials may fracture bones at a lower rate when they're young, but that's only because they're insulated from physical demands in the first place. You can run but you can't hide from the real world of accidents, though -- eventually you'll get caught off-balance during your ordinary, low-stress routine. When that happens, your weak bones will shatter because of a simple fall. That may not happen right away, but you can't cheat fate into old age. And other basic human activities will be out of the question, like moving stuff, dancing, teaching your kids sports, just to name a few.
The better way is to be more physically engaged while you're young, at the risk of greater fractures in this "training period," in order to build stronger bones to last from early adulthood into your elderly years. Again, peak bone density is reached no later than 25, plateaus until about 30, and tends to decline after that. A sheltered childhood and adolescence is not something you can correct later on down the line.
When adults overly protect young people, they wind up impairing their growth. Social isolation makes them more egocentric, autistic, and avoidant as adults, but here we see an even more concrete case where it makes them physically weaker. It's not just helicopter parents shielding their own kids, but any group of grown-ups charged with guarding young people -- nutritionists, education policy makers, whoever.
There's nothing "traditional" or "family-friendly" or "health-conscious" about any of this, despite the rationalizations that the shielders offer. In the good old days kids were always horsing around, getting dirty, and scratching themselves up. Helicopter parents can only fool themselves into believing their ways are good for the child by exaggerating what kids would actually experience to life-threatening levels -- rolling around in filth all day, bareknuckles boxing at the dinner table, and dismemberment from falling down the branches of a tree.
Time for the whole country to just take a fuckin' chill pill and let young people naturally bang themselves up -- it's for their own good.
* Hopefully the authors will continue this study for the 2009-2011 period, although that wouldn't be published for at least a year. I would predict that the fracture rate will decline further from the 2000 period, just as the violent crime rate has, but that the ratio between the two will go up.
January 16, 2012
The cultural euphoria from 2003 to 2006 -- was 9/11 the source?
Those sure felt like different times, compared to the '90s, early and late 2000s, and so far into this decade. It may not have been a reversal of the trend over the past 20 years toward more trivial, off-putting, and meaningless popular culture, but it sure was a breath of fresh air.
For a brief time, a decent minority left their silly lifestyle centers and soulless big box centers, heading back to enjoy some lost-in-the-crowd excitement -- at the mall.
Patriotism came back a little, admittedly in the service of a foolish and pointless war. You didn't see that during the Clinton years, during Bush's campaign or during the early days of his administration. It had already cooled off sometime in 2007, and of course the Republican candidate who tried to whip us up into taking a stand against, well, the whole world was defeated in 2008. Since then it's been the Clinton Years, Uncensored Extended Version. Although not within the mainstream of the GOP, the anti-immigration movement seems to have been at its peak during the mid-2000s.
Live 8, a multinational benefit concert, raised money for poor countries somewhat like Live Aid did back in 1985. When patriotism is high, we're also most charitable toward other nations, provided we don't see them as a threat.
Colors exploded throughout large areas of the visual culture, primarily in clothing but also to a lesser extent in home and retail decoration. I don't recall what graphic design of the mid-2000s looked like. Product design had much less color, the Apple look being exemplary. Architecture had even less. Movies didn't offer a very lush color palette either -- it was mostly washed-out CGI junk, like video games.
Guys were still pretty covered up (long shorts, baggy shirt), but girls started showing more skin. The uniform was low-rise jeans and a midriff-baring top with spaghetti straps, showing off a several-inch band around her lower torso, as well as her shoulders, collar, and upper back. Mini-skirts popped back into style, including the new "ruffle" mini-skirt (usually white, though occasionally the more eye-catching yellow). After the mid-2000s, girls returned to the trend toward covering up, with higher-rise jeans, tops that drop far below the waist and hips, "baby doll" tops that obscure more of the shoulder and collar area than spaghetti straps, and mini-skirts tossed out altogether.
For the first time in a long while, rock music stirred awake. Albums by The Yeah Yeah Yeahs and The White Stripes got the driving sound started in the spring of 2003. By the end of the year it was clear that contemporary bands were going to revisit an earlier, more satisfying sound when The Raveonettes released their easy-going surfer take on Psychocandy by The Jesus and Mary Chain, while No Doubt took the direct route by covering "It's My Life" by Talk Talk.
The rock revival was everywhere in 2004 and '05, and I'm incredibly grateful to have had the chance to go out dancing to it twice a week when I lived in Barcelona. After those two jam-packed years, things already began stalling out during 2006. The only things that come to mind are the self-titled album by She Wants Revenge and The Black Parade by My Chemical Romance. That's stretching it, but you can't even manage that with the guitar-based music since then.
That same timing shows up in R&B and dance music too. In 2003, OutKast released their carefree, get-up-and-move song "Hey Ya!" The last likable album came in summer 2006 when Nelly Furtado's Loose took a stab at counteracting the self-consciousness and self-aggrandizement of recent R&B and dance music -- Spice Girls, 50 Cent, The Black Eyed Peas, etc. Overall the "black music" genres weren't as lifted out of the quagmire as white music was, although I do confess that one of my guilty pleasures (at least when I can dance to it in a club) is "Get Low" by Lil Jon.
People sensed that the zeitgeist wasn't barreling ahead in the direction of the '90s and early 2000s, but moved at least a little bit backward toward more exciting times. VH1's I Love the '80s series ran in three installments from December 2002 through 2005, and '80s nights sprang up across the country. Some of that is still going on, but the nostalgic dance parties just ain't what they used to be. Since I started going in 2007, I've noticed much less socializing at '80s night, though thankfully this one continues to draw a crowd.
* * *
It may not have touched all areas of the culture, and even where it did it was not a total throwback to the good old days, and it certainly didn't last very long. Still, this anomaly in the zeitgeist calls for an explanation. Of course it may be just a fluke, but I think it may have been a response to 9/11. The culture looked less typical of a falling-crime era, and a bit more like a rising-crime one -- yet crime rates were steadily falling, so actual crime couldn't have been the cause.
The only other source of such great harm is external, and 9/11 sure made us feel more threatened by violence than we'd been used to. It was unexpected, visually powerful, and brought a high death toll. Plus, who knew what they'd escalate to next time? There was a feeling that the worst wasn't behind us just yet, so that the near future would be a period of rising risk of violence.
Why didn't it have an immediate effect on the culture? A little over a year went by before the shift in the zeitgeist. It's probably because we can't be thinking about the threat too consciously, as we were in the aftermath, trying to make sense of it. There's a vast literature in psychology called "terror management theory" that shows how differently people think, feel, and behave when you prime them with thoughts about death. They tend to respond in the way you'd expect from my rising-crime posts.
One broad finding is that the effect is stronger once you give the subjects a longer time for it to sink in and affect them unconsciously. Something similar might have gone on after 9/11 -- for awhile we were thinking about violence and death on a very conscious level. Only after we'd moved on past that stage did it begin to affect us unconsciously.
The presence of the Iraq War in the popular mind served to remind us of the threat off and on through the mid-2000s, but after no more terrorist attacks struck us like the first one, we became aware that the threat had probably gone away. We didn't jump to that conclusion right away in 2003 or '04, since maybe it would happen next year. Still, after 5 years, it seemed that the future was not going to be as increasingly violent as we'd thought, and we returned to the zeitgeist more typical of a falling-crime period.
Why did it affect more than just America? Most places had no 9/11 of their own; even the bombings at Madrid and London were not as deadly or visually arresting as planes crashing into skyscrapers. Given that the attackers were from outside Western civilization, the West in general likely felt threatened. Who knew who would be next?
Why didn't it affect some parts of the culture? The national Youth Risk Behavior Survey doesn't show a jump in drug use, joyriding, or sexual activity among high schoolers during the mid-2000s. Young girls were definitely more flirtatious, but on the whole they must have been holding out for a clearer sign of rising violence to conclude that they needed to start earlier and mate with more partners. Overall it was still part of the attention whore trend of the past 20 years.
Movies, TV shows, and video games didn't see much of a change from the prevailing spirit either. These are narrative media, whereas those that showed the change most strongly were popular music and personal appearance. Narratives take a long time to weave together, so a major change in storytelling must take longer to respond to rising crime rates. That was true for the last real crime wave that began in 1959: there were a handful of good movies from the '60s, but the bulk of spellbinding new movies come from the mid-'70s through the mid-'80s.
The lack of change in architecture is similar -- it takes so long to plan and erect major buildings that they won't respond so quickly to a brief rise in crime rates. Again it was like that in the last real wave -- the International Style took longer to fade during the '60s and early '70s than did styles in music from the mid-century.
Popular music and personal appearance are far quicker to plan and execute, and they don't involve any narrative component -- more of a raw emotional expression. So they'll be more responsive to other social changes, such as in the threat of violence.
Could the euphoria have been due to the housing bubble? I think the housing bubble euphoria was just part of this larger euphoria, a spillover effect. The two major real estate crazes that come to mind are from the 1920s and the 1980s, both periods of soaring crime. People are just more willing to go for it, at least in some ways, when their physical security seems less guaranteed into the future. The already-inflated housing bubble just provided one more outlet for the general euphoria to express itself.
All of the other pieces of the housing bubble were in place by the early-mid 1990s, plus the wider belief that everyone is a smart investor, that the real rise in prosperity during the '90s would only go up, etc. Yet there was no general euphoria then, as I've described it above. Those pieces of the bubble are still there -- despite a brief gasp when the recession first struck -- and again the euphoria has been gone for at least 5 years.
Also, I've detailed all sorts of links between rising crime and the zeitgeist changes above, and possible reasons why they're linked. The terror management people in psychology have done something similar in a brief lab setting. But why would a housing bubble make girls show more skin, or young people move their tastes more back toward rock music? Again it's not worth generating just-so stories there since the euphoria only lasted a few years, while the housing bubble got going well before, and large pieces of that economic and political environment are still in place.
January 13, 2012
Mid-century man-children
One of the striking features about our culture since the early 1990s is how juvenile people's interests have become, among younger and older alike. Going from a world where the most popular icons for children were He-Man and G.I. Joe to one where they are the Teletubbies and SpongeBob is sad enough, but it's not the end of the world if elementary school kids prefer kiddie junk.
What's really worrying is the juvenile tastes of so-called grown-ups. Being addicted to video games well into one's 20s and 30s (and before long, 40s). Creating a mass market for blockbuster movies based on kids' toys and cartoons. They made a Transformers movie when I was little, but it wasn't marketed to everyone. Even I didn't see it, and I was part of the small target audience. Listening to pop music that is so drained of emotion it sounds like they haven't even gone through puberty yet. And on and on...
If the main driver behind changes in the zeitgeist is whether the violence rate is rising or falling, we should expect to see something qualitatively similar from the mid-1930s through the late '50s, another falling-crime period. Since most of that time has faded from memory, we have to turn to historians of popular culture, as well as look into things on our own.
John Springhall's book Youth, Popular Culture and Moral Panics: Penny Gaffs to Gangsta-Rap, 1830-1996 details the elite outrage over penny gaffs and penny dreadfuls in Victorian England, gangster movies and horror comic books mostly in mid-century America, and gangsta rap during the '90s. For that decade he should have included the violent video game panic, which gave rise to the ESRB rating system. Not surprisingly these are all from falling-crime eras, when moral reformers act more like thought police and media censors, whereas in rising-crime eras they act to keep out poisonous physical substances like drugs. (That's another topic, which I'll get around to sometime.)
For now, let's stick to the mid-century period and comic books. Although there were a few, admittedly very popular, superheroes introduced through comic books in the 1930s, like Batman and Superman, by the '40s that trend died off and would not be revived until the '60s. During the '40s and '50s, comic books moved more toward the genres of horror, crime, Westerns, romance, etc.
Relying only on others' descriptions, the Western genre sounds like simplistic war movies like 300, and the romance genre like your standard chick lit. The covers and content of horror comics are strikingly similar to the recent "torture porn" in movies and video games. The crime genre sounds and looks more like the Law & Order TV show, especially its lurid spin-off Special Victims Unit. There were entire series of crime/gangster comic books from the '40s and '50s that focused just on women criminals. And they peddled exactly the same fierce-minded butt-kicking babes and GIRL FIGHTS! that provide shower-nozzle masturbation material for today's emotionally stunted and feminized nerd audiences. (See here for a great gallery of crime comic covers.)
Were comic books back then just kids' stuff? Not at all (p. 129-30):
An American survey of 1950 revealed that there was indeed a large adult readership for comic books, horror and otherwise. Roughly 41 per cent of adult males and 28 per cent of adult females read comic books regularly. In the same year a government-sponsored survey of an Ohio town found that 54 per cent of all comic book readers were over 20 years of age. These percentages are placed in perspective by the 95 per cent of boys and 91 per cent of girls between six and 11 who read comic books, while 80 per cent of all American 'teenagers'... read comic books as well, usually a dozen or more every month in the 1950s.
Thankfully comic books weren't as common among adults as among children, but the share of grown-ups reading goofy immature junk like that still should have been about 0%. I'm not slamming popular vs. elite entertainment, but the specifically juvenile aspect of it all.
I wish I knew more about the history of radio dramas, since that was another huge form of popular entertainment that we don't remember anything about by now.
As for popular music, I'm not going to rate every top song for how childish it is, since that's a bit subjective. It's easier to find hit songs that are clearly kiddie stuff, and to treat them as the tip of the iceberg (or the tail of a distribution). Here are just those that reached #1 on the Billboard charts:
1948 "Woody Wood-Pecker"
1949 "All I Want for Christmas (Is My Two Front Teeth)"
1950 "Rudolph the Red-Nosed Reindeer"
1953 "(How Much Is) That Doggie in the Window?"
1958 "The Purple People Eater"
1958-9 "The Chipmunk Song (Christmas Don't Be Late)"
Remember, this isn't a list of songs that were played on the radio at all, but those that topped the charts, and not for a children's category but for pop music overall. Some of those are likeable enough for adults to listen to them, although not so great that they should reach #1. And others can't even say that -- even as a child I couldn't stand that annoying doggie in the window song. I heard more grown-up music in the theme songs to my cartoons.
When the crime rate started rising in 1959, people wanted to grow up sooner, so this trend gradually died off. In 1962 "Monster Mash" hit #1, although at least that one was about parties and dancing. The last major entry was "Hello Muddah, Hello Fadduh", which reached #2 in 1963. After that, when "Sixties music" proper began, you didn't hear any of that stuff. At least not until 1997, when "Teletubbies say 'Eh-oh!'" topped the UK Singles chart.
Obviously other factors contribute to grown men acting like 10 year-olds, like living at home into their 20s, not having to earn their keep even if they do move out and get a job, working in a more feminizing service economy as opposed to a manufacturing one, and so on.
Still, the influence of the rising vs. falling crime trend is strong enough that you can see a lot of today's man-children culture thriving in the mainstream during the '40s and '50s. Particularly compared to the Jazz Age before and Rock and New Wave Age after, the relative immaturity of the mid-century period jumps out. Movies like Double Indemnity, Sunset Boulevard, and Rear Window really were exceptional, the refugia of a cultural ice age.
January 11, 2012
Getting attention vs. being popular
One of the most striking changes over the past 20 years is how little importance young people attach to being popular anymore, part of the general shift toward cocooning.
It wasn't an overnight switch. In the mid-'90s, the pretty girls in my middle and high school still made some effort to mingle with a wide variety of people, to be more broadly liked and accepted, and so did the athletic guys. The central teenagers on My So-Called Life didn't belong to the in-crowd, but they totally wished they did. Clueless was probably the last charitable portrayal of the popular kids.
Still, Nirvana lyrics like "I'd rather be dead than cool" resonated with a lot of adolescents, as did the snarky-ironic song "Popular" by Nada Surf. Unlike Revenge of the Nerds or Weird Science, where the outcasts wanted to be accepted by normal people, by the time American Pie came out, it was OK to be total losers.
Since then there hasn't even been the residue of the desire for popularity that was around in the '90s. Harold & Kumar and Superbad only strengthened the message that it's cool to be sheltered dorks for life, while Mean Girls joined the pile-on against the in-crowd, and without the humanized view of them that made Heathers so enjoyable. The off-putting "it's all about me" attitude in pop music has gotten even worse. And you obviously aren't trying to gain other people's sympathy and acceptance by making a kabuki face in all your pictures. The only exception was that MTV show Made, where high schoolers worked to improve themselves in some way, typically with the goal of fitting in better at school.
Even in these leave-me-alone times, we're still a social species and crave some form of recognition from other people in order to feel good about who we are. But since people have cut themselves off from each other, they have no interactions to use to find out who is likeable, and no one winds up feeling well-liked. The only recognition they can get comes from looks alone, so cocooning leads to attention-whoring as the main way to feel approved of by others.
This link between cocooning and attention-whoring (as opposed to popularity-seeking) showed up in the past two periods of falling crime rates, when people isolate themselves. Here's an excerpt from a complaint about how lacking in femininity and affability English women had become by the mid-Victorian era:
The Girl of the Period is a creature who dyes her hair and paints her face as the first articles of her personal religion — a creature whose sole idea of life is fun; whose sole aim is unbounded luxury; and whose dress is the chief object of such thought and intellect as she possesses. Her main endeavour is to outvie her neighbours in the extravagance of fashion. No matter if, in the time of crinolines, she sacrifices decency; in the time of trains, cleanliness; in the time of tied-back skirts, modesty; no matter either, if she makes herself a nuisance and an inconvenience to every one she meets; — the Girl of the Period has done away with such moral muffishness as consideration for others or regard for counsel and rebuke. It was all very well in old-fashioned times, when fathers and mothers had some authority and were treated with respect, to be tutored and made to obey, but she is far too fast and flourishing to be stopped in mid-career by these slow old morals; and as she lives to please herself, she does not care if she displeases every one else.
That's from 1868, and it only got worse with the egocentric "New Woman" of the later Victorian era. Unless we've looked into it, we tend to treat all of history before 1960 as the same, but it wasn't at all. The self-focused, attention-craving, yet behaviorally prudish woman of Victorian times isn't so different from her counterpart of the past 20 years. And both are worlds away from the women in Jane Austen's world, who strove to be well-liked, and who therefore had to socialize with a range of others. But that was during the more outgoing and rising-crime Romantic-Gothic period.
Earlier I looked at the attention-whore culture of the mid-century, as shown in Time Magazine's original 1951 ethnography on the Silent Generation. Here's an excerpt, which reads exactly like a scene from today's Girls Gone Wild culture, right down to the abstinence from actual sexual activity:
Says a Minneapolis priest: "The young American male is increasingly bewildered and confused by the aggressive, coarse, dominant attitudes and behavior of his women. I believe it is one of the most serious social traits of our time-and one that is certain to have most serious social consequences."
The shrieking blonde ripped the big tackle's shirt from his shoulder and Charlestoned off through the crowded room, fan-dancing with a ragged sleeve. In her wake, shirts fell in shreds on the floor, until half the male guests roared around bare to the waist. Shouts and laughs rose above the full-volume records from Gentlemen Prefer Blondes. The party, celebrating the departure of a University of Texas coed who had flunked out, had begun in midafternoon some three hours earlier. In one corner, four tipsily serious coeds tried to revive a passed-out couple with more salty dog (a mixture of gin, grapefruit juice and salt). About 10 p.m., a brunette bounded on to the coffee table, in a limited striptease. At 2 a.m., when the party broke up, one carload of youngsters decided to take off on a two-day drive into Mexico (they got there all right, and sent back picture postcards to the folks).
Just as the Victorian woman was worlds apart from a Jane Austen heroine, the woman of the falling-crime mid-century would have been unrecognizable to a young woman of the rising-crime Jazz Age. Several of Fitzgerald's short stories center around a debutante or similar girl making the rounds during parties, mingling and dancing with many young men, and feeling fulfilled for being so widely well-liked and appreciated. One even emphasizes that this necessarily involves self-sacrifice, as she must talk about all sorts of topics that don't really interest her, dance with men who don't smell or look perfect, and so on. But how else can you expect to appeal to a broad group of others?
During the most recent wave of violence, when everyone came out of their cocoons, we were also near or at our peak for patriotism, so this type of popular adolescent got a special name -- All-American. So likable, and so open in approaching others, that they could fit in and be liked no matter where in the country they went. I can't remember the last time I heard anyone described as an All-American girl or a real All-American kinda guy. Kelly Kapowski, perhaps, in the early '90s. Since then we've become a lot less likable and a lot more avoidant of other people, putting our guard up even when we do approach others.
The larger take-home message is that we are almost entirely ignorant of what price we pay to enjoy a falling crime rate. The main price is that we must cocoon, draining the pool of potential victims in public spaces. But it doesn't end there, since cocooning entails all sorts of other corrosive behaviors like attention-whoring as the main way that people seek validation, instead of socializing in order to be more popular. That in turn deprives people of years of opportunities for developing their social skills, turning the population in a more autistic direction instead.
January 6, 2012
Suburban archaeology
This Christmas vacation I've spent a lot of time re-connecting with the places no one visits anymore, like the woods, especially the real thing that isn't right alongside a bike path or road, where you have to trample through leaves, sticks, and logs, and where you often have to cross a creek by jumping or very steadily walking across a fallen tree.
I hardly recognize most of the stores in my neighborhood shopping center. The bowling alley with arcade games, the mom-and-pop video rental store that, unlike Netflix, allowed for browsing, the army-navy surplus store, the deli that catered to normal people instead of the drinkers of kumquat-tinis, the total lack of anything geared to Mexicans -- all of that is ancient history, gone within the past 20 years.
One of the few places where you can still go and feel at home is the public space that lies off the beaten path. Public spaces that house a built environment, like the mall, the roller rink, or the record store, can all be razed and re-built into a "power center" with big box stores, or at best get converted into the organic doggie salons that fill up a "lifestyle center." With those buildings already standing, and their locations already known to customers, developers would rather screw up that part of your neighborhood than clear out a stretch of woods and try to build and popularize something from scratch.
Even the uncontested public spaces like the woods don't look exactly like they used to -- like, where the hell is everyone else besides me? The general trend is to cocoon right before the peak of the crime rate and during the falling-crime period. We saw that in the mid-century, and we've seen it again since the early '90s. Going out in the woods and seeing no one, except along the bike paths, is just one more example. (That also shows it's not due to the winter weather, since some people are out, just not in the off-the-path parts.)
Still, the woods may have been de-peopled, but at least they have preserved the other signs of human life. Because the culture is so different in rising vs. falling-crime times, it is like visiting the ruins of a vanished civilization, almost right in your own back yard.
In two posts to follow I'll look at the changes over time in the writings that people carve into trees, and in the drink containers they leave on the ground. Even something as mundane as these cases tells us a lot about the ranges that people cover -- do they visit the woods at all, and if so, how close to the paths do they stick? -- as well as what they used the spot for -- did they drink alcohol or soda, did they carve their own initials or include someone else's too?
Fortunately, people leave dates carved in trees, and drink cans and bottles can be dated pretty well. It won't be a SPOILER!!! to say now that it looks like the time to be alive was the mid-1970s through the mid-'80s, since that's what all other signs point to. But it is nice to see it with your own eyes, particularly if you weren't there for it or have only hazy memories of it, whether because you were too young or too stoned.
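To make the tallying concrete, here is a minimal sketch in Python of how the dated finds could be binned into five-year periods and checked against how far off the path they sit -- the sample records and field layout below are made up purely for illustration, not actual finds from these trips.

    from collections import Counter

    # Each find: (year the artifact dates to, kind of artifact, distance from the path in meters).
    # These records are hypothetical placeholders, not real field data.
    finds = [
        (1978, "carved initials", 120),
        (1983, "beer can", 95),
        (1986, "carved initials", 60),
        (1994, "soda bottle", 10),
        (2006, "soda bottle", 5),
    ]

    def bucket(year, width=5):
        # Collapse a year into the start of its five-year period, e.g. 1983 -> 1980.
        return year - (year % width)

    # Count finds per five-year period, and average their distance from the path.
    counts = Counter(bucket(year) for year, kind, dist in finds)
    for period in sorted(counts):
        dists = [dist for year, kind, dist in finds if bucket(year) == period]
        avg = sum(dists) / len(dists)
        print(f"{period}-{period + 4}: {counts[period]} finds, about {avg:.0f} m off the path")

Run over a real notebook of finds, a table like that would show at a glance whether the carvings and cans cluster in one window of years, and whether the older ones tend to lie farther from the paths.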
January 2, 2012
Why some '90s songs were better than you would've expected
Looking over songs that were popular after the 1992 peak in the violence rate -- as the culture became a lot less energetic, carefree, and emotionally open -- I found that some stand out for being catchy rather than repulsive or saccharine. They sound like they came from a slightly earlier time, and in fact they did.
During the 1991-92 period, one phase of the zeitgeist was grinding to a halt and the society was switching gears to a new one. To make the picture clearer, let's just look at songs that hit it big in 1993 or later, after the limbo period.
"Two Princes" by Spin Doctors was recorded back in 1990 (and released in '91), a year whose culture-feel is indistinguishable from the late 1980s.
"Hey Jealousy" and "Found Out About You" by Gin Blossoms were originally recorded and released in 1989, during the heyday of college rock. When they were re-recorded for their 1992 album, the tempo was a little slower and the mood a little more reflective, but they hardly sound any more different than a song's studio and live versions performed in the same year.
"I'm Gonna Be (500 Miles)" by The Proclaimers became famous when it was later included on the Benny & Joon soundtrack, but it was first recorded and released for their 1988 album.
"Show Me Love" by Robin S. was recorded in 1990, although it was given somewhat different instrumentation in the version that became a hit. What carries the song is her voice, though, and the vocal track sounds the same. What a breath of fresh air when dance vocals had moved toward expressionless speaking instead of singing -- C+C Music Factory, Real McCoy, Crystal Waters, etc.
"Can't Help Falling in Love" by UB40 was a cover of an Elvis song from 1961, even if the slightly faster reggae interpretation did make it sound fairly different.
"Come Undone" by Duran Duran is a bit more of an exception, although the English homicide rate didn't peak until 1995, so it was still made in the creators' rising-crime environment.
"Dreamlover" by Mariah Carey gets some recognition for not having been recorded before its 1993 release, although how catchy it is can be debated. It's a bit too consciously sweet for the listener to get lost in it and feel like singing it the rest of the day. That's about as good of an exception as I could find.
These catchy tunes were all popular in '93 and '94, and after that you couldn't find even a handful, aside from the 2003-'06 period when acts consciously tried copying the sound of the late '70s through the mid-'80s. In one way or another, the seeming exceptions after 1992 are rooted in a rising-crime zeitgeist.