Flipping through Wikipedia, I wound up on the entry for helicopter parents. Everyone knows what they are and roughly when they showed up big-time (early 2000s, after a transition period of growth during the 1990s). But when it comes to accounting for this huge change in parenting styles, no one can come up with a convincing explanation.
The main reason they're clueless is that the human mind is not built to study things scientifically; in particular, it isn't built to take the broadest view of something and then explain it with as few ideas as possible. So commentators talk about helicopter parents as though the parents of Millennials were the only large-scale example of this parenting style. Yet we all know of an earlier period of prolonged and widespread helicopter parenting. Don't remember? How about this --
"You'll shoot your eye out!"
The mother in that movie, in typical overparenting fashion, bundles up her son so heavily and tightly that he can't even move his arms or get up when he falls down.
A Christmas Story is set sometime in the late 1930s or early '40s, at a time when the crime rate was plummeting from its peak in 1933, ending the wildness of the Jazz Age. That's just like the timing for the current generation of helicopter parents -- it got its start after the 1992 peak in the crime rate and had risen to such a high level by the turn of the century that it started showing up in popular culture. When Baby Boomers look back on their childhoods, it's not until the late 1950s, as the crime rate began soaring once again, that they start to recall how liberated they'd become from their parents' hovering, such as in the movies Stand By Me (set in 1959) or The Sandlot (set in 1962).
It's odd that no one is aware of the previous era of helicopter parents since a good chunk of the Baby Boomers had them as their parents, and a lot of Generation X and younger (but pre-Millennial) people had them as grandparents. My grandmother was born in 1921 and had four children from 1941 through 1955, so she was a parent almost entirely during falling-crime times. (Only when her youngest child, my mother, turned 4 in 1959 would she even have had a hint that violence rates were getting worse.)
Not surprisingly, she is almost a caricature of a helicopter parent. Parenting styles fossilize into place when a person has kids; they don't change when grandchildren come along. My grandmother treated me, my brothers, and my cousins just as hoveringly as she did my mother and her siblings when they were growing up. My mother, aunt, and uncles still bristle when she tries to overparent them or us grandchildren, even though they are in mid-life or retired and we grandchildren are in our 20s and 30s.
"That there knife's liable to fall off and cut someone if you leave it near the edge of the counter like that..."
Mom! ... it's not even hanging over the side.
"You kids is gonna freeze out there if you don't -- "
Mom! ... they're grown boys now. They know how to dress warm.
"Are you sure you 'uns don't want more to eat? I just don't want you to get... anemic. You know."
Mom! ... they already had three helpings. They're not going to starve.
I love my grandmother, but she is an inveterate hoverer. Also like today's overparenting adults, she has always focused most of her attention on, and derived most of her meaning from, her kids and grandkids, so much so that she doesn't mind giving up a separate social life of her own. People who became parents when the violence level was soaring, however, sought to keep something of a distance from their kids, both to keep from getting too attached in case they lost them, and to encourage toughness and independence, so that when the kids finally confronted an increasingly chaotic world, they'd be prepared.
So even as recently as 1984, my mother went out to dance clubs every weekend, with my dad if she could drag him out or with her girl friends otherwise. Parental independence held true back through the '60s and even late '50s, as the movies mentioned earlier show. Once you get back to the '40s, though, you get the Christmas Story scenario where the parents aren't out and about doing their own thing, and wind up being a bit too present in their kids' lives at home.
This greater level of indulgence is what allowed Dr. Spock's Common Sense Book of Baby and Child Care to reach superstar status when it was released in 1946. If parents were not already inclined to shower affection on their kids, they would have rejected the book as obviously false. But it struck a chord. Earlier childcare books advised parents to maintain a healthy amount of distance, which was fitting for the rising-crime times in which they were written (from the turn of the century up through 1933). Now that the violence level has been falling for almost 20 years, parents have switched back to unrestrained indulgence and affection -- they see a much safer and more orderly world, and so less of a need to toughen their kids up to face it head on.
The Wikipedia article also tosses around the theory that helicopter parenting is due to the parents being Baby Boomers. Well, both of my parents were born in the mid-'50s, and they were never helicopter parents, even during the 2000s when the style had become too common to ignore. The same is true for the parents of all the friends I ever had, or anyone I went to school with. It's only the Baby Boomers who waited until the late 1980s or early '90s to have kids who became helicopter parents.
I've shown before that trust levels fall before the crime rate does. That relationship is causal -- when people no longer trust others, they hunker down and shrink their lives into a very tiny private sphere. This dries up the pool of potential targets that impulsive, opportunistic criminals rely on to thrive. If people who are withdrawing trust also have kids, they will take the same approach with them too -- keep them out of public spaces and locked up indoors (or at a tutoring center), where they can be closely monitored and micro-managed by the parents.
In fact, a good number of helicopter parents aren't Baby Boomers at all. The first year of the baby bust was 1965, so if that cohort had kids in their early or mid-20s, their children would be Millennials and the parents would be hoverers. The same goes for anyone born even farther from the baby boom.
Wikipedia also flails for a cause by pointing to the adoption of cell phones, which allow parents to keep much closer tabs on what their kids are doing. That was what really drove home for me how different parents have been lately: when I made some undergrad friends upon starting grad school a few years ago, I saw how they had to answer every call from their parents no matter what. Otherwise, they'd get bitched out and suspected of doing something bad -- "Why didn't you answer your phone when I called you?!" I felt like ripping the phone away and telling the parents to go get a life, but figured that would put a damper on our friendship.
Still, there were no cell phones in the late 1930s or '40s or '50s, so we don't need to mention them at all when explaining the helicopter parent phenomenon. Plus there's the pattern of the '60s through the '80s -- we went out whenever, wherever, and with whoever we wanted (only a slight exaggeration), and we weren't required to "touch base" every hour or so, often not at all until we showed up at the house again. "Oh, there you are! Well we're just about ready for dinner, so have a seat..." Our parents could have made us call them from the ubiquitous pay phones (perhaps collect), the phone in whoever's house we were hanging out at, etc.
Yet we were never required to do that, even though the phone technology of the time certainly allowed it. (Of course we often did call home -- but more to let our parents know what we were going to do than to ask their permission.) Technological changes have no effect on our social or cultural lives unless we already wanted to move in that direction.
January 31, 2011
January 30, 2011
Holiday-specific products are lower in quality and higher in price
Looking over the chocolates in the Valentine's Day section of the supermarket today, I noticed how inferior the ingredients are compared to those in everyday chocolates. Like several candy bar brands, quite a few of the Valentine's Day chocolates had replaced most or all of the cocoa butter with vegetable oils -- and not even a nice one full of saturated fat like palm kernel oil or coconut oil, which would at least retain some of the creaminess, but inflammatory polyunsaturated junk like soybean or cottonseed oil, usually hydrogenated.
I also noticed a greater proportion of chocolates that had some kind of cheap corn syrup instead of sugar or honey, and virtually none that had things like egg whites added for extra richness. Those are standard for a Toblerone bar, which isn't anywhere as expensive per gram as the Valentine's Day chocolates, but does cost more than a Hershey bar because it has higher quality ingredients and tastes better.
How can that be? -- that they deliver a lower-quality product and charge more?
When someone's buying a gift for a special occasion, one that's supposed to be sacred like Valentine's Day, they're not supposed to scrutinize what's in it. That's what you do when you're buying something for use, not for exchanging it as a signal of your thoughtfulness. The same goes for the recipient: they're not going to put the ingredient list under a laboratory microscope because they're not eating them for personal enjoyment but to show that they acknowledge and appreciate the gift.
After all, if these Valentine's Day chocolates were so great at giving personal enjoyment, wouldn't they be sold throughout the year in the candy aisle, rather than only a tiny slice of the year?
Although it would be cheaper and taste infinitely better to get some Reese's peanut butter cups, a Heath bar, and a Clark bar, those are ordinary chocolates, not specialty ones. And even the heart-shaped Valentine's Day version of the Reese's cups looks too kiddie to serve as a suitable gift, unlike the "fine selection" chocolates that are pretentiously packaged in wrapping paper and have things printed on them like "hand-made in small batches" -- I guess adding "by artisans" would have been gilding the lily.
Everyday products generally get punished if they are low in quality and high in price, so this pattern is hardly pervasive. Still, watch out when the things are made only for sacred occasions.
January 29, 2011
What bad things have not (yet) been eliminated by market competition?
Both economists and other interested observers make a serious but simple mistake when they look around and judge anything that's been a stable feature for at least the short term as having "passed the market test." Hey, if it's so bad, then why hasn't one of the competitors come up with a different and better way of doing things and put the existing sellers out of business?
This is similar to the uber-adaptationist tendency of many who talk about evolution -- if some feature has not been genetically weeded out, how can you say it's maladaptive? It must have passed the test of natural selection or "survival of the fittest."
Yet one of the requirements of natural selection is that there is variability in what individuals do. (The other requirements are that this variability in some trait must be related to variability in fitness, and that the trait can be passed on at least somewhat through heredity.) If individuals don't vary in what they do, then natural selection cannot weed out the more fit from the less fit. Suppose that someone says there's an adaptive value to being right-handed. If there have never been any left-handers, then this argument fails. It rests on the assumption that there have been left-handers over whom the right-handers have prevailed.
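To make the variability requirement concrete, here is a toy simulation of my own (purely illustrative, not drawn from any source discussed here): selection on a perfectly heritable trait moves the population mean only when individuals actually differ in that trait.

```python
import random

def mean_trait_after_selection(generations=30, pop_size=1000, initial_sd=1.0):
    """Toy directional selection on one perfectly heritable trait.

    Each generation, the half of the population with the higher trait
    value reproduces, and offspring copy the parent's value exactly.
    If initial_sd is 0, everyone is identical: selection has nothing
    to choose between, so the population mean cannot move.
    """
    pop = [random.gauss(0.0, initial_sd) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop)[pop_size // 2:]                     # selection on the trait
        pop = [random.choice(parents) for _ in range(pop_size)]   # heredity
    return sum(pop) / pop_size

print("no variation:   ", round(mean_trait_after_selection(initial_sd=0.0), 2))
print("with variation: ", round(mean_trait_after_selection(initial_sd=1.0), 2))
```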
Given how much churning there is in a modern market-based economy, there is often a lack of variability in the strategies of different firms, individuals, etc., because the niche occupied by the competitors may not last long enough for such a variety of strategies to catch on. Particularly when the competitors perceive their niche to be endangered -- then they go into "hold on for dear life" mode and wouldn't dream of introducing a radical alternative to the status quo. Sure it could save them, but if it's a wrong move, they don't just suffer a loss in profits -- they get wiped out for good. They are so close to the brink of extinction already.
With little or no variability in the strategies of these endangered industries, nothing can be said about their persistent features having passed the market test. Indeed, all manner of silly and low-quality things may persist during this stage of their existence.
Take the sellers of CDs. Most places where you buy new CDs (as in "unopened") have a terrible selection -- almost all of it is new garbage, perhaps going back as far as the mid-1990s, which doesn't raise the quality level much. Better music came out from the late '50s through the late '80s or even early '90s, so that's what should be available. At least, a large chunk of what's offered should be from that period, allowing the leftover space for newer junk.
Virtually no music seller does this, though. (Wal-Mart comes the closest that I've seen, although their CD section is so small that it's hard to consider them a major player.) Why doesn't some existing firm or an entrepreneur try out the alternatives -- from, say, 30% up through close to 100% being good music, instead of around 10% or less being good music? When music retailers were on top of the world, selling good music and selling contemporary music were not at odds with each other, so none of them ever had to try out the alternative of selling very little contemporary music and focusing on music that was 20 to 50 years old.
Rather, the heavy emphasis on current music is a carry-over from an earlier period where it was not harmful to business. It did not pass a market test in earlier times, and no one is trying out these alternatives now, so it's plausible -- and I think certain -- that this is a maladaptation. That's one major reason why people are so disappointed and leave without buying anything when they visit a music seller these days. They remember the not so distant past when Tower Records (or whoever) carried everything, rather than a large bulk of it being unlistenable trendoid junk. I remember that, at least as of the mid-1990s, even Best Buy -- not even a specialist in recorded music -- had a gigantic selection, including independent / college radio bands from the 1980s such as the Dead Milkmen, Camper Van Beethoven, the Jesus and Mary Chain, and so on.
I think that focus on good rather than new music would work even better today, since consumers who buy unused CDs are not as price-sensitive as those who buy used CDs or digital music, and they are more focused on quality rather than novelty. But again, everyone is too afraid to explore the range of possibilities since selling CDs is already risky enough -- one failed experiment could put them out of the music-selling business for good.
There are other examples, but for the sake of brevity I'm only going to walk through one, just to illustrate the main point that gets overlooked when observers of the economy praise something that has merely persisted as though it had passed the market test. What examples jump out at you?
January 24, 2011
Contemporary Malthusian impressions 2: Children's lit
The dizzying pace of change that began as we moved from an agricultural to an industrial society has stopped. That's not only true for GDP and median wages, but for more concrete parts of life. Here I introduced the motivation for this series of posts about everyday evidence that progress has leveled off, and here is the first post about crime, housing, and nutrition. Later I noted how our vanishing interest in traveling into the future reveals our deep-down suspicion that it wouldn't look so incredibly different from today's world, so why bother imagining what it might hold?
Now let's look at something that exploded with the industrial revolution but is not constrained by technological sophistication -- children's books. Making them depends far more on their creators' imagination than on how fast today's presses can print them or how much detail today's computers can store. Stagnation in the output of dazzling new doohickeys is understandable, and even stagnation within a specific genre of cultural work -- such as rock music -- but stagnation in children's literature as a whole? Well, believe it:
As part of my leisure research into what books to get my nephew as he nears 3 years of age, I came across these data that I've charted from 1001 Children's Books You Must Read Before You Grow Up (luckily for me, I've got plenty of time left for that). Almost all are from the industrial revolution and after; before then, children's books were not a well-developed genre. I omitted 5 entries scattered through ancient and medieval times, and, just to pick a round starting date of 1800, I left out 7 entries from 1620 to 1799, which lets us see the overall pattern in closer detail without altering the picture.
Once the shift to an industrial society got going, children's books immediately took hold in their own niche. As with technology, there was a decent burst of creativity throughout the second half of the 19th C., and another more impressive surge during the middle 50 years of the 20th C. At the end of this frenetic rise, there were about 10 times as many memorable kids books published per year compared to the very start in the early 1800s, when one or two would come out in a year. However, just as with technology, GDP, and median wages, around or a little after the mid-1970s the rate of increase slowed down and leveled off.
(I wouldn't make too much of the apparent nosedive at the end of the chart: the book came out in 2009 and only had 1 entry from 2008 and 3 from 2007 -- it probably takes the compilers of these gigantic lists several years to sift through all of the recent stuff to tell what's worth mentioning.)
Cultural innovation often reflects larger changes in people's lives, especially ones where they shift from one means of subsistence to another. For example, when people switched from hunting and gathering to settling down and planting crops, this brought about regular famines, epidemic diseases, greater social hierarchy, and wars involving large standing armies. It's hardly surprising that a lot of the mythology and religion that followed centered around the fall of man, battles among the gods, the apocalypse, and so on, rather than keeping on good terms with local plant and animal spirits.
The next shift in subsistence happened when agricultural people took up market-based industrial capitalism. A shift this massive introduces so many changes in people's daily lives that they can't help but write, paint, and sing about them -- not necessarily with the goal in mind of documenting these changes. For instance, the kids book Gorilla could not have been created in a world where there were no latchkey children to grow bored of their TV-babysitter and run off on a wild adventure with their toy gorilla as guide. (Think of how rare it was for the children of farmers and herders, let alone hunter-gatherers, to own toys, dolls, stuffed animals, etc., and how frequently these things play a role in the narratives of kids books.)
At some point the cultural innovation has more or less used up all of the great new stories to tell. During the hunter-to-farmer transition, we saw an explosion in new mythologies and religions -- and then it ground to a halt, with Islam being the last big religion to be invented. The farmer-to-laborer transition saw another burst of creativity, but that has stagnated in its turn.
And while on some level this awareness that culture won't be so spellbindingly fresh anymore is a bit of a downer, look back on the previous stagnation of brand new Big Stories -- people still found plenty in the ancient mythologies and religions to fascinate them for thousands of years, even to this day. And anyone who has made the effort to familiarize themselves with culture made before they were born knows how much fun stuff is already out there, so much that you probably won't be able to play with it all in your lifetime, let alone in enough depth to feel it growing old with you.
January 21, 2011
The persistence and corrosion of the frontier spirit of togetherness
Peter Turchin has synthesized Ibn Khaldun's idea of asabiya, or group cohesiveness, which waxes and wanes, with the Russian ethnologists' idea of meta-ethnic frontiers, where two radically different peoples are pressed right up against each other. For example, they may speak different languages (even from different families), follow different religions, make a living in different ways (like farming vs. herding), and so on. A classic example is the Mongol-Chinese frontier.
When one side is the underdog facing a powerful Other, they slowly begin to band together in order to take on the other side. After this purpose is served, they slowly start to lose that sense of solidarity because the necessity of survival has been taken care of. They become decadent and prone to in-fighting, which sets them up to be displaced by the next group of underdogs who must band together to take on the existing elite.
After spending most of Christmas break back in Maryland, I couldn't have been happier to return to the Mountain Time Zone. When you haven't lived anywhere else, you only slightly notice how atomized, trivial, and petty-status-contest-oriented life is on much of the East Coast (and the West Coast too, from what I experienced visiting my brother when he lived in Los Angeles). It's not a fly-over state vs. bi-coastal state difference, since from what I've heard about my family in Ohio, it's not one big block party. The South (i.e., the southeast) is no better.
The farther east you go, the more the Us vs. Them attitude declines, given how long ago any of those areas was pitted against a powerful Other, namely Indian tribes. There was plenty of that in New England in the late 1600s, but by now the whole Bos-Wash corridor has been insulated from a powerful external threat that would make them feel like they're in a cosmic battle for survival. The West Coast might have been different at one point, but it's now occupied by people from various decadent, in-fighting cultures back east.
Obviously people living in the Plains through the Intermontane West are no longer threatened by nomadic Indian raiders, but it's only been like that for a far shorter time than elsewhere in the country. And sure, there are pockets of the Stuff White People Like crowd. Yet overall it's the only part of the country that feels like it's hanging together. Especially when it comes to white culture, which they're not so ashamed of, and in particular the uplifting rather than the sneering parts of white culture.
To quantify this, I'd come up with a list of things that would qualify as "uplifting white culture," and look at what states or cities show up on a Google Trends search for each of the items. But I've done that a lot just messing around, and impressionistically the Plains-Intermontane West area shows up more than any other region, although Tennessee usually does well too. Just off the top of my head, though, do a Google Trends search (US only) for "Back to the Future," "INXS," and "Dr. Seuss," and see for yourself. It has nothing to do with the central part of the country being mostly white, since so is New England, and they're goners.
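If you wanted to automate that impressionistic check rather than eyeball the Google Trends site by hand (which is all I did), something like the unofficial pytrends package could pull per-state interest for each term. This is only a rough sketch, assuming pytrends is installed and its API still looks like this; it is not how the numbers above were gathered.

```python
# Rough sketch of automating the Google Trends comparison described above.
# Assumes the unofficial third-party "pytrends" package (pip install pytrends)
# and that its API still matches this; I just used the website by hand.
from pytrends.request import TrendReq

terms = ["Back to the Future", "INXS", "Dr. Seuss"]   # the examples above
pytrends = TrendReq(hl="en-US")

for term in terms:
    pytrends.build_payload([term], geo="US", timeframe="today 5-y")
    by_state = pytrends.interest_by_region(resolution="REGION")   # US states
    top_states = by_state[term].sort_values(ascending=False).head(5)
    print(term, "->", ", ".join(top_states.index))
```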
In the past couple weeks, the mainstream media has been charging full steam ahead in demonizing Arizonans for this or that fault of theirs that must have led to the Tucson shooting, rather than admit the obvious fact that it was just some nut. That hatred of Arizonans extends even further back to when they tried to scrub their public schools of hate-on-whitey ethnic studies programs, when they passed tougher legislation to keep out so many Mexican immigrants, etc. It's no accident that all of this standing-up-for-yourself took place in a part of the country where the Us vs. Them frontier lasted a lot longer.
I mean, look at how no one is willing to do anything about the Mexican immigration problem in southern California. A good number of people living there are just deluded or blind, but even those who do notice the problem and complain about it to their friends -- even liberals -- haven't organized anything real. At best they're just getting the fuck out of there -- jumping ship and letting it go under. (Now where is that online tool that lets you see net out-migration and in-migration across the Census districts? At any rate, more people are leaving southern California for the Intermontane West than vice versa.) That's what happens when most of the people living on the West Coast come from apathetic, bickering parts of the country.
Hell, if I were a native Arizonan, I'd be slow to admit even an American citizen from a solidarity-deprived part of the country. A small percentage of Arizonans being such transplants -- OK. But too many and it becomes California, impotent to hold itself together. I do occasionally get glares from people who can just tell I'm from back East -- I might as well have green skin -- and are hoping I'm not the straw that will break the camel's back.
They are right to be overly cautious, since the damage caused to their culture is an exponential function of the concentration of transplants from in-fighting parts of the country. (An exponential function rises faster and faster, not like a straight line or one that quickly plateaus.) Each transplant either could or could not bring that mentality along with them, much like an infectious disease. Also like an infectious disease, they will impact several others, not just one -- either by transmitting the smug, cynical, in-fighting mindset, or just rubbing them the wrong way and eroding their level of trust. And once each of those people interact with other natives, they could transmit the mindset or communicate their falling level of trust, which the others might take seriously and lower their own trust levels in turn ("hey, if what they say is true, we better not be so trusting").
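Here is a purely illustrative toy model of that contagion dynamic (my own sketch, with made-up numbers): seed a population with a given share of transplants, let each affected person rub off on a few random others per round, and the affected share climbs much faster than the seed share does before saturating.

```python
import random

def affected_share(seed_fraction, pop_size=10_000, contacts=3,
                   transmit_prob=0.25, rounds=8):
    """Toy 'low-trust contagion': each affected person interacts with a few
    random others per round and may pass along the mindset (or just erode
    the other person's trust). Illustrative only, not a calibrated model."""
    affected = [i < int(seed_fraction * pop_size) for i in range(pop_size)]
    for _ in range(rounds):
        carriers = [i for i, hit in enumerate(affected) if hit]
        for _ in carriers:
            for _ in range(contacts):
                other = random.randrange(pop_size)
                if random.random() < transmit_prob:
                    affected[other] = True
    return sum(affected) / pop_size

for seed in (0.001, 0.01, 0.05, 0.10):
    print(f"{seed:.1%} transplants -> about {affected_share(seed):.0%} affected")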
Unlike the Stuff White People Like practice of trying to locate a cool neighborhood and live there before it becomes popular, and thus no longer cool, I'm not worried that I'll attract a lot of other people where I came from. Just about all of them, having grown up and lived there, have been more or less swallowed by The Nothing, and so don't care if there's a stretch of the country where people can relate to each other in a more real, less fake way. And even if they did care, their willingness to try bold new things -- other than whatever the new ethnic cuisine du jour may be -- has been eroded by living in such an atomized culture, where they lack the strong social support that any pioneer needs just in case things don't go so well at first.
I do worry somewhat about Californians coming here, though, as there's a lot more of that than transplants from the eastern half of the US. First the refugees began fleeing northward, still along the West Coast, lest they lose their ever important hip cred. Then they fanned out toward Las Vegas, with predictable consequences. It would seem that this migration into the center of the country won't stop anytime soon, as long as the pressure remains in California -- and especially if that pressure to leave increases.
Perhaps some of the Minutemen or just a rowdy bunch of Rocky Mountain boys will go down there and choke off the border, if Californians aren't going to do it. That would still leave the problem of everyone who is already in, though. In general I don't like putting the federal government in charge of solving a problem since they're so far removed, but the states in the Plains-Intermontane West are pretty powerless to slow or end the flow of SWPLs into their land. At least at the federal level, the affected states themselves and anyone they could get sympathy from could get something done to keep the decadent in-fighters in their own part of the country.
Since the main reason people from SoCal want to leave is due to the flood of Mexicans down there, rather than the appeal of getting to talk about heavy metal or country music with their new neighbors, that national solution would only have to stem the tide of immigration, not enact the more Draconian measure of preventing Californians from moving to other states.
It's completely unclear what the net result of all these forces will be. The future will not be boring, that's for sure.
Under or over-estimating the influence of genes, depending on how hierarchical your society is
We ascribe so much of so many differences between people to environmental differences, and I'm not just talking about "don't go there" topics like whether someone is smart vs. dumb, where you have to say it's mostly environmental or suffer social shunning. One girl sees another girl with more lustrous hair than hers -- "I wonder what kind of product she uses?" One guy sees another guy with a man-boob chest -- "He must do a lot of bench pressing."
From what I've heard anecdotally, as well as what I've read about pre-state people's "folk biology," more primitive groups like hunter-gatherers are more likely to see the power of blood, heredity, or some transmissible inner essence -- whether or not they know what genes are.
Is this difference because primitive people live in fairly egalitarian societies, where the environment doesn't vary across people as much as it does in a more advanced society like ours? Their social hierarchy is a lot flatter than ours, and their diet is pretty similar across individuals, because everybody picks the easy stuff (like nuts and berries) and the hard-to-get stuff like meat and organs winds up getting pretty evenly distributed among the group after a kill.
So if one girl has more voluminous hair than another, she probably has some genetic advantage whereby her body doesn't require as many resources to grow great big hair, or her genes make her more resistant to disease that would halt hair growth and rob it of its natural oils. It's probably not due to some difference in environment, like if she were eating a wholesome hunter-gatherer diet while the other girl was a nutrition-deprived vegan.
In a highly stratified society like ours, though, or any settled agrarian society, those are very real possibilities. This must be a decently large part of what's behind the trend in more advanced societies toward down-playing the role of heredity and emphasizing the role of the environment. On some level, it's true: there is more environmental variance in an industrial society, so "nurture" probably plays a larger role in making people different. Not for all traits -- certainly not for ones where the advanced society has eliminated environmental variation, for example in pathogen exposure, by cleaning up the water and giving everyone shots early in life. But overall, it should go in that direction.
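A quick way to see the logic: heritability is just the genetic share of total trait variance, so pumping up the environmental variance mechanically shrinks it. A toy calculation with made-up numbers (ignoring gene-environment interaction and the like):

```python
def heritability(var_genetic, var_environment):
    """Broad-sense heritability: the share of total trait variance due to
    genetic differences (ignoring gene-environment interaction, covariance)."""
    return var_genetic / (var_genetic + var_environment)

VAR_G = 1.0  # arbitrary units; only the ratio matters
for label, var_e in [("flat forager-style environment", 0.2),
                     ("stratified industrial environment", 1.5)]:
    print(f"{label}: h^2 = {heritability(VAR_G, var_e):.2f}")
```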
Hunter-gatherers are going to over-estimate the heritability of traits, and industrial people are going to under-estimate it. Still, it would be nice to survey both kinds of people about a lot of traits and compare their answers to the actual heritability estimates (taken from their own societies), so we can see who's closer to the truth. My guess is the primitive people, since moderns are so completely clueless about so much of the world, being so divorced from our natural state. But it would be neat to see no matter who wins.
January 19, 2011
Lower-status females are more sensitive to signals that the mating season has begun
I can always tell when spring arrives because that's when nubile girls who are ovulating start pumping out their distinctive scent. Apparently only a small fraction of guys can detect this, in the same way that some people have super-sensitive tastebuds, and in the way that some people see colors well while others are color-blind.
It's even easier to tell because at '80s night there are perhaps 100 girls, all college-age, and if even a good fraction of them are ovulating, it knocks me over like a tidal wave when I'm walking up the stairs. However, last week I detected one girl in heat -- confirmed by her behavior in the dance cages -- even though it was still winter. Still, it had warmed up a fair amount since the dead-of-winter temperatures -- freezing, but not in the teens or 20s like it was before.
This made me realize that there must be differences between individual girls in how soon they come out from winter hibernation and get down to the business of looking for a mate and making babies. There's a trade-off: if she's highly sensitive, she will go into mating mode at even slight hints that the season has begun, so she'll never get a late start, although she'll occasionally come out too early (false alarm) and be left out in the cold. If she's hardly sensitive at all, she'll only come out when it's beyond a shadow of a doubt that mating season has begun, so she'll never be in mating mode too early and with little to do, but she'll occasionally remain in hibernation when she should be out and about.
There's no single best strategy to resolve this trade-off -- the more you try to eliminate one type of error, like coming out too early and having little to do, the more you suffer the other type, like sleeping in too late when there's plenty for you to be doing. So a range of strategies will evolve and persist, rather than a single best strategy out-breeding all the others.
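Here's a toy signal-detection version of that trade-off (my own sketch, numbers made up for illustration): a girl reads a noisy warmth cue and decides whether mating season has begun; lowering her decision threshold trades misses for false alarms, and raising it does the reverse, so no threshold drives both errors to zero.

```python
import random

def error_rates(threshold, trials=20_000, noise_sd=1.0):
    """Toy model: the cue averages 1.0 when mating season has truly begun
    and 0.0 when it hasn't, plus noise. She says 'yes' whenever cue > threshold."""
    false_alarms = misses = 0
    for _ in range(trials):
        season_started = random.random() < 0.5
        cue = random.gauss(1.0 if season_started else 0.0, noise_sd)
        says_yes = cue > threshold
        if says_yes and not season_started:
            false_alarms += 1
        elif not says_yes and season_started:
            misses += 1
    return false_alarms / trials, misses / trials

for t in (-0.5, 0.5, 1.5):   # more sensitive -> less sensitive
    fa, miss = error_rates(t)
    print(f"threshold {t:+.1f}: false alarms {fa:.1%}, misses {miss:.1%}")
```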
It follows as a prediction that females who are the most sexualized -- who lose their virginity earlier, have their first kid earlier, have more partners, have sex and kids outside of wedlock, etc. -- will be the early birds, while the least sexualized will be the stragglers. The most sexualized ones are really worried about sleeping through the party once mating season has begun, so they'll take the risk of going into mating mode too early. The least sexualized ones are more worried about jumping the gun -- choosing the wrong partner, not having a great environment to conceive the baby in, and so on -- so they'll take the risk of having their uteruses running idle when they could be put to good use.
And sure enough, that's just what we see. Here is a relevant WSJ article; skip to the chart at the very bottom. Mothers who give birth in January are the least likely to be married, the most likely to be teenagers, and the least educated, whereas mothers who give birth in May are the mirror image. Rewinding to conception, that means the most sexualized mothers conceived in April, which leaves only 10 to 40 days after the beginning of spring for them to get into mating mode. The least sexualized mothers conceived in August, fully four months later -- talk about sleeping through the party! -- when there was no doubt left that mating season had begun and while the weather was still warm enough before the cooling of autumn.
This doesn't mean that the most sexualized mothers took only 10 to 40 days to find a stranger to mate with. They could have already been with someone for years. The point is that even if they've been with someone that long, when during the year do they go into baby-making mode once they decide to have a kid? Damn early.
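For what it's worth, the back-of-envelope arithmetic above checks out; a rough sketch of my own (275 days standing in for "about nine months", the equinox for the start of spring, nothing clinical about either number):

```python
from datetime import date, timedelta

EQUINOX = date(2010, 3, 20)          # approximate start of spring that year
NINE_MONTHS = timedelta(days=275)    # rough figure, not a clinical gestation length

for birth in (date(2011, 1, 1), date(2011, 1, 31),    # least married, least educated
              date(2011, 5, 1), date(2011, 5, 31)):   # the mirror-image group
    conception = birth - NINE_MONTHS
    print(f"{birth:%b %d} birth -> conceived around {conception:%b %d},",
          f"{(conception - EQUINOX).days} days after the equinox")
```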
I wonder if colleges that have more lower-status students have earlier spring breaks than ones with more higher-status students?
January 17, 2011
When do blacks and whites join cultural forces?
Since the mid-1990s, all white American schoolchildren have written every important essay about Martin Luther King, Jr. During this same time, they and indeed whites of all ages have completely unplugged their interest in black culture. Sure, some still kind of follow what passes for rap ("hip-hop") and R&B, but at most that's it. They don't watch all-black TV shows, they don't enjoy movies where one or more blacks have teamed up with whites (exceptions: Forrest Gump, Pulp Fiction, and Die Hard with a Vengeance from the mid-'90s), and they feel tepid about musical collaborations between big white and black artists -- I mean long-term, not one making a cameo in another's performance.
Things were very different from the mid-'70s through the early '90s. In the 1974-'75 TV season, three of the top 10 shows in the ratings were all-black sit-coms -- Sanford and Son, The Jeffersons, and Good Times. Given how small the black population was compared to whites at that time, obviously that was not due to lots of blacks supporting their people's TV shows and whites not caring. They had to win over tons of white viewers in order to dominate the ratings like that. This continued through the 1980s with The Cosby Show and A Different World, and enjoyed a final burst in the late '80s / early '90s with Family Matters, The Fresh Prince of Bel-Air, and In Living Color -- a rare nearly all-black sketch comedy show. In Living Color wasn't as popular as the first two, but it was still well known enough that most white kids knew who Fire Marshal Bill and Homey the Clown were.
In the theaters, black-white buddy movies got going during the same time period. Blockbuster examples from the latter half of the '70s escape me, but 1976 saw the independent Assault on Precinct 13. In 1979 there was Alien, 1980 saw Stir Crazy, and it only grew from there -- the 48 Hrs. series, the Beverly Hills Cop series, the Lethal Weapon series, Commando (remember the black girl who plays Arnold's sidekick), Predator, Aliens, Die Hard, the two Ghostbusters movies, Rocky IV, the Police Academy series, Terminator 2, and so on.
One consequence of the evaporation of interest in the Vietnam War and the newfound obsession with WWII is that even in military movies, where blacks and whites are most likely to unite since it's either that or get mowed down by their common enemy, we don't see blacks and whites teaming up. At best it's the Jewish soldier who joins the WASPs.
Once disco music exploded in the mid-'70s, whites were gripped by music that was created by both whites and blacks. Aside from Prince, blacks never really got back into rock music after Jimi Hendrix, but that still left R&B and pop music. And there the biggest sensation throughout the 1980s and even the early '90s was Michael Jackson, whose music and performances relied on both black and white artists. Still, we shouldn't put too much emphasis on listening to music made by blacks as an indicator of white interest in black culture, since that's just about constant. Whites were into Sammy Davis, Jr. and the black girl groups of the 1960s, even when they weren't otherwise curious about relating to blacks. And during the '90s and 2000s there's been interest in black musicians, despite whites pulling out of all other areas of black culture. Interest in black athletes shows the same pattern.
So what accounts for the rises and falls of the willingness of whites and blacks to mingle socially and culturally? Like just about everything else, it comes down to when the violence level is rising or falling, but here the pattern must be explained by the second-order differences in the crime rate. It's clearly not just whether it's going up or down, since white-black collaboration only begins in the mid-'70s -- not in the late '50s or early '60s when the crime rate starts to soar.
Within the period of rising violence levels, there's an earlier half and a later half. During the earlier half, people see the order starting to come undone and worry about the world going down the tubes. However, they have confidence in the powers that be to fix the problem. This confidence is a carry-over from the previous period of falling crime, when the experts and leaders were apparently doing something right. And yet despite the best efforts of the elite to plan around and correct the problem -- for example, JFK's idealism, Johnson's Great Society programs, and so on into the early '70s -- the crime rates keep soaring. So people lose confidence in the power of experts to solve the problem of growing chaos.
Having withdrawn their faith in faraway experts, people start to realize that they're going to have to cooperate with a lot more people -- including those they would otherwise not prefer to relate with -- if they're going to stand up to the growing force of evil that threatens to undo them all. People will always cooperate with their close kin and perhaps even their neighbors within a very small radius, but an ever escalating level of violence calls for more extensive cooperation than that. Now it's time to reach out to other races. And if different races are going to team up, they might as well get acquainted with each other's culture and join in the fun.
That explains why whites and blacks were rather uninterested in each other's worlds from the late '50s through the early '70s, and why that switched from the mid-'70s through the early '90s. So why then did they voluntarily segregate themselves from each other during the mid-'90s? Because as crime rates plummeted after 1992, people saw their world getting safer and order being restored to the universe. Now that the pressing need of uniting to stop rising violence levels had started to abate, blacks and whites saw little point in staying together -- once the war is over, you go back to the different parts of the world that you came from beforehand, even if you keep some memories of your common struggle.
This voluntary cultural segregation will remain until halfway through the next steady rise in the crime rate, or at least 20 years away.
Is there a similar pattern during an earlier crime wave? Yes. Recall that homicide rates began shooting up at least by 1900, perhaps 5 to 10 years earlier, and peaked in 1933, after which they fell through 1958. As predicted, during the falling crime period of 1934 to 1958, there's basically no white interest in black culture other than the constant focus on black athletes and musicians (not necessarily ones integrated with whites). And during the rising crime period of the turn-of-the-century through 1933, whites only really get interested in black culture during the Roaring Twenties.
In the earlier half of rising crime times, they took some interest in ragtime or Dixie jazz music, but during The Jazz Age they immersed themselves in the larger culture surrounding jazz music, not just listening to the sounds but frequenting speakeasies, taking drugs, protecting one another from violent gangsters, evading the coppers, and so on. Not to mention the Harlem Renaissance. If we are to believe a movie like Harlem Nights, white men and black women would have even gotten it on with each other, if the physical attraction were there. That wasn't apparent during the earlier half of rising crime times, and certainly not during the falling crime times of the mid-'30s through the late '50s.
So it looks like only a small fraction of the time will blacks and whites join forces at a steady level, not just for some short-term fix after which they'll disband. Tallying up the numbers, there was one complete cycle in the homicide rate starting with 1934 and lasting through 1992, or 59 years, and only the second half of rising crime times -- from, say, 1975 through 1991 -- saw strong black-white collaboration. Roughly, then, only 1/4 to 1/3 of historical time will feature blacks and whites taking an interest in each other's cultures and more generally feeling like they're part of a larger team.
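For what it's worth, the arithmetic behind that fraction is just this (counting years inclusively):

cycle_years = 1992 - 1934 + 1          # one full rise-and-fall cycle of the homicide rate: 59 years
collaboration_years = 1991 - 1975 + 1  # later half of the rising-crime phase: 17 years
print(collaboration_years / cycle_years)  # about 0.29, i.e. between 1/4 and 1/3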
The bigger point is that it takes very strong social forces to bring different races together, and if they're not there, even the most charismatic individuals are impotent to make the races play nice with each other. White people may have elected a black-white President, but as it was pointed out during the election season, he was basically white people's hip imaginary black friend that they wish they had. The mid-'90s and after have seen a reversion to the "token black guy" role, in contrast to the mid-'70s through early '90s when black-white interaction was essential, not a frivolous luxury item. Obama's efforts to greenify the society will do nothing to unite the races -- for that, we need a team like Riggs and Murtaugh cleaning up the filthy streets of Los Angeles.
January 16, 2011
WWII remembered, Vietnam and WWI forgotten
This is just an empirical account of which wars our culture remembers right now, although in a future post I'll explain why the current crime level determines what type of wars we remember and are interested in.
When I was a boy my conception of war was Vietnam, although it had been over for at least a decade before my earliest memories of the world at all. The ambush in the jungle, the M16, young guys with bandanas tied across their foreheads, referring to the enemy as "Charlie," and so on. Then once the crime rate started plummeting after 1992, I hardly saw the Vietnam type of war anywhere -- suddenly the prototypical war became WWII, and only American involvement in Europe and perhaps Pearl Harbor, nothing about dropping the atomic bomb, the Rape of Nanking, the Bataan Death March, or the civil wars between fascists and partisans that tore apart many European societies.
Here it is, nearly 20 years later, and it's still as if Vietnam never happened. A point I keep emphasizing in this "safe times vs. dangerous times" stuff is that the period from roughly 1959 to 1991 or '92 is, to a first order approximation, a single unbroken period of history. (There is a second order division that I'll detail and explain later, which sub-divides the dangerous times into the 1959-1974ish period and the 1975ish-1992 period.) So, anything that made a big impact in the earlier part, such as the Vietnam War, will continue to reverberate throughout the culture until the very end of rising-crime times.
Most people who lived through the decade, and certainly those who have no memory of it, have retroactively erased the heavy presence of '60s culture during the 1980s because it doesn't make sense -- and to the modern autistic mind, if it doesn't make sense, it isn't true. And yet it is. (And it does make sense, as long as you bear in mind that the violence level was soaring the whole time.) I'm not talking about bell bottoms or taking over the student union, but the key events of the '60s, the supreme one being the Vietnam War.
So in the interest of historical accuracy, let's remind ourselves how popular the Vietnam War was during the second half of the '70s, the '80s, and even the early '90s, after the war had ended. Here is a list of films about it. In the table listing them, click the box with triangles next to "Country" and scroll down to those made in America. Notice how they begin right away, and more importantly that they continue unbroken through the early 1990s. If we weight the movies by how many tickets they sold, how memorable they are, how well they capture what happened, etc., we'll find that the best Vietnam movies were made during the 1980s. That's certainly true for TV shows, as there was only one -- Tour of Duty -- which began in the late '80s and lasted for three seasons.
Of course, that list leaves out military movies that show a Vietnam-esque quagmire, such as Predator and Aliens.
The memory of Vietnam was still so alive that the plot of Rambo II was all about the POW/MIA urban legend that had begun to gain popularity by the mid-'80s. By roughly 1990, this idea gained so much steam that several Congressional investigations looked into it. At the man-in-the-street level, even people who weren't Vietnam veterans, who didn't have relatives who fought there, etc., were likely to put a POW/MIA decal on the rear window of their car.
I normally didn't care about the decals my dad had on his car windows -- one as a parking permit for work, another for Ohio State, etc. -- but that one was so provocative that after looking it over a couple times, I couldn't help but ask what it meant. He retold the story to me as though he were convinced that it was a plausible legend, neither fanatically sure nor wafflingly skeptical. I saw these decals on other people's cars and occasionally flags with the design outside people's houses -- they were about as common as yellow ribbons, or maybe slightly less so.
During the post-'92 period, and again sticking just with those movies that caught on, part of Forrest Gump focused on Vietnam, and We Were Soldiers was the only major movie to focus entirely on it.
Now check out the list of WWII movies. There are a few memorable ones from the 1959-1992 period -- The Dirty Dozen, Patton -- but not many, and especially not during the late '70s through early '90s. That all changes in the post-'92 period, beginning right away with Schindler's List. Suddenly there's a big WWII movie made every year or two.
To get more quantitative on the difference in interest right now, at my local Barnes and Noble there is only one shelf for books on Vietnam in the military history section, as opposed to nine shelves for books on WWII. Furthermore, just about all of the "New Releases" that hog the top shelf are about WWII, and maybe one about Vietnam. At Amazon's military history section (in books), the Vietnam category has 2,700 items, whereas the WWII category has 30,001 items. Among the best-sellers in military history, 11 of the top 20 are about WWII (more if you count alternate versions of the same book), and 0 are about Vietnam. Going to the top 40, there is a single one (at #36) about Vietnam, but that's it. The rest are about the American Civil War, the current war in Iraq, etc.
I haven't said much about WWI, but it's basically treated the same as Vietnam, not surprising since they were much more similar to each other than either was to WWII. There was Lawrence of Arabia in 1962, a TV re-make of All Quiet on the Western Front in 1979, Gallipoli in 1981, and Johnny Got His Gun in 1971 and again in 1989 when it was featured in Metallica's music video for "One". No big ones from the post-'92 period. In Amazon's military history section, there are 6,557 items in the WWI category and no books in the top 40 best-sellers about it. At my local Barnes and Noble, WWI gets just one shelf.
Given how long our involvement was in Vietnam, and how focal the war was during the tumultuous '60s and throughout the '80s, you would never have guessed that there would come a time when it would be more or less forgotten. And yet here we are.
January 15, 2011
What happened to personal sound recording?
We keep hearing about how creative and do-it-yourself people have become, especially young people, since the digital age began. That's clearly not true -- just look around and see what the result has been, compared to what young people were up to in the late '50s through the early '90s. They invented a hell of a lot more with less dazzling technology.
Obviously it's the qualities of the person, and most importantly of the zeitgeist that influences them -- rather than of the technology the person employs -- that determine whether the outcome is good or bad. Therefore, huge technological change will have almost no effect one way or the other on the quality of the little things that people create. If it were otherwise, you would be able to classify things you'd never seen or heard before with high accuracy into the pre-digital and post-digital periods, as long as you stripped away other giveaway clues. In reality, the terrible-sounding, singles-focused music of the post-mp3 era doesn't sound so different from the junk of most of the '90s. Albums were dead for most of that decade, too, well before Napster.
In contrast, you can tell roughly what stretch of time a song was made in, because the zeitgeist is so powerful in shaping things that songs from two different zeitgeists will sound completely different -- the Rolling Stones vs. Weezer, for example. However, someone who hadn't heard their music would not be able to distinguish the songs of Weezer and Arcade Fire as being on opposite sides of the digital divide, or even which Weezer songs were from the late '90s (pre-Napster) and which from the late 2000s.
As with the professionals, so it is with the average person who has a handful of consumer electronics that allow for creative output. The average person is not super artistic and dedicated, or even an amateur. The average person's pictures are what you see throughout people's Facebook albums, and they look no higher in quality or creativity than what you would've found back then in the average person's photo albums, or among the loose pictures that they kept in envelopes, shoeboxes, or just lying around a drawer.
One way in which people were more inclined to use technology creatively rather than passively during the pre-digital age, though, was making their own sound recordings. Even the lowest quality boom boxes and home cassette decks had some kind of microphone, and so did the higher quality walkmans. In between these, in terms of power and portability, were tape players and recorders about the size of a largish hardcover book. Often these built-in mics were improved on by a not-so-expensive handheld microphone (the ice-cream-cone-sized ones with wire mesh or foam padding at the top) that plugged into the tape player.
Even when CDs became the dominant medium for listening to recorded music, tape players still held on, because even if you weren't listening to recorded music on cassettes, you were still probably recording your own sounds on them, since you couldn't make your own recordings onto a CD. That's why every home CD system through the '90s also had a tape deck (handy, too, for copying a CD onto a tape).
Yet digital music players, whether home or portable versions, lack a microphone (except for the few home stereos that still include a cassette player). The 2010 model of the iPod Touch has a camera that records video, although I'm not sure if it has a microphone to record sound as well. If it does, then it's finally reached the state-of-the-art from 1985, although home stereos that serve as docking stations for the iPod will still lack a microphone.
And even the sound recording that comes along with the video recording function of digital cameras and smartphones is pathetic. On any cassette player, you can record for up to 60 or 90 or however many minutes straight, depending on the type of blank tape you got and what speed your recorder recorded it at. When you take a video with your iPhone, which will include recording the sound, I'm not sure how long you can go, but it's nowhere near that long. Even shorter for the average digital camera. There is simply too much information being recorded visually for it to be that long.
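To put ballpark numbers on that, here's a quick sketch comparing how much data an hour of audio-only recording generates versus an hour of audio-plus-video. The bitrates are assumptions for illustration, not the specs of any particular device:

AUDIO_KBPS = 128    # assumed compressed-audio bitrate
VIDEO_KBPS = 8000   # assumed smartphone-video bitrate, audio included

def megabytes_per_hour(kbps):
    # kilobits per second -> megabytes per hour
    return kbps * 3600 / 8 / 1000

print("audio only: about", round(megabytes_per_hour(AUDIO_KBPS)), "MB per hour")
print("audio plus video: about", round(megabytes_per_hour(VIDEO_KBPS)), "MB per hour")

On those assumptions, adding the picture multiplies the data by a factor of fifty or more, which is why a cassette could casually run 60 or 90 minutes while video clips get capped so much shorter.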
One objection is that microphones have migrated from music or video playing devices to computers, but again I'm focused on what people are doing with the technology. And people who have a mic on their laptop are generally not using it for recording sound -- transmitting, perhaps, as during a Skype chat, but not recording. Lots of people watch YouTube videos, but few record or make even one, let alone several. One reason is surely that you feel so much more nervous recording both your image and your sound than just your sound alone. You feel less of a spotlight when you're talking into the mic on your tape player than when shooting a video of yourself at your computer through a mic-webcam. Thank god phones are still largely sound-only rather than sound-and-image.
The more important objection is -- well, how do we know that people were doing such creative DIY things with tape recorders in the pre-digital age? I'll start a list of typical things we used to do with the mic, although I wasn't even a teenager in the '80s when personal sound recording hit its peak, so please chime in in the comments with anything I've left out.
- Recording the sounds from instruments that you, and maybe a group of people, were playing yourselves. (This was back when people still played music.) Whether you had an electric guitar or a keyboard with a line out, there was sound coming from the amplifier, and a cheap and easy way to record that was to set the microphone near it and capture it on tape. I know there are computer programs that do this now, but they're either too expensive or too complicated for the average person to use.
- Recording your voice singing along to something. OK, so you don't know how to play an instrument -- not even power chords on a guitar -- but who doesn't know how to sing, even if badly? Sometimes you would just sing on your own, other times with the original song coming in through headphones that were hooked into a separate music player. I even did this on-the-go as a child. Whenever I got bored on longish trips to my grandparents' house, a fun and easy way to pass the time only required two walkmans -- one to play a song through the headphones so I could hear the music, and another to record my voice as I sang along. I remember doing that for "Bohemian Rhapsody" a lot because it was longer than most songs and killed more time.
In case you're wondering if I drove my mother crazy singing like that, no, I did this in the hatchback part of the station wagon, not in the passenger's seat -- this was before it was illegal to roam wherever you wanted inside the car -- and usually with a sheet or blanket over me, both to keep outside noise from getting in and to spare everyone else from my performance.
- Recording a one-way conversation for someone who can't be there. My dad was in the navy for several years before and after I was born, and sometimes he'd be away for months at a time. To keep him in touch with how I was coming along, my mother used to record entire cassettes of her talking by herself and also with me. She would prompt me to say what I ate for lunch, what I was doing for fun, to sing the songs I learned in Sunday school, etc. Again the difference in length of sound recordings from tapes vs. digital is crucial. Leaving a voice message on someone's cell phone doesn't allow for any depth, and neither does a sound-and-image recording from a smartphone or digital camera. A YouTube-style video recorded with a webcam could work, but again people feel so nervous when their image is being recorded that they tend not to record for as long and don't open up as much as when they're just talking into the mic. Plus cassettes gave my dad something tactile, not just some file transferred over the internet, and that makes it more special. He kept those tapes for at least 10 years after we made them, and may still have them.
- Recording your thoughts in a "Dear diary" way. It didn't even have to be stream-of-consciousness, since you could always pause and resume, or even rewind and tape over a part you thought was wrong. I don't remember doing this; if I did, it must have only been once or twice and not about anything important. That's because boys are a lot less likely to keep diaries than girls. I know that it was common enough among girls for it to be accepted as natural in Twin Peaks, where Laura Palmer records her thoughts onto tapes that she sends to her psychiatrist Dr. Jacoby. As with the previous example, the length makes a real difference because you can go much deeper by recording onto cassettes than on the memory in your iPhone or whatever. I wasn't old enough when this was popular to get tapes from girls, but I can tell from Dr. Jacoby's reaction that it would have been heaven to listen to a tape of a girl speaking just for you like that.
- Recording other people's responses to your questions, reporter-style. I don't mean when you worked for the school paper, since I assume those guys still do that. I'm talking about whipping out the walkman with a record button on it at the lunch table or going around the cafeteria or schoolyard and goofing around with the question-and-answer format. I know this is still done with digital cameras and smartphones, and here their shorter length is not even a weakness since this behavior doesn't last long and isn't meant to be deep. Still, I think the presence of a camera makes people less likely to let go and participate since they'll get stage fright -- either that, or they won't say much at all, and just make a bunch of kabuki faces to get out their goofiness visually rather than verbally.
What else did people use sound recorders for? I'm especially interested in what role they played in a boyfriend/girlfriend relationship. Mix tapes are copying already-recorded sound; I'm talking about recording something of your own. Did couples use to send each other tapes the way they used to pass notes or send hand-written letters? I don't recall seeing that in movies or TV shows from the time, although if I thought hard enough I could probably remember an episode where Kelly sent Zack a tape of her talking.
January 13, 2011
Justin Bieber
Finally got around to watching the music videos for his singles, as I was curious why he attracts so much hate on YouTube and elsewhere. In today's culture, that doesn't mean anything since all that the vocal critics listen to is total shit as well. In fact, maybe it meant that he was actually good!
Well no, he does stink, but no more than Linkin Park, Ying Yang Twins, and all the other garbage that's come out since the death of pop music in 1991. I had no desire to listen to any of the songs a second time, and about half of them I just stopped half or three-quarters of the way through. That's still farther than I can get into a techno or nu metal song, though.
Here's what I notice as the major differences from the heyday of R&B boy groups (which, like rock's, ran from the late '50s through about 1990), ending with New Kids on the Block... maybe Boyz II Men. I'm using Bieber as the contrast, but it applies just as well to the boy bands of the late '90s and early 2000s.
- He smiles way too much, making him look infantile. Kids who grew up after the crime rate started falling after 1992 have encountered little danger personally, have had no sense that the rest of the society was about to fall apart, haven't had an early relationship or sexual experience, especially one where they got burned at a young age, haven't seen or participated in much drug use, and can't remember visiting or living in cities where parts were taken over by prostitutes, junkies, and crazy homeless people. As a result, they don't ever get that look like they've been through some heavy shit.
Even the youngest New Kid looked more mature than Bieber, if you check the video for "Please Don't Go Girl" (the one with the older black chick, not the one with the young white girls at Coney Island). The 1980s were the most dangerous decade for children and adolescents, so that little dude likely saw and lived through things that Bieber won't until he's at least 30 or 40, and it shows in his facial expressions. (This is also why you don't see Slavic women smiling brightly in pictures -- their societies have been much more violent than Western Europe's for a while, including today.)
- Somewhat related, there's no attempt to look manly. I don't mean doing a Marlon Brando impersonation, but hell at least project that you're in control of the situation and give orders rather than take them. Once more, even the New Kids had the token bad boy song "Hangin' Tough," and you see them with ripped jeans and black leather jackets in several of their videos. Or look at Michael Jackson's videos from the Thriller and Bad albums; he didn't go for a macho image, but you can still tell he's making testosterone.
- He doesn't swing his hips, thrust his pelvis, or do anything with any part of his torso when he dances (nor do his backup dancers). Most of his moves are footwork and hand movements, with whatever leg and arm moves that they require. That's how teenage and college guys dance in night clubs when they try to dance for an audience, although some of the goofballs do pelvic thrusts when they're joking around.
Again even as recently as the New Kids, white guys knew how to move their torso, including their pelvis, whether or not they added some choreographed footwork as well. Since the level of wildness among young people began plummeting in the early '90s, it's as though guys don't want to let go and show their sexual nature while dancing -- the focus of movement goes away from the core, and out toward the most peripheral areas. Distract the viewers with all that frenetic stuff going on in your feet and hands, and they won't even notice that you have a dick. With Elvis, Mick Jagger, Prince, Michael Hutchence, etc., it's hard not to zoom in on their butt or crotch when they dance. Dancers who don't move their core a lot and focus so much on the hands and feet look like wooden puppets, not real people.
- Why don't you ever see him driving a car? Since the early '90s the fraction of 16 or 17 year-olds who have a license has been falling, and within the past few years it reached the point where the median 17 year-old does not have one. I got my learner's permit as early as possible, at 15 and a half, and he turned that age last fall. Plus a music video doesn't have to be that realistic -- show him driving even if he is too young to do so in real life. If you can't drive and don't have a car, you're stuck in childhood -- simple as that. That's why, back when young people had a life, getting their learner's permit and their full license was a rite of passage they dreamed of for years in advance -- "finally I won't have to have my mom drive me around or sponge off my friends who can drive."
You guessed it, even the youngest New Kid is shown driving in the video for "You Got It (The Right Stuff)". They're all cruising around unsupervised, and succeed in picking up some girls. I don't remember which video it is, but the Wahlberg guy is shown driving a motorcycle, and not one of those itty bitty ones either.
Despite all of the signs of the further infantilization of young people that began nearly 20 years ago -- including one video that opens with him and his dorky friend shackled to an Xbox -- there are still some things that are nice to see, especially compared to the rest of the garbage heap of recent pop music:
- He's a high schooler, and when was the last time you saw one of those making music? Hanson? (Miley Cyrus if we count girls.) At least he's unsupervised enough to be a music star instead of drowning in the Japanese cram school environment that North American high schools have come to resemble. And there are lots of teenage girls in his videos -- again, when was the last time you saw that? They can't be beat for giggliness and being incapable of faking their feelings -- very refreshing to take in.
- Most of the dancing in his videos reflects the near total segregation of the sexes during the past 20 years, that is where each person is dancing unaccompanied and generally near people of the same sex. However, there is one where he and a girl he's been pursuing share a slow dance, and she's looking him dead in the eyes and fighting back a smile. Now that might actually go somewhere, unlike the grinding that characterizes the post-'92 era of "dancing near" rather than "dancing with."
- Even though he comes off as infantile and a bit corny, at least he maintains a sincere image the whole time, unlike the general trend of the past 20 years toward mockery, caricature, and meta-ironic dweebery. As far as I could tell, there were no duckfaces or kabuki faces, emo contortions, or other display of smug self-awareness. Neither do the legions of high school cuties in the background -- unlike most pictures that girls that age post to their Facebook, Flickr, and (earlier) MySpace profiles.
Overall, nothing I would have missed if I'd never heard or seen it, but then that's true of pop music in general since about 1991. He does show the sorry infantilized state that young people have been in since that time, but at least he lacks that aggressive smugness that other loser musicians have had, Fergie probably being the worst case there.
Well no, he does stink, but no more than Linkin Park, Ying Yang Twins, and all the other garbage that's come out since the death of pop music in 1991. I had no desire to listen to any of the songs a second time, and about half of them I just stopped half or three-quarters of the way through. That's still farther than I can get into a techno or nu metal song, though.
Here's what I notice as the major differences from the heyday of R&B boy groups (which, like rock's, ran from the late '50s through about 1990), ending with New Kids on the Block... maybe Boyz II Men. I'm using Bieber as the contrast, but it applies just as well to the boy bands of the late '90s and early 2000s.
- He smiles way too much, making him look infantile. Kids who grew up after the crime rate started falling after 1992 have encountered little danger personally, have had no sense that the rest of society was about to fall apart, haven't had an early relationship or sexual experience (especially one where they got burned at a young age), didn't see or participate in much drug use, and can't remember visiting or living in cities where parts were taken over by prostitutes, junkies, and crazy homeless people. As a result, they never get that look like they've been through some heavy shit.
Even the youngest New Kid looked more mature than Bieber, if you check the video for "Please Don't Go Girl" (the one with the older black chick, not the one with the young white girls at Coney Island). The 1980s were the most dangerous decade for children and adolescents, so that little dude likely saw and lived through things that Bieber won't until he's at least 30 or 40, and it shows in his facial expressions. (This is also why you don't see Slavic women smiling brightly in pictures -- their societies have been much more violent than Western Europe's for a while now, including today.)
- Somewhat related, there's no attempt to look manly. I don't mean doing a Marlon Brando impersonation, but, hell, at least project that you're in control of the situation and give orders rather than take them. Once more, even the New Kids had the token bad-boy song "Hangin' Tough," and you see them in ripped jeans and black leather jackets in several of their videos. Or look at Michael Jackson's videos from the Thriller and Bad albums; he didn't go for a macho image, but you can still tell he's making testosterone.
- He doesn't swing his hips, thrust his pelvis, or do anything with any part of his torso when he dances (nor do his backup dancers). Most of his moves are footwork and hand movements, along with whatever leg and arm motion those require. That's how teenage and college guys dance in night clubs when they try to dance for an audience, although some of the goofballs do pelvic thrusts when they're joking around.
Again, even as recently as the New Kids, white guys knew how to move their torso, including their pelvis, whether or not they added some choreographed footwork as well. Since the level of wildness among young people began plummeting in the early '90s, it's as though guys don't want to let go and show their sexual nature while dancing -- the focus of movement shifts away from the core and out toward the most peripheral areas. Distract the viewers with all that frenetic stuff going on in your feet and hands, and they won't even notice that you have a dick. With Elvis, Mick Jagger, Prince, Michael Hutchence, etc., it's hard not to zoom in on their butt or crotch when they dance. Dancers who barely move their core and focus so much on the hands and feet look like wooden puppets, not real people.
- Why don't you ever see him driving a car? Since the early '90s the fraction of 16 and 17 year-olds who have a license has been falling, and within the past few years it reached the point where the median 17 year-old does not have one. I got my learner's permit as early as possible, at 15 and a half, and he turned that age last fall. Plus the video doesn't have to be that realistic -- show him driving even if he is too young to do so in real life. If you can't drive and don't have a car, you're stuck in childhood -- simple as that. That's why, back when young people used to have a life, getting their learner's permit and then their full license was a rite of passage they dreamed of for years in advance -- "finally I won't have to have my mom drive me around or sponge off my friends who can drive."
You guessed it, even the youngest New Kid is shown driving in the video for "You Got It (The Right Stuff)". They're all cruising around unsupervised, and they succeed in picking up some girls. I don't remember which video it is, but the Wahlberg guy is shown riding a motorcycle, and not one of those itty bitty ones either.
Despite all of the signs of the further infantilization of young people that began nearly 20 years ago -- including one video that opens with him and his dorky friend shackled to an Xbox -- there are still some things that are nice to see, especially compared to the rest of the garbage heap of recent pop music:
- He's a high schooler, and when was the last time you saw one of those making music? Hanson? (Miley Cyrus if we count girls.) At least he's unsupervised enough to be a music star instead of drowning in the Japanese cram school environment that North American high schools have come to resemble. And there are lots of teenage girls in his videos -- again, when was the last time you saw that? They can't be beat for giggliness and being incapable of faking their feelings -- very refreshing to take in.
- Most of the dancing in his videos reflects the near-total segregation of the sexes during the past 20 years -- that is, each person dances unaccompanied, generally near people of the same sex. However, there is one where he and a girl he's been pursuing share a slow dance, and she's looking him dead in the eyes and fighting back a smile. Now that might actually go somewhere, unlike the grinding that characterizes the post-'92 era of "dancing near" rather than "dancing with."
- Even though he comes off as infantile and a bit corny, at least he maintains a sincere image the whole time, unlike the general trend of the past 20 years toward mockery, caricature, and meta-ironic dweebery. As far as I could tell, there were no duckfaces or kabuki faces, emo contortions, or other displays of smug self-awareness -- not from him, and not from the legions of high school cuties in the background, unlike most pictures that girls that age post to their Facebook, Flickr, and (earlier) MySpace profiles.
Overall, nothing I would have missed if I'd never heard or seen it, but then that's true of pop music in general since about 1991. He does show the sorry infantilized state that young people have been in since that time, but at least he lacks that aggressive smugness that other loser musicians have had, Fergie probably being the worst case there.
January 12, 2011
Why did crying evolve? An example of academic cluelessness
Here's an NYT article on the function of crying. In an experiment, males who smelled female tears saw a drop in their testosterone level and in general were less sexually arousable than males who smelled a saline solution dribbled down a woman's cheek. The academics interviewed, both those who performed the study and others commenting on it, talk about the function that crying might serve -- for a girlfriend or wife to dampen a mate's sexual interest when she doesn't feel like it. One of the study's authors suggests that follow-up work could reveal a similar effect of male tears on men's aggression levels, and that crying might function to reduce aggression in other males.
It's 100% obvious that these suggestions about the function of crying are wrong, and only a moment's thought would guide you to the correct answer. But let's play naive and point to a couple problems for the theory, and then see if there's some other view of crying's function that resolves these problems.
1) The amount of time that women are not interested in sex, while their mate is pressuring them for it, is vastly greater than the amount of time that women spend crying. If crying works so well, why do they hardly ever do it?
2) As the interviewees point out, a much more effective way for a woman to get a "no" across is simply to say "no." Maybe there's some arguing back and forth after that, but even so it's more effective than crying.
To fix problem 1, is there some other group of people who, when they're digging in their heels, spend a lot of that time crying? To fix problem 2, is there some other group of people who more or less lack the ability to argue back and forth, and so for whom crying would be the most effective way of communicating their displeasure?
Why, yes there is -- and they're called babies. Ever since we left a hunter-gatherer way of life some thousands of years ago, we've become more and more detached from real life, especially after the industrial revolution. One part of that is plummeting fertility rates, and the complete ignorance of human development and human nature in general that comes from not having children, and not spending time around other people's children. So it is possible for dozens of triple-digit IQ people to talk on and on about the function that tears play in an adult female dampening the sexual interest of an adult male, while never bringing up the correct answer.
Now that we've identified who the true beneficiary of crying is, why did tears evolve to dampen a male's sexual interest? Parent-offspring conflict. There will be times when dad wants to make out or have sex with mom, when the baby wants that time spent on him -- feed me, protect me, and so on. The male sex drive is pretty hard to turn off once it's switched on, so the baby needs a reliably strong way to defuse dad's boner, and crying is one surefire way. Why did dad not evolve a defense against crying? Because most of the time the baby is howling for some good reason (howling is a costly and therefore honest signal of need). So if he had kept pursuing sex instead of tending to his kid's needs, the baby would have been at greater risk of dying. Such a tear-proof dad would have been genetically replaced by dads who were at least somewhat responsive to crying.
This also explains why testosterone levels fall, and why a similar study might find that male tears lowered male feelings of aggression. It would have nothing to do with what the NYT article discusses -- like an adolescent male crying to prevent others from beating him up. In fact, that works the other way -- effeminate behavior such as being a cry-baby provokes bullies into beating up on target males. Again it's babies who this is designed for, as they have no verbal way to defend themselves, and are physically outclassed by even a 10 year-old, let alone any older male. It takes someone with brain damage to beat up on a crying baby, even if he wanted to.
And it also explains why females cry more than males: in general, females are more neotenous, or resembling babies. They have more babyish faces, are more hairless, smile and laugh more, and so on. Whatever hormones or genes bring that about will also make them more prone to crying than males.
Adult males also benefit from crying in rare cases, like when they have to signal to someone who's beating them up just to teach them a lesson that the lesson has been learned and the beating should stop. A prediction from the correct view of crying's function is that adult males will be more likely to cry when they face another adult male who doesn't speak their language, since they're then back in the baby's position of being unable to argue back and forth verbally.
When lots of smart people can discuss the role of crying in helping adult females to dampen adult male sex interest, without even bringing up babies -- let alone smacking themselves on the forehead for having been so clueless before -- it makes you doubt the possibility that social science wisdom will be cumulative. Jesus, they've already forgotten the clear-as-day fact that it's babies who benefit from crying, not women.
January 11, 2011
Video games replacing music, indie snob edition
Among young people, video games began replacing music as a personal obsession, means of tribal identification, and so on, during the wussification of society that began in the mid-1990s. By now there's not even a coexistence as one wanes while the other waxes -- young people are just totally unplugged from the music world, and so absorbed in video games.
A commenter at Steve Sailer's blog (I believe named jody) pointed out that kids now line up at 10pm for a "midnight launch" of a new video game, whereas as late as the 1980s they lined up for tickets to a rock concert. They do the same for gadgets in general, not just video games or video game consoles -- look at the dweebs wrapped around five city blocks waiting for the new iPod, iPhone, iPad, etc. No one did that when Sony released the original Walkman, Discman, or any of their many updates, and only a retard would have camped out to secure the first cordless phone. People used to have a life before 1992.
And then there's the petty grab at status when someone says "I used to like that before it was popular" or "I'm not really into mainstream ____, I'm more into indie." Dude, you were like so ahead of the trend -- you totally called it! -- and a little recognition is only fitting. As late as the mid-'90s, I only said or heard this kind of bragging in the context of music and maybe movies. I never heard anyone talk about video games that way, even though all young males were into them. Nor did I hear or read about games that were "obscure," "underground," "hidden gems," "under-rated classics," etc.
As the 3D era of video games began in the mid-'90s, I tuned out because the graphics looked worse than Super Nintendo, the pace and action level plummeted, and "winning" them became more a matter of having lots of free time to waste rather than a high skill level. That's still true. But it should come as no surprise that this is when video games showed the first clear signs of displacing music from youth culture. The Independent Games Festival began in 1998 and named their first winners for 1999. By 2001, the high-profile website GameSpot featured an Indie Games Week.
A Google search for "before it was popular" and any video game system from the 2000s delivers tens to hundreds of thousands of hits. Sometimes the whiner is talking about having known about or liked a particular game or series of games, and sometimes they even mean having used a specific set of weapons or strategies within an already popular game. From the forums at High Impact Halo, this thread on things you liked before they were popular has not only video games but also movies, TV shows, comedians... and more or less no music. So it's not like video games have shoved out all earlier youth obsessions -- just the most Dionysian one. For that matter, you don't hear them talking about other parts of the "sex, drugs, and rock 'n' roll" culture, like "I was into Christy Canyon before porn became mainstream" or "I was smoking crack and wearing Polo shirts before ghetto blacks took them over and made them declasse."
In a recent episode of the All Gen Gamers podcast, Pete Dorr mentioned that he's getting more into indie or obscure games, using that same tone of voice that my friend used (as late as 1994) to explain to his dad that he was moving away from corporate rock and into underground punk. Dorr is a Millennial, born in the late '80s, and again only the biggest of nerds during the pre-Playstation era would have tried to use "into indie video games" as one of their cultural tribal markers. Only a minority of guys in either time period would have used "indie" as a marker at all, but the point is that this minority went from "unknown bands" to "unknown games" as their tribal badge. And today GameFAQs is running a poll on the Best Indie Game of 2010, which tens of thousands of youngish males will respond to.
One silver lining to be found in the replacement of music by video games among young people is the relegation of indie "rock" to the background of culture. Teenagers and college students today couldn't care less about it, so it's doomed -- either they don't have the indie snob gene in the first place, or they do but are applying it to video games rather than bands. I'd ballpark the primary age group of indie music fans at 30-34, with the rest falling into the 25-29 and 35-44 age groups. That's way up from even the mid-'90s, when I was an indie snob about music. The core group was probably college-aged then, although that still left plenty of high school and even middle school snobs. (I began my obsession with the Dead Milkmen in 8th grade, having seen the video for "Punk Rock Girl" on Beavis & Butt-head, and by 9th grade I bought a record player so I could listen to "obscure" music that wasn't popular enough for the label to re-release it on CD.)
Although I was either not yet born or only in elementary school during the heyday of independent rock music (the mid-'70s through the late '80s), I'm sure I wouldn't have minded the indie snobs back then. At least they were listening to good independent music -- sincere, energetic groups like Talking Heads, Echo and the Bunnymen, R.E.M., Camper Van Beethoven, The Replacements... and not passionless meta-ironics like The Magnetic Fields or The Stills. The only thing worse than a snob is one with dorky taste.
January 5, 2011
Fancier home video game consoles did not kill arcades: the case of pinball
God damn what a geeky title. Anyway, a while ago I explained why the death of the video game arcade during the 1990s and 2000s had nothing to do with the rise of 3D graphics and more powerful hardware available on systems you could play at home.
The important thing to notice is that graph of arcade and home console sales that I re-drew from an academic's paper. After recovering from the early-'80s crash, arcade game sales peaked in 1988, so that the first year of the steady decline was 1989. The Super Nintendo was not even available, let alone prevalent. The Sega Genesis was only available as of August of 1989, but its early games were nothing compared to what you could find in an arcade. Anyone who says that, during the '90s, home consoles could replicate the arcade look must not have spent much time in the arcades.
Rather, the death of video game arcades was part of a broader pattern of people withdrawing from public spaces, especially the larger and more carnivalesque ones. During the massive social transition of the late '80s / early '90s, people started spending almost all their time at home, and when they did go out, they were only willing to travel a short distance, to a smaller and tamer atmosphere, and for less time.
It just occurred to me that pinball is a good test case for the sane and silly ideas about why arcades died. Pinball cannot be replicated on a home video game system or any video game system because it is not a video game. It could be simulated, but not replicated. For a while, pinball was very popular, even into the early '90s, with the Addams Family machine the biggest hit of all. During the '90s and 2000s, though, it vanished along with the video game arcade cabinets.
My idea explains that perfectly -- people started hunkering down in general, avoiding the arcade altogether, so everything inside it disappeared, not just the video game machines. Mini-golf courses, roller rinks, pizza parlors, etc., have all died off too, for the same reasons, and they all used to have pinball machines. The silly idea about home systems replicating the arcade experience cannot explain the death of pinball. Again, video games cannot replicate pinball at all -- and it's not as though people started buying pinball machines for home use that were roughly as good as the ones they used to play in noisy public spaces.
Moreover, there were hardly any home video games that tried to simulate pinball -- a handful, but not enough to kill off the real thing. Devil's Crush is the best pinball video game, but it was on the TurboGrafx system, which almost no one had. Nope, people just forgot about pinball altogether and started focusing more on first-person shooters and soap-opera-like role playing games.
This argument also applies to those ticket-redemption games that were popular at Chuck E Cheese's, roller rinks, Putt-Putt, etc. Those are gone in everyday life, and it wasn't because there was some fantastic simulation of them on a home video game system. And they certainly weren't replaced by good-enough versions for use in kids' homes -- anyone who was a kid then remembers how pathetic the mini/portable version of the "shooting hoops" game was.
Kids and adults both agreed, openly or not, that unsupervised public spaces were too dangerous, so kids began staying indoors all day, even if that meant not being able to play skeeball or Final Fight anymore. That behavioral change is what dried up the arcades, not technological progress.
Declining media interest in gays and their issues
More fun with Google's Ngram Viewer. Here's gay, lesbian, homosexual, and queer. After a slow rise in popularity starting in the mid-1970s and a surge in visibility from the later '80s through the PC-related culture wars of the early-mid-'90s, gays and lesbians have started vanishing from the popular mind. I mean, we know they're out there -- we just don't care anymore.
The only term that's still on the rise is "queer," but it's nowhere near common enough to make up for the plummeting emphasis on "gay," "lesbian," and "homosexual." So it's not as though this chart merely shows that the fashionable term for people who like someone of their own sex has become "queer" instead of one of those other three.
How about issues central to the gay movement? Here's HIV and AIDS. Again, falling interest since the mid-'90s. And here's homophobia and gay bashing -- same thing. There were moral panics surrounding these things for about a decade, then people realized that much of the scare-mongering was either bogus or exaggerated, and they stopped paying so much lip service to them.
The only gay-related term that's been steadily increasing in prevalence since the '90s is gay marriage. However, this is a civil rights issue that just so happens to deal with gays, now that the more egregious civil rights matters have been taken care of. In contrast, AIDS, homophobia, even the designator "gay" itself all relate to gays per se -- what makes their identity and behavior different from everyone else.
So, the identity politics part of the gay movement has totally evaporated, as they realized the straight majority had stopped caring. They had no choice but to take up some civil rights issue like gay marriage, which the average person might take the time to listen to.
January 4, 2011
For decades, diabetes writers have been ignoring glucose and insulin
Here is the prevalence over time of glucose, insulin, and diabetes in Google Books' database, since the pioneering work of the early 1920s that made the connection between those three things.
The trend for "diabetes" is steadily upward, even if there are cycles around that trend (each peak is higher than the previous one, and so is each trough). For a while that was true for "glucose" and "insulin" as well, although their growth slows in the later 1970s and their curves peak in the early 1980s. After that, the trend for three decades has been down (the next peak, during the low-carb craze of the early 2000s, is lower than the early-'80s one).
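Just to spell out that peaks-and-troughs comparison, here's a tiny Python sketch of how you'd check it on a year-to-frequency series exported from the Ngram Viewer. The numbers below are a made-up placeholder series, not the real "diabetes" data, so the point is the check itself rather than the output.

    # Check whether successive local peaks (and troughs) keep rising in a
    # {year: frequency} series. The numbers here are placeholders.
    freq = {1940: 0.8, 1950: 1.1, 1955: 0.9, 1965: 1.4, 1970: 1.2,
            1985: 1.9, 1990: 1.6, 2005: 2.3}

    def local_extrema(series):
        """Return (peaks, troughs) as lists of values, scanning in year order."""
        years = sorted(series)
        peaks, troughs = [], []
        for prev, cur, nxt in zip(years, years[1:], years[2:]):
            if series[prev] < series[cur] > series[nxt]:
                peaks.append(series[cur])
            elif series[prev] > series[cur] < series[nxt]:
                troughs.append(series[cur])
        return peaks, troughs

    def rising(vals):
        """True if each value is strictly higher than the one before it."""
        return all(a < b for a, b in zip(vals, vals[1:]))

    peaks, troughs = local_extrema(freq)
    print("each peak higher than the last:", rising(peaks))      # True for this placeholder series
    print("each trough higher than the last:", rising(troughs))  # True for this placeholder series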
So even though our sources of information have taken steadily greater interest in diabetes after we understood what caused it, and especially after it became so common, we are no longer interested in its causes or treatment -- namely keeping glucose levels low by removing most carbohydrates from the diet.
The anti-fat, de facto vegan push by nutrition experts that began in the late '70s has made it more and more dangerous to bad-mouth empty carbohydrates and praise fat and animal products. Bread, cereal, rice, and pasta are clean, while fats and oils are contaminated -- hey, the USDA's food pyramid says so.
If you question what the experts say, or if you disobey their suggestions, it's not as though you'll get arrested -- it's just that everyone else will look at you in disgust like you're a filthy hog who has no regard whatsoever for their own health. They'll downgrade your purity level in their minds and try to shun you into joining the (strangely obese) carb-scarfers' brigade. And if you're someone writing about diabetes, obesity, and other diseases within the larger cluster called Metabolic Syndrome, you can't point out the obvious solution without suffering the same consequences. Even if you get through to someone, how likely is it that they'll keep eating low-carb meals?
As I pointed out earlier, most people don't feel like sticking out, so only a small group of people who don't give a shit what a bunch of busybodies think will be able to stay on track. The only solution is to change what the experts say is clean vs. dirty to eat -- otherwise people will balk with anxiety about being seen as someone who eats stuff that the smart men in white coats have told us is dangerous to our health. Academia is one of the places that is protected from market forces, so all sorts of stupid, crazy, and harmful ideas can go on and on in popularity. We'll just have to wait for fashion to change among the experts and enjoy the public health boost we'll get as a result, while realizing that someday the fashion could swing back in the fat-phobic, veganist direction and ruin us again -- and that there would be little we could do about it if it did.
January 3, 2011
Trying to make minivans manly
Here's an NYT article about car companies trying to rid the minivan of its association with boring life, in order to win over fence-sitting car-buyers. The car-makers are afraid that if they try to make minivans look cooler, they'll have to sacrifice the various features that make the inside look like the bridge of the starship Enterprise -- and what helicopter parent could shelter and cushify their children without them?
Back in 1984 when minivans were born, the endless cycle between parenting styles was still in the "let them know hardship" phase. As a result, they weren't so god awfully humungous, and therefore weren't so dorky looking -- on an absolute scale anyway, though of course they were next to a Pontiac Firebird or even a Toyota Celica.
Compared to the short wheelbase 1984 Dodge Caravan, the 2011 Chrysler Town & Country is over two feet longer, half a foot wider, and a third of a foot taller. Crudely (pretending they're boxes), that amounts to a 36% increase in the volume taken up by the damn thing. Not surprisingly, the first one looked about as sporty as a minivan ever could.
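If you want to sanity-check that figure, here's a quick back-of-the-envelope sketch in Python. The base dimensions are my own rough assumptions for the short-wheelbase '84 Caravan rather than spec-sheet numbers, and the growth amounts are the ones quoted above, so treat the output as a ballpark rather than the exact 36%.

    # Back-of-the-envelope check on the "36% bigger box" claim above. The base
    # dimensions are rough assumptions for a short-wheelbase 1984 Dodge Caravan
    # (in feet), not exact spec-sheet numbers; the growth amounts come straight
    # from the sentence above.
    base_l, base_w, base_h = 14.7, 5.8, 5.4   # assumed 1984 Caravan SWB dimensions, feet
    d_l, d_w, d_h = 2.2, 0.5, 0.33            # over two feet longer, half a foot wider, a third of a foot taller

    old_volume = base_l * base_w * base_h
    new_volume = (base_l + d_l) * (base_w + d_w) * (base_h + d_h)

    print(f"crude box volume increase: {new_volume / old_volume - 1:.0%}")
    # With these assumed dimensions it prints roughly 33% -- the same ballpark
    # as the 36% figure, which depends on the exact spec-sheet numbers you use.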
Even station wagons looked sleeker when the zeitgeist favored racecars over boat-cars. Our family-mobile growing up was an early '80s Subaru GL that looked like this -- hardly a rival to a Porsche 944, but still light-years away from the "Lezbaru" design of wagons from the '90s and after. When the tide turned against sports cars during the '90s, as part of the wider wussification of the culture, it was no longer possible to have a decent-looking car that would satisfy overprotective parents, who believe that each of their kids needs their own studio apartment inside the family car in order to be safe, comfortable, and on track toward early acceptance at an Ivy.
January 1, 2011
When do young people appear in obituaries?
The NYT recently ran an obituary for a 28 year-old anorexic model, and you can't help but be struck by the age and the year. Lifespans have been increasing for a long time now, and the homicide rate (a good proxy for how reckless young people are) has been plummeting for nearly 20 years. The most famous young deaths came back when wildness was soaring -- "the day the music died," Jimi Hendrix, Jim Morrison, Len Bias. It would seem strange if the media focused more on young deaths when they were less likely to happen, and in less dramatic fashion.
I searched the NYT for "dies at ___" for numbers 1 through 29 and tallied what years these young obituaries appeared in. (It doesn't matter if you restrict it to adults, adolescent minors and adults, or everyone under 30.) The chart below shows how many of these obituaries there were in a given year, shown as a 5-year moving average:
First, there aren't many young obituaries in any year: the moving average stays under 3, and in the raw data the highest was 6 in 1937. Still, there are clearly periods when recognition of young deaths rises and others where it falls. The pattern that jumps out is the relation to the homicide rate -- strangely enough, young obituaries rise when the homicide rate falls, and they disappear when the homicide rate soars. The correlation between the 5-year moving average of young obituaries and the homicide rate, from 1919 to 2006 (where we have data for both), is -0.48.
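For anyone who wants to replicate the number-crunching, here's a minimal sketch of the procedure: tally the young obituaries per year, smooth with a centered 5-year moving average, and correlate against the homicide rate. The two input series below are made-up placeholders (you'd have to fill in the real tallies and homicide rates), so the printed correlation is not the -0.48 reported above.

    # Sketch of the tally -> 5-year moving average -> correlation procedure.
    # Both input series are hypothetical placeholders, not the real data.
    from statistics import correlation, mean  # correlation() needs Python 3.10+

    young_obits = {y: y % 7 for y in range(1919, 2007)}                 # placeholder yearly counts
    homicide_rate = {y: 5 + (y % 11) * 0.5 for y in range(1919, 2007)}  # placeholder rate per 100k

    def moving_average(series, window=5):
        """Centered moving average over a {year: value} dict."""
        years = sorted(series)
        half = window // 2
        return {years[i]: mean(series[y] for y in years[i - half:i + half + 1])
                for i in range(half, len(years) - half)}

    smoothed = moving_average(young_obits)
    common = sorted(set(smoothed) & set(homicide_rate))
    r = correlation([smoothed[y] for y in common], [homicide_rate[y] for y in common])
    print(f"correlation between smoothed young obits and homicide rate: {r:.2f}")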
There was an initial rise in young obituaries in the late '20s and early '30s, but most of the rise and the high plateau lasts from the mid-'30s through most of the '50s, a time of plummeting crime. Once the crime rate starts to soar in the late '50s, recognition of young deaths falls off and stays low throughout the high-crime era, up through the '80s. It only shoots up again once the crime rate starts falling in the 1990s, and it has stayed high through today.
That's the opposite of the pattern for the focus on young deaths in other areas of culture. Horror movies from high-crime times make you feel like it could really happen, while those from low-crime times (especially the '50s and 2000s) are too goofy or exaggerated to feel real. They're more like torture porn, where you feel like you're watching from a safe distance, not right in the middle of it all. Certainly the pop music of low-crime times puts little to no emphasis on living fast and dying young. Death and grieving used to be common in TV shows of high-crime times, even children's cartoons, but that's evaporated during low-crime times.
But an obituary is so much more formal and solemn than a movie or song, and my guess is the obituary writers don't want to romanticize or glorify young deaths when they're a more and more common sight. Only when the crime rate plummets, and people see their lives as lasting much longer and therefore being worth much more, do the obituary writers feel that young deaths are worth dignifying -- unlike in high-crime times, when people perceive their world to be on the brink and therefore that life is cheap.