Pursuing the theme of how, during the past 20 years of falling-crime times, children have lost their exposure to scary things while growing up, I thought about what some less obvious examples might be. What's an obvious example? When I was 6 or 7, my babysitter (a cool high school guy) let us watch The Terminator, and around that time I used to check out Aliens from the public library every week. * I don't remember if I even needed my parents to use their card for me, but if so, they were cool with it. That would never happen now.
But scariness shows up even where you'd least suspect it, like in popular music. Even if it wasn't the norm, there were enough spooky-sounding hit songs that they still stand out in my memories of music from the pre-adolescent stage of life, when you're not even trying to pay attention to music. When it's haunting, you can't help but pay attention. Off the top of my head, here are the ones that made the strongest impression (all are likely on YouTube):
- "Bette Davis Eyes" by Kim Carnes. This one haunted me throughout my childhood because my dad played the single over and over and over. It was one of the handful of contemporary songs that he liked (he's stuck in the mid-'60s), and he played it on car rides for at least 10 years after it came out. Synthesizers are inherently creepy because they're close enough to organic instruments that our mind doesn't treat them as non-musical sound, yet not close enough to pass for the real thing. They're in the "uncanny valley." Add to the synth line her raspy voice and mysterious femme fatale lyrics -- "all the boys think she's a spy" -- and you've got a certain spooker.
- "Private Eyes" by Hall & Oates. Another one I recall from endless repetition as my mother played the tape during every car ride to daycare when I was 2 or 3 -- it's one of the few vivid memories I have from such an early age. Here it's not so much the instrumentation that's freaky, but the chorus' stalker vibe -- both the lyrics and the tone of voice -- is enough to make you worry.
- "Somebody's Watching Me" by Rockwell. Seriously, what was it about eyes in '80s songs? (Though I don't recall "Eyes Without a Face".) I didn't hear this one so often, probably just on the radio in the car or when my parents were watching MTV, but those few times were enough. There's another creepy synth line, a similar sound to one of the background vocals, and the paranoia that comes through in the lyrics and tone of voice.
- "Dirty Diana" by Michael Jackson. Oddly enough, I have little memory of the songs from Thriller (or else "Billie Jean" would get a mention), but when Bad came out I played the tape all the way through just about every day. This song was definitely the most nerve-wracking because it not only had the synth line and stalker lyrics, but the harder guitar riffs and his voice play off against them to create a palpable tension. It feels like the creeping-from-behind synth line and his panicking voice are caught in a physical struggle for dominance, accentuated by the drum machine hits that sound like one of them striking or kicking the other.
- "Rhythm is Gonna Get You" by Gloria Estefan & The Miami Sound Machine. Every time I heard this song, I thought "This is devil-worshiping music." When the background voices shout "oway-oway, oway-owah" and "woo!" you feel like you're eavesdropping on a Satanic chant during some ritual sacrifice. Except you're not just overhearing this unobserved -- the lyrics are all about how they see you and this spirit or ghost or whatever "the rhythm" is, is out to possess you and there's nothing you can do to stop it. I think I saw the video once or twice, and that only confirmed my fears.
- Honorable mention: "Finally" by CeCe Peniston. Yeah that's right, laugh it up. It turns out that it's an early '90s incarnation of "I'm So Excited," and she's singing about how crazy she is for some guy she just met. But my mind never registered the verses, only the chorus where her voice sounds like she's paralyzed and frightened by her lack of control over her surroundings. In retrospect, I see how she used that to suggest how powerless she felt because of love at first sight or whatever, but at the time I imagined the singer had just been raped or beaten or mugged, or witnessed these things while walking down the street or in her own neighborhood -- one of those "you think it couldn't happen here" moments.
I'm sure I've left a lot out in this cursory look. What else comes to mind?
I don't remember hearing anything like these after the '91-'92 social transformation. Sure, there's the angsty emo whining and screaming that the late-career Smashing Pumpkins and My Chemical Romance brought out, not to mention a bunch of growling rejects churning out nu metal, but that's not haunting at all. There's no suspense or anxiety or paranoia. The closest we've gotten to that sound in the past 20 years was probably "...Baby One More Time" by Britney Spears, one of the few mega-hits from this period that I liked at the time and still do.
My impression is that the female singers are better at these songs, most likely because they're more easily frightened to begin with and more given to expressing that fear, worry, and powerlessness to others so that they can get help.
In any case, there's another source of unease that the Millennials have been protected from -- spooky pop songs. Unfortunately that won't change as they grow up, because it's not merely due to their helicopter parents keeping such songs out of earshot -- the songs are just not getting made in the first place. This is a general pattern: parents are most over-protective when times are the safest. It sounds paradoxical but is not, as their underlying paranoia has two effects: 1) it keeps them and their families locked in their homes, draining the pond of potential victims for street-roving criminals, who then move on to other business; and 2) it makes them hover over their kids at home.
* For at least several months after first seeing it, I was convinced that the queen alien was hiding under my bed. Several times as I faced the wall before falling asleep, I had vivid illusions that she had sprung up from underneath and towered all the way up the wall, somewhat like the scene where she abducts Newt. I toughed it out and kept renting the movie, though once I left my door open just in case I had to scream to my mother that I was being taken away. That provided me with little comfort, as I could then see into her room and imagined that all the shadows waving around her bedroom walls -- surely just tree branches from outside -- were warrior aliens who had infiltrated our house and were probably already cocooning her. Yet I couldn't stop watching that movie. When you sense the world is dangerous, you want to toughen yourself up to it by watching scary movies, just like you'd spar in a gym before a real boxing match or street fight.
September 30, 2010
September 27, 2010
Follow-up on lack of innovation in children's culture
Now that my nephew is 2 ("...and a half!"), I've been planning out what to get him for birthday and Christmas presents, rather than not worrying since he won't remember them and the stuff will get thrown away before long. So I've been trying to keep my finger somewhat on the pulse of kids' culture. Well, more so than before -- i.e., a few times a year vs. almost never.
Last year I gave my impression that there's a real lack of innovation in toys compared to when I was a child in the '80s. It's bad enough that the quality is degraded, so that most of it is just cheap crap from China, but I'm talking about the stages of production far before the thing rolls off some assembly line. There's just no thought put into the design process anymore. And it would be bad enough if the flow of good new ideas had stalled out, but it's worse still because they've wiped out many of the once common good ideas.
You see that especially in the "gross-out" category. I know there have been recent re-issues of the old Madballs toys and Garbage Pail Kids cards, but these must be for nostalgic adult collectors, because I never see them anywhere toys are sold, including second-hand places like the garage sales I pass by or the Goodwill stores I visit now and then. And in kid-heavy public spaces I don't see kids playing with these things. Similarly, you only see adults wearing shirts with a Decepticon logo, not kids.
Impressions aside, here's some harder evidence that the excitement factor in kids' culture has been plummeting since the early-mid '90s just as it has been in the teenage and adult culture. Many toy lines are made along with a cartoon to promote them, and the toy geeks at Wikipedia have put together a template of animated series based on toys. I'm sure this captures the big picture since its makers are obsessive.
No surprise that there's a ton in the '80s, and that most are unique series -- that was the golden age for toys and cartoons. A few of the '90s cartoons are actually from the pre-wussification period of 1990 and the first part of '91. And even the ones that came after the society's decline in wildness are mostly spin-offs or continuations of series begun in the '80s. There are even fewer entries from the 2000s, and even less variety among them. Of the 13 listed, nearly half consist of the 2 variations on G.I. Joe and the 4 on Transformers, both of which originated during the golden age. There are only four fundamentally new series from the past decade: A.T.O.M., Max Steel, Hot Wheels Battle Force 5, and Strawberry Shortcake.
This is not an effect of video games replacing TV cartoons since most people still watch a lot more TV than play video games, especially pre-pubescent kids. They may be spending some time here and there leveling up their dorky little Pokemon characters, but they're not yet in the adolescent (and sadly, adult) stage of being glued to their computer or home console for a quarter of a day playing Halo or World of Warcraft.
I also doubt that much of this change is explained by the size of the child population, either in absolute or fraction-of-the-whole terms. When the Baby Boom was in full swing, there were hardly any cool new toys and cartoons coming out. Sure, they had Davy Crockett hats and Rocky and Bullwinkle -- and all of that was much cooler than what's been for sale during the past 20 years -- but it was nothing compared to the explosion in kid culture during the '80s and very early '90s. There was also a mini boom that gave producers more kids to cater to during the late '90s and 2000s, yet with no effect on how exciting their products were.
Rather, it looks like rising-crime times -- and especially when the violence rate is near its peak -- cause people of all ages to want to indulge in exciting activities. For teenagers or adults, that's rock music, which died off in the '90s. For kids, it's action-packed cartoons and trailblazing action figures. (I've also written here before about the decline in kids' gross-out rhymes, games of truth-or-dare, and songs like "99 Bottles of Beer.") When life looks short and dangerous, why not live it up while you still can? When life looks long and safe, you might as well go to sleep and coast through it.
You may have already guessed it, but yes, I am planning to do my best to protect my nephew from the overly protective culture of today. I'm going to get him some Garbage Pail Kids cards or a My Pet Monster stuffed animal when he hits elementary school, plus that freaky Dr. Seuss Halloween TV special. For now I'll ease him into it with a DVD set of Faerie Tale Theatre and some original Pound Puppies, Wuzzles, or those construction vehicle Transformers. And I'd better set aside an extra Nintendo, Super Nintendo, or Genesis so I can inoculate him against the nothing-to-do Grand Theft Auto games, GoldenEye clones, and soap opera role-playing games.
I realize that most of these efforts will have little effect, but it's better than contributing to the problem by getting him DVDs of Go Diego Go! Plus there's an off chance that he'll dig a more rambunctious cartoon like Heathcliff or scarier-looking toys like the Sectaurs. Hell, even the Teddy Ruxpin stories have more menace and evil in them than the so-called action shows of today.
Marketing vegetables as junk food
Adults have given up trying to get young people to eat vegetables the way they'd be prepared for a home-cooked meal, so they're trying to re-brand them as hip junk food.
You can't blame kids or anyone else for not liking vegetables -- left alone, they taste bland and hurt your stomach. That's why people with any sense, from hunter-gatherers through your grandmother, cook and flavor them. It's only within the fat-phobic period of history -- more or less the past 30 years -- that modern people came to equate "fresh" with "pure" and thus "healthy." Cooked foods -- eww, that's like bacon, french fries, and other things made with animal fat. Thus, raw = good, as long as it's not an animal -- but who wants to harm their health by eating that stuff anyway, right? That's another blight on civilization caused by fat-phobia: the rancid salad bar.
In reality, eating fresh vegetables, washed or not, is a quick way to get food poisoning, not just from the native poisons, toxins, and irritants that serve as the plant's defenses, but also from pathogens like E. coli and salmonella. As I've pointed out before here, aside from baked chicken (rarely cooked long enough), the majority of food poisoning cases come from non-animal foods.
And given how nutritionally lacking vegetables are compared to animal products, you often have to heap salt on them to make them worth eating. Assuming you do these things -- cook them and flavor them -- vegetables can taste quite good. Deep frying baby carrots isn't the right way to do that, since that's not so different from potato chips -- lots of starch coated with oxidized vegetable oils. (And if you're looking for something chip-like, just eat pork rinds instead, especially the kind with the skin still attached.)
I eat more vegetables, and from a wider variety, than the typical vegetarian or vegan does, since studies that follow their diets show that they mostly rely on grains, legumes, and pulses -- and, for vegans, boatloads of fruit. Contrary to their propaganda of being leaf-eaters, they're really nothing but sugar-suckers and grain-munchers. And they get away with this because the average person sees any non-animal product as a "plant food."
How do I do it as a poor grad student? Buy canned vegetables and look for sales on the bottled kinds. These are much cheaper because they're not high-maintenance -- they just sit on the shelf -- whereas the supermarket has to regularly rinse the raw ones, keep the flies off, and throw them out before too long. Better yet, the shelf vegetables are already cooked somehow: spinach is boiled and salted, cabbage is pickled into sauerkraut (no more stomach pains), peppers and tomatoes are fire-roasted, and so on.
The only way to get young people to eat more vegetables is to play up their savory side, and make sure they're detoxified, but don't expect to see that from the salt-phobic nutrition experts who worship bland salads.
September 21, 2010
Did clueless economist types help sink ancient societies as well?
Nassim Taleb, in his new afterword to The Black Swan and throughout his website, emphasizes how much the expert economists pushed the ideology of over-leveraging that played a key role in the current crisis. Why keep your money sitting around unproductively when you can optimize your rate of return by investing in the stock market or whatever? Well, because if something really bad happens, you won't lose all your money.
Taleb's "barbell" strategy for investing shows how to make yourself robust to catastrophes. Rather than investing 100% in medium-risk instruments, invest a large amount, like 80%, in the safest instruments such as cash, and the remaining 20% in very high-risk / high-return instruments. The average risk could be the same in both cases, but if something really bad happens, the person who has 100% invested will lose everything, while the barbell person will only lose that 20% and still have 80% in cold hard cash. The supposedly inefficient choice of stuffing the 80% under the mattress, rather than using it to borrow a ton and bet it in the stock market, is actually a form of insurance or redundancy -- just in case.
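To make the arithmetic concrete, here is a minimal sketch in Python with made-up numbers -- the 5% and 25% returns are illustrative assumptions, not anything Taleb prescribes. In a normal year the two portfolios earn the same, but in a crash year that zeroes out every risky holding, only the barbell is left standing:

```python
# Toy comparison: all-in medium risk vs. an 80/20 barbell.
# All return figures are illustrative assumptions.

def portfolio_value(cash_frac, risky_frac, risky_return):
    """Value per $1 invested: cash holds steady at 1.0, while the
    risky slice gets multiplied by (1 + risky_return)."""
    return cash_frac * 1.0 + risky_frac * (1.0 + risky_return)

# Normal year: medium-risk instruments return 5%, high-risk ones 25%.
print(portfolio_value(0.0, 1.0, 0.05))   # all-in:  1.05
print(portfolio_value(0.8, 0.2, 0.25))   # barbell: 1.05 -- same average return

# Crash year: every risky instrument goes to zero (a -100% return).
print(portfolio_value(0.0, 1.0, -1.0))   # all-in:  0.00 -- wiped out
print(portfolio_value(0.8, 0.2, -1.0))   # barbell: 0.80 -- still standing
```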
So, by pushing the idea of putting everything to use rather than letting it sit idle, which appears sub-optimal, economists helped to make our society more fragile. Taleb notes that the three major Abrahamic religions all have some form of a ban on debt, and that this cultural practice made their societies more robust to large negative shocks than if everyone took 90% of their money, used it to borrow 1000 times as much, and put it to even medium-risk uses. We should relearn the wisdom of the ancients, he says.
But aside from their scorn for debt and usury, were the ancients so free from the overly optimizing expert economist mindset, where idle uses are not seen as a source of redundancy that will save us when the shit hits the fan, but instead as an inefficient mistake that needs correction?
In War Before Civilization, Lawrence Keeley discusses how bands that raid each other tend to move apart after really nasty raids, leaving a buffer zone in between their territories. This keeps them from bumping into each other and setting off another round of brutal raids, though at the cost of not giving this land the good ol' slash-and-burn and using it to cultivate food. However, Keeley says that when agrarian societies begin to form -- the sedentary agricultural groups, not the nomadic slash-and-burn groups -- the opportunity cost of this buffer land becomes much greater. After all, by leaving it idle they are foregoing a lot more food because agrarian methods of cultivation are a lot more productive per unit area compared to the more primitive slash-and-burn method.
The tendency then is to leave less and less of this land idle and to incorporate it into the main territory and put it to use for growing crops. I mean, who needs that worthless buffer zone anyway? Even if we need some buffer, does it really have to be that big? Surely it's more efficient to whittle it down to almost nothing and replace it with wheat fields to feed ourselves. You can just hear the ancient version of the clueless economist having a consulting lunch with the local headman.
The analogy to debt vs. savings is clear. Those pre-agrarian groups were practicing Taleb's barbell strategy -- keep a big chunk of land idle as a form of insurance that will save our group (and the neighboring one) from going extinct in the event that we crossed paths and started raiding each other again. Use the remainder of the territory for more productive uses. But those settled agrarian societies fell into the trap of equating idleness with worthlessness, and they all but eliminated a key source of redundancy. Now when some disaster strikes, they literally have no buffer to protect them.
What if a band of violent nomads comes storming through? No time to prepare -- once the nomads reach your territory, they've already reached highly valuable stuff to steal, and they've already reached people to kill or abduct.
What if there's another settled agrarian society just across from you, again with little or no buffer? Well, you're both in the same Hobbesian trap of wanting to raid before you're raided yourself, except now it's more like protracted warfare rather than raid and counter-raid. You wouldn't feel so nervous -- correctly -- about their presence if the two of you were separated by a buffer zone.
Hell, what if there's a deadly epidemic burning through the neighboring peoples and headed in your direction? Once that disease reaches your vast buffer zone, where population density is nearly 0, it hits the opposite of the crowd environment where it thrives. Thus, it is far less likely to reach you and kill off most of your group. Forget letting infected outsiders get close to your core territory as long as they sit in quarantine -- don't even let them within 10 miles! Mind you, this epidemic doesn't necessarily have to be one that primarily affects humans. It could primarily affect, say, rats and the fleas they carry -- like the Black Death -- and still spell your doom. Or some infectious crop disease could spread from the next agrarian state over to your crops. Or some infectious livestock disease could spread to your animals.
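That intuition matches the standard branching-process result from epidemiology, sketched below: each case causes a Poisson-distributed number of new cases with mean R0, and R0 roughly tracks contact rate, which tracks population density. The specific R0 values are assumptions for illustration, but the qualitative point holds generally: once R0 falls below 1 in the sparse buffer, every chain of transmission is guaranteed to fizzle out.

```python
import math

def extinction_prob(r0, iters=200):
    """Probability that a transmission chain starting from one case
    eventually dies out, in a branching process where each case causes
    Poisson(r0) new cases. Iterates the fixed point q = exp(r0*(q-1))."""
    q = 0.0
    for _ in range(iters):
        q = math.exp(r0 * (q - 1.0))
    return q

# Assumed numbers: contacts scale with density, so R0 might be ~2 in a
# dense settlement but ~0.5 in a nearly empty buffer zone.
print(round(extinction_prob(2.0), 2))   # ~0.2 -> 4 in 5 chains take off
print(round(extinction_prob(0.5), 2))   # 1.0 -> every chain dies out
```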
The convincing models of state breakdown highlight how the society first makes itself fragile from within, for example when the elites lose solidarity with one another and are on the verge of civil war. (Turchin summarizes this family of ideas from Ibn Khaldun, Goldstone, Collins, and others.) Still, there is often some coup de grâce that comes from an invasion by more solidaristic people -- the Germanic barbarians who overran Rome, the Mongols who swept over much of Eurasia, and so on.
So buffer or no buffer, the most important dynamics are those within a society that tear apart the social fabric and lead up to civil war. However, if you're at this point, wouldn't you like to have a large territorial buffer and "only" go through a bloody civil war, rather than have no buffer and be conquered by outsiders on top of it? England is fortunate to enjoy island status (minus border raiding with Scotland), so it never had the choice to shrink its buffer down to zero in the name of productivity. After the Wars of the Roses or the later 17th C. Civil War, at least the English weren't invaded, pillaged, and butchered by barbarians the way the Roman or Khwarezmid empires were.
I wonder if this is a big part of what makes island states special. People focus on the territorial buffer zone that island states have and landlocked states do not, but they speak about that fact as if it were determined by geography. Instead, it comes down to the conscious choices of decision-makers to shrink their buffer zone down to zero in order to settle and cultivate it, rather than "irrationally" let it sit idle. An island state, then, is like a person who has a big chunk of their wealth locked up in a cash trust fund that they can't access. They may be getting a lower rate of return for the first 9 out of 10 years, but when the whole system blows up in the final year, they'll be OK, while those who put most or all of their wealth to good productive use in the stock market will get wiped out.
So it looks like we have to go all the way back to slash-and-burn horticulturalists or even hunter-gatherers to find groups who appreciated the insurance value of idleness. Once settled agrarian societies sprang up, they already began to worship efficiency over robustness, even if they weren't wholly devoted to this ideology, as shown by their proscriptions against debt. In the modern world, of course, we've exposed ourselves even more to black swan events. This greater vulnerability of ours is what spawned the welfare state, whose main function is to try to protect people from this excessive fragility, and which is growing more and more unstable -- but that's a story for another time (maybe tomorrow).
September 20, 2010
Other signs of modernity = Asperger's
Aside from the proliferation of acronyms, which I've covered before, what are some other pretty clear signs that modern societies select for a personality that sits considerably further in the systematizing Aspie direction than it did for all of pre-modern existence? Here are a few others that come to mind.
- Top Ten lists
- Numerical ratings for things where only vague qualitative information is available. E.g., rating cultural products as however many stars out of 5, rather than using words like terrible, not very good, OK, pretty good, and excellent.
- Complete collections. This is where someone aims to collect, e.g., every video game released for a certain system, every action figure released for a certain toy line, or every pressing / issue of a certain band's entire discography. The reward is not the item in itself -- although it may also be worth something on its own to the person -- but rather coming one step closer to completing the collection.
As far as I know, it was never a hobby to aim to collect, say, one leaf from every tree (perhaps just of a certain species) within your village, or all the leaves or nuts that a particular tree produced.
- Nearly constant use of "that's my opinion," etc., when expressing your opinion. Well no shit. Autistics fail the "false belief task," so statements of fact and opinion are much less separable in their minds. Communication between Aspies and non-Aspies thus requires more of these stupid qualifications just to make sure both understand that only opinions are being discussed.
These are true differences between the modern and pre-modern mindset because they are just as easily implemented in both types of society, as long as the people are of that mindset. Obviously "computer programming" can't count since pre-modern societies didn't even have computers. But any pre-modern society could have made a list of the Top Ten best songs or animals or reasons why a woman is like a ____.
What else is there? The traits of online fanboys, no matter what they're fans of, are the easiest place to start for clues. They're the extreme right tail of the systematizing distribution, so they're easy to spot, and they do tell us something about the average being higher than it used to be.
September 19, 2010
The stability of the Machiavellian Jewish character in high and pop culture
Early Christian anti-Semitism does not really deserve to be given a special label like that, as it suggests some kind of fixation on Jews in particular. Rather it was the standard in-group vs. out-group demonization, with a detail here and there about the out-group members used just to give it a modicum of plausibility, as in any Friend of a Friend tale.
For example, early Christians spread the rumor that Jews engaged in ritual sacrifice, cannibalism, incest, etc. -- but so does every group around the world when they're talking trash about the next tribe over. That's one reason anthropologists have trouble figuring out if some group really practices cannibalism or whether it's just a rumor from informants of a rival tribe.
What today would be called anti-Semitism doesn't look like it gets started until the Ashkenazim cohere as a group -- there are very specific claims about being good with money, controlling lots of money, emphasizing cunning and cleverness over morality, pitting two sides against each other for one's own individual gain, viewing nothing as so sacred that it's beyond striking a deal over or profiting from, and so on.
I'm sure there are other less famous examples from a bit earlier on, but one of the first widely known examples of this type of Jewish character in European culture is Barabas in Marlowe's The Jew of Malta. A lot of his image and behaviors are stock, so late 16th C audiences must have already been familiar with this character. True, the character lives in Malta and so was probably not Ashkenazi, but it was common for English authors to project local action onto exotic locations. Marlowe seems to have the northern European Jews in mind.
Well, that was way over there and way back then, right? Nope: I watched Aliens for the first time in a while, and there he is again, even in a work from the modern post-Enlightenment age, and in the most Semito-philic developed country outside of Israel. The Machiavellian sell-out / traitor has the laughably British name Carter J. Burke, but he's played with Paul Reiser's quintessentially Ashkenazi stage persona, the one he brings to every movie (like the neurotic Detroit gumshoe in Beverly Hills Cop). I guess there would have been a boycott if he had been named Max Lipschitz and had said, right as an alien was about to kill him, "Look, I know what you're thinking, but let's be rational and negotiate a deal here..."
That's at least four centuries of stability for this character type, and in elite and popular culture. Given how specific the type is, these portrayers must be picking up on something real. The over-representation of Ashkenazi Jews among Nobel Prize winners doesn't mean that the average one is super-brainy, but it still suggests that their average is above other groups' averages. And the social and economic ecology that Ashkenazi Jews were forced into and adapted to may not have required Nobel-level smarts, so it's not like the extreme values were selected for -- just the values around their average, where the person is smart enough to succeed as a tax farmer.
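The statistical point -- that heavy over-representation in the far tail implies only a modest shift in the group average -- is easy to check with two normal distributions. The 0.7 SD shift below is just an assumed figure for illustration; the takeaway is how fast the over-representation factor grows as the cutoff moves outward, even though the two averages sit close together:

```python
from statistics import NormalDist

def tail_ratio(mean_shift_sd, cutoff_sd):
    """Over-representation factor beyond `cutoff_sd` for a group whose
    mean is shifted up by `mean_shift_sd`, with both groups at SD = 1."""
    ref = 1.0 - NormalDist().cdf(cutoff_sd)
    shifted = 1.0 - NormalDist(mu=mean_shift_sd).cdf(cutoff_sd)
    return shifted / ref

# An assumed 0.7 SD advantage in the mean:
print(round(tail_ratio(0.7, 2.0), 1))   # ~4x over-represented at +2 SD
print(round(tail_ratio(0.7, 4.0), 1))   # ~15x over-represented at +4 SD
```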
Similarly with Machiavellianism -- this may only represent a small minority of Ashkenazim, and this extreme value may not have been selected for, but it's hard to escape the conclusion that their average in this trait is higher than it is for their host populations, and that this fairly but not excessively Machiavellian level served a member well in the financial white collar ecology that they are adapted to.
I wonder if this is behind the unusually wide diversity of political positions that Ashkenazi Jews hold (they produced both Marx and Hayek -- oops, I meant Mises. I'd better not write so late at night...). If the average one is leaning in the amoral Machiavellian direction, that will reflect badly on someone who leans the other way, so the latter goes even further out of their way to prove their anti-Machiavellian beliefs to the host population. It's like the greater cultural diversity among whites in the south -- you see both rednecks and those who labor to prove their sophistication, lest they be mistaken for one of the bad white southerners. You don't see this so much where the whites are just bland and without embarrassing / bad apple leanings, say in South Dakota. Blacks who have been around lumpenprole blacks for a long time are more eager to prove, in the company of whites anyway, that they're not one of the bad blacks but educated and responsible. And a community-minded Ashkenazi Jew who through extensive personal experience senses the Machiavellian strain within his group will push even harder in the other direction when he's forming his beliefs.
September 17, 2010
How gay/submissive is the contemporary young male?
The days of high-energy rock singers baring their hairy chests to spellbound groupies are long gone. After the sexual counter-revolution of the early '90s, men have become more submissive in general and borderline gay. You might have thought that it couldn't get worse than the Generation X scene of the early-mid '90s, when guys took pride in holding their girlfriend's purse while she was busy picketing at a Take Back the Night rally, but it can get a lot worse.
I don't mean the whole metrosexual craze of the Housing Bubble era, since basic cleanliness and not looking like a societal drop-out aren't gay. That was more a part of the bling-bling coup de grâce against the lingering grunge culture of the '90s, not a stand against "patriarchal normativity" or whatever.
I've noted before how often I see males being driven around by females rather than the other way around, and that this is backed up by the fact that among young people, females are more likely to have a license than males. That's pathetic -- not having a license, let alone a car, and having to rely on a girl to get places. If it's just now and then for the sake of convenience, that's one thing, but it's inexcusable for it to be a stable long-term pattern.
Still, I think the sickest display of this trend is how young guys "back it up" into a girl's lap at a dance club. For those who haven't set foot in a nightclub for awhile, backing it up used to be an exclusively female thing, where her back faces the guy's front and she gradually moves backward so that her ass meets his crotch. Usually there's some degree of bending over on her part, occasionally so much so that she rests her hands on her knees or even on the floor. Basically it looks like a standing lapdance.
Now a non-trivial fraction of guys who approach girls do this as well, and it's been growing more common over the past three years that I've been going out to dance clubs in America. (I went out often in Barcelona in 2004-'05, but backing it up was not common even among girls there.) Given how sex-segregated Millennials' social life is, how timid the guys are, and how the girls lack the boy-craziness of the sexual revolution era, not that many guys go right up to girls at all. But among those who do, I don't know, around 10% or 20% or so make their first move in this way. There's something to be said for humorous and not-too-frightening ways of suddenly breaking into the circled wagons of a group of girls, but this is beyond gay.
It's not even like an attempt at a male erotic dance, like a standing lapdance. It's goofy, exaggerated, self-conscious, meta, etc. Girls never get turned on by it, and never scream or make catcalls at the guy (sincere ones anyway). They just have a quick laugh at his expense, although he seems clueless about that, and send him on his way after a brief display of his doofusy servility.
And it's not even the geeky types who do this -- again, they're too timid to make a move of any sort. I'm talking about normal-looking guys with friends and who could be school athletes. This shows how extreme the change has been over the past 20 years -- even someone who would have been the popular jock type in the '60s through the '80s is a loser today, and probably won't have his first sexual adventure until a much later time, and will have fewer partners, compared to some ectomorphic goth from the early 1980s, back when the sexual revolution was still pervading all sectors of society (not just the popular groups).
It's gotten to the point where Millennial girls get surprised and don't know what to do if you try to dance with them face-to-face. They know nothing other than variations on standing lapdances, whether it's the male or female who receives. Someone emblematic of the times, like Katy Perry, should re-record "Johnny, Are You Queer?" Back when it was released, it was about a guy playing head games, but it would make much more sense for it to be on the air today.
I don't mean the whole metrosexual craze of the Housing Bubble era, since basic cleanliness and not looking like a societal drop-out aren't gay. That was more a part of the bling-bling coup de grace against the lingering grunge culture of the '90s, not a stand against "patriarchal normativity" or whatever.
Before I've noted how often I see males being driven around by females rather than the other way around, and that this is backed up by the fact that among young people females are more likely to have a license than males. That's pathetic -- not having a license, let alone a car, and having to rely on a girl to get to places. If it's just now and then for the sake of convenience, that's one thing, but it's inexcusable for it to be a stable long-term pattern.
Still, I think the sickest display of this trend is how young guys "back it up" into a girl's lap at a dance club. For those who haven't set foot in a nightclub for awhile, backing it up used to be an exclusively female thing, where her back faces the guy's front and she gradually moves backward so that her ass meets his crotch. Usually there's some degree of bending over on her part, occasionally so much so that she rests her hands on her knees or even on the floor. Basically it looks like a standing lapdance.
Now a non-trivial fraction of guys who approach girls do this as well, and it's been growing more common over the past three years that I've been going out to dance clubs in America. (I went out often in Barcelona in 2004-'05, but backing it up was not common even among girls there.) Given how sex-segregated Millennials' social life is, how timid the guys are, and how the girls lack the boy-craziness of the sexual revolution era, not that many guys go right up to girls at all. But among those who do, somewhere around 10% to 20% make their first move this way. There's something to be said for humorous and not-too-frightening ways of suddenly breaking into the circled wagons of a group of girls, but this is beyond gay.
It's not even like an attempt at a male erotic dance, like a standing lapdance. It's goofy, exaggerated, self-conscious, meta, etc. Girls never get turned on by it, and never scream or make catcalls at the guy (sincere ones anyway). They just have a quick laugh at his expense, although he seems clueless about that, and send him on his way after a brief display of his doofusy servility.
And it's not even the geeky types who do this -- again, they're too timid to make a move of any sort. I'm talking about normal-looking guys who have friends and could be school athletes. This shows how extreme the change has been over the past 20 years -- even someone who would have been the popular jock type in the '60s through the '80s is a loser today, and will probably have his first sexual adventure much later, and rack up fewer partners, than some ectomorphic goth from the early 1980s, back when the sexual revolution was still pervading all sectors of society (not just the popular groups).
It's gotten to the point where Millennial girls get surprised and don't know what to do if you try to dance with them face-to-face. They know nothing other than variations on standing lapdances, whether it's the male or female who receives. Someone emblematic of the times, like Katy Perry, should re-record "Johnny, Are You Queer?" Back when it was released, it was about a guy playing head games, but it would make much more sense for it to be on the air today.
Scenes of conflict along the black / Mexican border within southern California
If consuming the mass media made you better informed about important things, everyone in America would know about the turf wars between blacks and Mexicans in southern California over the past 20 years. But to the average east coast reader of the NYT, these ongoing struggles would only appear as so much feuding between two groups of faraway lowlifes. Therefore no one has a clue, except those in the area who have a much more personal interest, and whom the local news teams cater to.
We ignore border or frontier conflicts at our peril, since whoever triumphs at the periphery may soon march into the center. Even if those living in the center are so decadent that they've resigned themselves to being overtaken -- peacefully through demographic replacement or perhaps not so peacefully -- you'd think they would at least care who was going to invade and replace them. If you were a Persian living in the Khwarezmid Empire, wouldn't you care about who seemed to be getting the upper hand between the Mongols and various western and northern Chinese groups? Nah, nothing to worry about...
I wonder whether the provincial east coast sophisticates of the 19th C. took no interest in the battles between Europeans and Plains Indians farther out west.
Whenever the topic of Compton comes up, Steve Sailer always makes sure to mention that 20 years ago it was black (Straight Outta Compton) but is now Mexican. Probably most of that is due to the Hispanic baby boom after the mid-'80s amnesty. Most people outside southern California would not know that from what they've read or seen in the media in the meantime. So let's turn to local news clips from YouTube for a better understanding.
First, here's what must be one of the first black vs. Mexican conflicts, in 1990 at Inglewood High School.
Second, here's the picture as of the late '90s / early 2000s, looking at strife between black vs. Mexican gangs.
Last, here's the story as of the mid-late 2000s in Pasadena. Notice those hate crime statistics: most Hispanic victims were targeted by blacks, and most black victims were targeted by Hispanics. When a black college student stages a hoax about white racists and alleged hate crimes, the east coast media -- responding to the desires of their readership -- fall over themselves to cover the hoax. But when real blacks are killed in true hate crimes, it's boring news because it's only Hispanic gang members bumping them off, not some Nordic supervillain.
September 15, 2010
In cultures of honor, male role is more stable than female's
Looking at various cultures of honor, whether across geography or over time, the man's role is mostly the same -- he has to protect his honor at any cost, his kin will shame him if he neglects this duty, and others in the community are always interested to hear news about which men can and cannot be dissed.
There tends to be an emphasis on female honor in these cultures as well, only it requires her to protect her sexual purity, her kin will shame her if they find out she was flirting with a strange male, and others in the community are ever eager to know who's a slut and who is not.
Both of these are still going pretty strong in the Near East, where you still find honor killings of girls whose kin feel their honor has been besmirched by the girl's sexual deviancy. But you never saw quite that level of control over female sexuality in the American South. Today the emphasis on female honor is even less prevalent there, and there are several modernizing currents in parts of the Near and Middle East to allow females to show off what they've got and make eyes at men without fear of reprisal.
So despite all the whining you hear from feminists and beta male bootlickers about how disgustingly widespread and harmful the madonna/whore dichotomy is, it's actually a pretty fragile set of social-cultural roles. It's the men's roles as (often violent) defenders of their honor that are very slow to go away, leaving a good part of the world still saddled with the mentality and behavior of long-lasting feuds.
This makes sense, since cultures of honor tend to emerge in groups where people have enough stuff worth stealing, yet where there's little or no centralized police force to keep the bad guys from running off with it -- among pastoralist groups, for example. It's males fighting off other encroaching males that drives the culture of honor, so this role will stay around forever. The madonna/whore system seems like something that's only projected from the more basic domain of survival (male street cred) onto a less important one (female chastity), so it's likely to wane over time or not show up at all in some places.
Blocking the market in used video games
Story here. If someone buys a game and agrees to a statement that they won't re-sell it, as the game is only licensed to the buyer, then clearly they shouldn't be able to re-sell it. I just wonder how many people will buy these kinds of games. Will the video game developers who refrain from putting such clauses into their games advertise this on the front of the box -- maybe a yellow smiley face with the words "It's Yours" on it?
Another episode in the decline of the video game industry and culture, and another reason why I've had so little interest in staying up-to-date for the past 15 years.
Imagine any other hard copy of a piece of entertainment that prevented you from re-selling it. Movie studios could offer the same rationale for including "no re-sell" wording in new DVDs -- they make no money on used-market sales. Ditto the music industry and used CDs. Fortunately, if this damage gets done, it will only affect the garbage coming out of the entertainment industry today -- your copies of Rocket to Russia, Ghostbusters, and Bonk's Adventure will be safe.
September 11, 2010
Do you remember where you were when you found out about... ?
Thinking over the big single events whose news was widely broadcast, I can only remember the ones that happened once I hit puberty. Perhaps the larger social world isn't so important to us as children, when we're still fairly solitary and socially retarded.
I was barely alive when John Lennon was killed or when Reagan was shot. I have no memory at all of Black Monday, the Tiananmen Square massacre, or the Berlin Wall falling. I have a vague memory of the Rodney King beating, although neither that nor the later L.A. riots stick so strongly that I remember exactly when and where I heard the news.
The first one that I recall pretty clearly is Kurt Cobain's suicide in 1994, when I was 13 and in 8th grade. I was walking down the hall toward the end of the school day (I recall the exact spot), and a girl who knew that my friends and I had gotten into Nirvana broke the news. She was widely considered a Nirvana poser, and we never really interacted because of that. We put that aside for the moment, although I still remember feeling a very petty bitterness that she'd heard the news first and told us rather than the other way around. It wasn't fair -- we were ten times the Nirvana fans she was! So works the teenage mind...
After that was the not-guilty verdict in the 1995 O.J. Simpson murder trial, when I was in 9th grade. Our high school thought it was so important that they let our teachers table whatever they had planned for the final class period, tell us the news, and let us talk about it amongst ourselves, with the teachers moderating. My class got gypped because all we got to miss was gym class. I don't recall anyone getting really incensed one way or the other, although it was such a lovely fall afternoon and we were outside, so we probably weren't in the mood to argue like those who were holed up in a classroom. I would guess my gym class was 70-80% white, with the rest about evenly black and Hispanic.
And of course I remember 9/11. That was sophomore year in college. Our small syntax class had just ended and we were riding the elevator down together when a black woman, almost panicking but keeping her cool, joined us. Either spontaneously or after we gave her a "what's wrong?" sort of look, she told us that "we've -- been -- a -- ttacked." A Millennial friend of mine, who was 12 and in 7th grade at the time, said she remembered everything about that morning and afternoon as well. I doubt she remembers Kurt Cobain or O.J. Simpson, but I'd have to ask.
I don't have one of those vivid recollections of where I was when I heard the news of the stock market tumbling in 2008, though I remember spending extra time in the library in between classes to check in with lots of newspapers. It definitely has not left the same imprint as the other three.
Does anyone have a distinct "I remember exactly what the context was" memory of one of these big, widely broadcast events that is from your childhood rather than adolescence or later? I believe my mother (and perhaps my father, too) has one of those memories about JFK's assassination, and she was only 8 years old then. Does it take something that extreme? How did kids born in the mid-'60s react, if at all, to Nixon's resignation? Or those born in the early '70s to the Iran hostage crisis? Maybe these do leave lasting sharp impressions on a child's mind, but they have to hit closer to home than the fall of the Berlin Wall or the Tiananmen Square massacre.
For any social scientists out there, this would be an easy thing to find out by survey. Ask a bunch of people if they have this kind of memory for a list of big events, and ask their birth year. For each event, plot the percent with vivid memories by how old they were at the time. This would reveal when the "sensitive period" is for learning about impactful social events. Even more interesting than when it begins is when it ends -- how many who were 60-somethings at the time had a sharp memory of where they were when they heard John Lennon had been shot? Probably not very many.
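To make that concrete, here's what the analysis would look like in Python -- a minimal sketch where the data file and its columns are hypothetical stand-ins for whatever the survey actually collects:

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical survey file: one row per respondent per event, recording the
# respondent's birth year, the event's year, and a yes/no "vivid memory" answer.
df = pd.read_csv("flashbulb_survey.csv")  # columns: birth_year, event, event_year, vivid

df["age_at_event"] = df["event_year"] - df["birth_year"]

# Percent reporting a vivid memory, by age when the event happened.
rates = df.groupby(["event", "age_at_event"])["vivid"].mean().mul(100)

# One curve per event; where the curves rise marks the onset of the
# sensitive period, and where they sag again marks its end.
for event, curve in rates.groupby(level="event"):
    curve.droplevel("event").plot(label=event)
plt.xlabel("age when event occurred")
plt.ylabel("% with a vivid where-I-was memory")
plt.legend()
plt.show()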
Maybe this period of learning lasts only as long as a person's reproductive career, when we have no choice but to be very social and up-to-date with our social information. Before then, we're still maturing socially, and after -- well, we've mated all we're going to, so who gives a shit if we're out of the loop?
September 10, 2010
What evidence do you see for greater sex segregation among young people today?
Boys and girls live in totally separate worlds today. I can't think of any counter-examples to that big picture, except for the fact that girls are much more likely to have a gay friend these days. So I'm restricting this to boy-girl interactions or relationships where both are straight. I'll list some I've posted on before, and others that occur to me off the top of my head. But I wasn't a teenager in the '80s or before, so there may be lesser-known boy-girl practices that are now dead and that I'd have no clue to look for. What else comes to mind?
- Boys and girls don't hang out in public in groups of friends. It's either a group of guys or group of girls (perhaps with some gays). I don't even notice cars that carry a mix of young guys and girls. That must reflect a lower level of mixed-sex social circles than before. So I'd guess that they don't even hang out at each other's houses or share the same cafeteria table spaces like they used to.
- At house parties or in dance clubs and bars, girls don't leave their friends alone. If they get the sense that one of their friends wants to slip away and pair up with a boy, whether for something light or heavy, they become cock blocks -- a phrase that did not exist before because there was rarely such a thing. They steal the friend back, or sometimes just walk off as a group, knowing that the girl (being a girl) cannot take being stranded by her clique and will fall in line behind them. In mixed-sex times, they would've left her alone, and if not, she would've told them to mind their own business, get a life of their own, etc.
- In the same settings as above, girls form tight circles meant to keep the world out, rather than having a more open formation like when they used to be boy-crazy. Watch a school dance or night club scene from any '80s teen movie, and notice how absent this is. Today even if there are only two girls, they face each other at close distance, closing out the rest of the world. I recall this closed formation only during 6th grade dances. It's as though teenage and 20-something girls today haven't socially matured beyond the level of middle schoolers when it comes to interacting with boys -- and therefore, boys haven't matured either for want of contact with girls.
What is a more open formation? Standing side by side, making a semi-circle, etc., showing your openness to being approached. The closed formation holds even in totally safe settings, and times are incredibly safer now anyway, so this is not simply a shift to deal with a greater level of danger at parties or clubs.
- There's all that terrible "girls gotta stick together" music that blew up with the Spice Girls in the '90s, although you could probably find some less popular examples from the early-mid '90s (like Queen Latifah's "U.N.I.T.Y"). The girl groups from the '60s through the '80s didn't sing about that at all -- they were boy-crazy and fought against other girls over their dream boys, which was reflected in myriad "choose me over her" song lyrics. In the '90s and 2000s, girl singers were either of the "girls united" camp or the "it's all about how hot I am" camp. Barf me out.
- On the guy side, how many songs are there about awkwardly falling for a close chick friend of yours? That can only happen when boys and girls socialize a fair amount inside and outside of school. I don't mean the girl who you're barely acquainted with, or who you only spy from afar after 2nd period as she's going down the stairwell that you're walking up. These songs are hard to pin down in any time because usually the guy who's fallen for a close friend appropriates a song that is probably about an actual boyfriend / girlfriend relationship, like "Bizarre Love Triangle."
Still, my impression is that these used to be common enough, at least from 1969's "Na Na Hey Hey Kiss Him Goodbye" up through 1986's "Amanda," plus a lot of less popular stuff in the '80s by college rock bands like The Smiths ("There Is A Light That Never Goes Out"). I'm probably missing another big one from the late '80s and early '90s... Mr. Big's "To Be With You"? LOL, someone please give us a better example than that.
In any case, I don't remember hearing this type of song during the mid-'90s and after. Plenty of unrequited love songs by alternative / emo geeks, whether about girls in general not paying them attention or about a specific girl to whom he's invisible (possibly a current girlfriend), but again I mean someone who the guy has hung out with long enough for her to be in his circle of friends. And they have to have been big popular songs to count here, not a band that's unrepresentative of the zeitgeist.
What else is there that shows how separate boys and girls live today?
Are economists fatter than other academics?
Steve Sailer is doing a good job pointing out how useless economists have been, so why not pile on. I got this idea a while ago and simply looked around various universities' econ department faculty pictures, and I have a fairly strong, though not certain, impression that economists are fatter than similar academics -- psychologists on the one hand, physicists and engineers on the other. Their close academic peers tend toward the average or ectomorphic side, whereas economists have a definitely above-average BMI. It's hard not to notice, because most highly educated, professional people are in the normal-to-thin range.
No links provided, as I don't recall every last department I looked at for the economists and physicists, etc., but half the fun here is taking a look around for yourself.
I couldn't care less what these guys look like per se, but it says something deeper about them. Everyone else in their social stratum is thin or average, so economists are clearly more likely to let themselves go, probably mostly with sugars (desserts) and starches (bowls of pasta). I think this is caused by their greater delight in rationalizing destructive behavior, whether individual or social -- they're endlessly in search of those intellectual bonus points for being counter-intuitive. Bunch of grade-grubbing dorks.
I doubt that it has to do with the discipline itself, like if you transplanted an ectomorphic physicist or mathematician into an econ department, he wouldn't blimp out after absorbing their ideas. It's more like people who wish to rationalize destructive behavior sort themselves into econ departments. Why did they settle on econ? Who knows, it could have been completely random a hundred or so years ago, if the other social sciences had already been seeded by people with a moral outlook. Sociologists also encourage socially destructive behavior, but it's always out of some higher conviction that it's the right thing to do, rather than an amoral rationalization. Or there could be something inherently attractive about econ to such people.
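If anyone wants to firm the impression up beyond browsing faculty pages, a crude two-proportion test is enough. Here's a sketch with invented tallies standing in for whatever you'd actually count from the photos -- eyeballed BMI is a noisy measure, so treat this as a sanity check on the impression rather than a study:

from statsmodels.stats.proportion import proportions_ztest

# Invented counts for illustration: of the faculty photos inspected in each
# field, how many looked clearly overweight. Swap in your own tallies.
overweight = [18, 7]   # [econ, physics]
inspected = [60, 55]

z, p = proportions_ztest(count=overweight, nobs=inspected)
print(f"econ {overweight[0]/inspected[0]:.0%} vs physics {overweight[1]/inspected[1]:.0%}: z = {z:.2f}, p = {p:.3f}")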
September 7, 2010
Does it take modern minds to trigger Superfluous Acronym Proliferation (SAP)?
In a post below on the history of heavy metal, TGGP uses an unfortunately standard nickname for the Maiden and Priest sound -- New Wave of British Heavy Metal, always referred to as NWOBHM. If you read any social science article, textbook, or even popularization, you've no doubt come across all manner of pointless and confusing acronyms. They must require this in social science graduate programs, just like grant writing, as a career-building skill. You can easily detect the nerdy origins of the Game community by their extensive use of acronyms (PUA, LJBF, etc.).
Yet as far as I can tell from my admittedly limited reading of pre-modern authors, they would have found this practice repellent -- both the creators and the audience. Sure, they existed, but they were no more than a handful -- AD, RIP, INRI, etc. St. Augustine did not refer to the City of God over and over as CD. Compare with the terms in modern religion, i.e. social science, again by browsing their article abstracts. Dante did not uglify his descriptor Dolce Stil Novo by calling it DSN every time after the first use. Compare with names that current critics give to today's art movements: NWOBHM, EBM, etc. And Hobbes did not market his concept of the war of all against all with the Latin acronym BOCO, perhaps because it was too pronounceable -- only real estate acronyms are designed to sound mellifluous. Compare that to one of its modern counterparts, MAD (mutually assured destruction).
Even natural scientists did not use to be geeks, unlike today. Browsing through the Wikipedia article on string theory, I see references to TOE, QCD strings, and AdS/CFT correspondence (bonus points for the slash, though it could use a hyphen too, or would that be gilding the lily?). Then I look through the article on classical mechanics and find not a single one! No reference to the law of conservation of angular momentum as the LoCoAM, for instance. Newton was above using his insights into the laws of motion as a pathetic excuse to manufacture another fleet of operose acronyms.
The practice destroys clarity, introduces unnecessary jargon, and further divorces the concept from the reality it's supposed to represent -- it further Platonifies the concept. Somehow an unspeakable array of characters seems more pure and ideal than words or phrases that roll off the tongue, like the ancient Hebrew tetragrammaton YHWH used to refer to their god. This showcases the backwardness of modern thinking, where we treat mankind's slapdash mental constructs as sacred, and the real world out there as profane. And acronyms don't function merely as shibboleths, since every group makes tons of shibboleths without cranking out acronyms.
Clearly there's none of this funny business from the birth of the printed word up through the 17th C. Based on when autistic and Aspergery traits start exploding -- as seen, for example, in the rise of thick-skulled grammar Nazis and other top-down attempts to make human language more rational -- I'd guess that this mess got going during the 18th C.: why follow the tradition of using words and phrases when it's more rational, efficient, and progressive to use acronyms? It must've really taken off during the 19th C., and of course by the end of the 20th it was deeply entrenched.
For all the rationalizations about how time-saving or ink-saving the acronym is, modern books blather on for much longer than pre-modern books do. Look at how easily you could condense a lot of novels compared to epic poems, and that's leaving aside that most pre-modern writings were not even as long as an epic poem -- or anywhere close. The same applies to today's non-fiction "idea books" compared to pre-modern ones. So the efficiency excuse is bogus. Semi-autistic geeks coin acronyms just to give their simple minds something easy to bat around, whether playing alone or with friends.
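The historical claim is checkable, by the way. Here's a rough Python sketch that counts acronym-looking tokens per 10,000 words in plain-text files -- feed it, say, Project Gutenberg copies of an 18th-C. treatise and a modern idea book. The regex is a blunt instrument (it will also catch Roman numerals and words shouted in all caps), so take the output as a first pass, not a verdict:

import re
import sys

def acronym_rate(path):
    text = open(path, encoding="utf-8").read()
    n_words = len(re.findall(r"[A-Za-z]+", text))
    # Crude match: dotted forms (U.N.I.T.Y.) or runs of 2+ capitals (NWOBHM, QCD).
    n_acronyms = len(re.findall(r"\b(?:[A-Z]\.){2,}|\b[A-Z]{2,}\b", text))
    return 10000 * n_acronyms / max(n_words, 1)

for path in sys.argv[1:]:
    print(f"{path}: {acronym_rate(path):.1f} acronym-like tokens per 10,000 words")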
"Japan may be one of the most diabetic countries right now"
I'm reposting a comment to a post at Steve Sailer's on diet and evolution, specifically whether or not East Asians process carbs better than Europeans -- they are thinner than we are, so maybe they're better genetically adapted to a diet that a hunter-gatherer would find high-carb. I don't believe so, and the reason is that we have to look at the full spectrum of diseases that cluster into Metabolic Syndrome, not just obesity. No links are provided for the quotes below, as you can find them in the first page of googling "japan diabetes," which will give you more sources anyway.
Obesity is only one symptom within the larger disease cluster called Metabolic Syndrome or Syndrome X, which also includes type II diabetes, insulin resistance, heart disease, hypertension, etc.
If East Asians deal well with carbs, then they should have low rates of these symptoms. They do tend to be thinner, but that is only because they don't eat loads of food to begin with.
More glucose causes more insulin causes more storage of fat -- but how much fat gets stored depends on how much chow you're eating. If they don't eat many large meals like we do, their higher glucose and insulin levels won't have as much fat to store.
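To see the arithmetic, here's a toy illustration -- the numbers and the simple multiplicative form are invented purely to show the logic, not real physiology:

# Toy model: fat storage scales with both the glucose/insulin response and
# total intake, while diabetes-type markers track the response alone.
# All numbers are made up for illustration.
populations = {
    "poor carb handling, small meals": {"response": 1.5, "calories": 1800},
    "good carb handling, big meals": {"response": 1.0, "calories": 2800},
}
for name, p in populations.items():
    fat_signal = p["response"] * p["calories"]  # what obesity stats pick up
    glucose_signal = p["response"]              # what diabetes stats pick up
    print(f"{name}: fat signal {fat_signal:.0f}, glucose marker {glucose_signal:.1f}")

The small-meal population comes out no fatter than the big-meal one, yet its glucose marker is 50% worse -- which is why obesity alone is the wrong place to look.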
So we need to check whether East Asians or other groups have high / rising rates of diabetes, insulin resistance, hyperglycemia, etc. Those are tell-tale signs of not dealing well with carbs, as they respond directly to how much glucose you're taking in, whereas obesity also requires a high level of food intake to show up.
The other symptoms of Metabolic Syndrome are running rampant in East Asia, South Asia, the Gulf states, etc. See this post. Here's just one summary of the Japanese situation, from 2005:
In fact, the prevalence of diabetes, particularly type 2 diabetes, is increasing explosively in Japan, as well as those in the other Asian and African countries. Japan may be one of the most diabetic countries right now. [...] insulin resistance enhanced by change of lifestyle ...
And another from 2002:
Among Japanese men, these changes have been associated with a steadily increasing body mass index (BMI), a well-known risk factor for the development of insulin resistance, impaired glucose tolerance, and diabetes. Genetic characteristics common to many Japanese may also contribute to their higher prevalence of diabetes. The Japanese have a higher prevalence of polymorphisms for at least three genes that code for proteins thought to play key roles in lipid and glucose metabolism: the beta 3-adrenergic receptor, the peroxisome proliferator-activated receptor γ, and calpain-10. The interaction between changes in lifestyle and the ‘thrifty’ genotype characteristic of many Japanese people may play a significant role in the increasing prevalence of diabetes and associated cardiovascular risk in this population.
Again, that's just Japan, but the same holds for the other thin but carb-scarfing countries. The data all point to an inability to thrive on a high-carb diet -- but you'd only detect that by looking at diabetes, insulin resistance, glucose metabolism, etc., rather than just obesity.
September 5, 2010
Drugging kids to control them is a safe times practice
This report is 10 years old, but it has great graphs on the rise of Ritalin during the '90s, after a steady and low level of production and use before then. They even give a specific year, 1991, as the transition point:
Prior to 1991, domestic sales reported by the manufacturers of methylphenidate remained stable at approximately 2,000 kilograms per year. By 1999, domestic sales increased by nearly 500 percent.
[...]
Collectively, this data indicates that the number of prescriptions written for ADHD has increased by a factor of five since 1991.
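For scale, the quoted figures work out like this -- my back-of-envelope from the report's own numbers, nothing more:

# Per the report: ~2,000 kg/year of methylphenidate sold domestically before
# 1991, and roughly a five-fold increase by the end of the decade.
baseline_kg = 2000
factor = 5
print(f"implied late-'90s sales: about {baseline_kg * factor:,} kg/year")
# Strictly, "increased by nearly 500 percent" would mean ~6x, but the report's
# two figures are clearly meant as the same ballpark.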
So it was only once the violence rate peaked in '91 or '92 that parents started drugging their kids to bring about more compliant behavior. There are several reasons why parents let their kids run wild during dangerous times and drug them during safe times, as odd as that may sound:
- When the whole society is wild, wildness will not be punished by others, so why bother trying to keep your kid out of trouble that he won't have to pay for? When the society is tame, a rambunctious kid will run into lots of trouble, so you do what you can to dial it down for him.
- Drugging a kid is one way to keep him from going through natural human development, some of which isn't too pleasant for the PTA. In safe times, parents try to keep their kids from growing up -- the world's so safe that they can put off growing up until later without worry. In dangerous times, parents see that life is fleeting and their kids had better hurry the hell up with maturation if they're going to make some grandkids.
- This is just an impression, but it's related to an earlier pattern I detailed where dangerous times breed a greater belief in the supernatural, paranormal, mystical, occult, etc. That is, it looks like safe times see a rise in the hubris of science and particularly social science and social engineering -- even among the public. Even the elites were fairly skeptical of the power of top-down social engineering during the '60s through the '80s, aside from the Great Society programs.
The case of crime control is instructive: everyone, liberal or conservative, had concluded pessimistically by the mid-late 1970s that crime was out of control and there was nothing that technocrats could do about it, even after all we learned through "natural experiments" in policing policy. Contrast that with the New Deal of the low-crime times of the mid-'30s through the late '50s, or with the Clinton-Bush-Obama handling of the housing and finance industries for the greater good.
So, parents will have greater faith in the power of so-called wonder drugs to cure their kid of rambunctiousness without any big side effects. This is one reason why fiction is in decline -- parents are too busy reading books about how to non-genetically engineer their kid into being a genius.
Thank god I got to enjoy all of my pre-pubescent childhood before the Ritalin-pushing helicopter parents took over. I remember at some point in 3rd grade (either '89 or '90), my teacher became worried because I'd stare off into space, daydream, or otherwise get lost in thought when I was supposed to be doing a specific task at one of our "activity stations." As I recall, I was just really bored for those couple of weeks. Fortunately, my parents and my teacher only went so far as asking if there was anything wrong, could I try to focus a little more, etc. Otherwise they just let my boredom run its course, and I turned out better for not having been drugged with Ritalin at such a young age.
It makes you wonder whether all these hyper-managed and drugged-up Millennials are going to repay their parents in kind when they're taking care of them in retirement.
Girls flashing at concerts was an American-only thing?
We've seen that young people today are far less exhibitionistic than during the wild times of the '60s through the '80s. In particular, flashing at a concert, streaking, and topless sunbathing are dead among young people. As for the first of those, here's an interesting quote I came across from the singer of Scorpions, Klaus Meine:
It is odd that in America that some of these [album] covers were a problem because in the 80’s when we would tour here we always had boobs flashed to us at the front of the stage. Nowhere else in the world, just here.
That's from the Wikipedia article for their album Lovedrive, but the citation goes nowhere; supposedly it's from a 2010 interview. Notice from the tense that he speaks about this pattern of behavior as something that died out in the '90s and 2000s. What's really cool is his statement that it only happened here. He had plenty of experience touring the globe, so his read is probably accurate.
You always hear social liberals whining about how Puritanical our culture is -- those French are so liberated, they even go topless on the beach! Well we had that here, too, it's just that you had to go to rock concerts, particularly hard rock or heavy metal ones. But these complaining sophisticates would rather chew on rat poison than socialize or even be seen with metalheads, so they missed out on toplessness American-style.
September 4, 2010
Why no one wants an e-reader
It's been years since Amazon's Kindle was introduced in 2007, yet no one cares about e-readers. Three years after the iPod was introduced, it had taken over portable music and was already in its fourth generation. Three years after cell phones were introduced in the early 1980s, they were still too expensive to afford -- but everyone wanted one. We thought we were cool and living-in-the-future just to be able to talk on a cordless telephone outside our house. So why will e-paper never come close to replacing real paper?
First, we should consider another big failure of digital-only media / entertainment -- video games that are downloaded onto a memory stick or hard drive, rather than ones that come on a single disc or cartridge. Virtually no major games are bought this way, although the technology allows it. People want a physical copy of a single game. The only somewhat popular use of "downloadable content" is to provide a few bells and whistles to a game that you already have a hard copy of.
In the biggest mistake of Sony's life as a video game maker, they changed their handheld system, the PSP, from one that took individual discs to one where everything is downloaded onto a memory stick (the PSP Go). This is exactly like going from a portable CD player to an iPod, yet it has done pathetically -- no one is buying it or wants to buy it.
What is the difference between recorded music on the one hand and video games and books, magazines, newspapers, etc., on the other? For iPod owners, they are listening to single tracks, not entire albums. They're flitting from one song to another (typically by different artists), rather than getting immersed in the fullness of a single album. When you play a video game, though, you don't want to play one level on this video game, then another level from another one, then a third level from yet another game, and so on -- you want to get into a single game for a decent stretch of time. The same with reading a book: you don't want to read five chapters from five separate books, but rather as far into a single book (or maybe two) as you can when you're on the metro, lounging around Starbucks, or whatever.
The main appeal of media players that can read a ton of digital files off a single small memory source is their portability -- with an iPod, there's no need to lug around 20 CDs when you only want to hear one song from each. (That means those albums stink -- only one song worth listening to? -- and that you should find better artists, but that's another topic.) In contrast to single songs, where most people want to zip from one to another across albums and artists, no one brings 20 video games along for a single session of handheld playing -- one or two will do fine, and they're already pretty small. Ditto for books: no one brings 20 books, since one or two or even three will do, and they're already fairly easy to carry around. Thus for video games and books, no one cares about greater portability -- the ones we carry are already portable enough, being so few in number.
There are certainly other reasons why digital-only video games and e-books suffer: not having a durable physical copy makes you more vulnerable to losing it, and it gives you nothing to re-sell on the secondary market if you get bored with it or don't end up liking it. But that's true of mp3 files as well, yet most people are fine with it there. The real difference between single songs on the one hand and video games and books on the other is how immersed you get in a single long work -- that determines how many hard copies you need to carry, and therefore whether greater portability is worth anything.
The one thing that could have saved e-readers from oblivion would have been magazines and newspapers existing only in print form. Articles from periodicals are like single songs -- most readers zoom across a bunch of them from a host of sources. Even those who follow just one newspaper are really reading multiple newspapers -- a unique one for each day. If you wanted to recall earlier articles in print form, you'd need to hold on to all those old newspapers or magazines. Here, the lack of immersion in a single copy would make portability a huge attraction.
However, newspapers and magazines screwed themselves over by putting everything on the web -- for free no less, though that's not going to last beyond next year. So, all you need now to read periodicals on the go is something with web access, such as an iPhone. That's really all that e-readers could offer, and they're much less portable than smartphones, so they have nowhere to go. The reports of ink-and-paper's death are greatly exaggerated.
September 3, 2010
Will metalheads rule the world in 100 years?
I've been on a metal posting kick and probably will be for another week or so. There's so much new to see now that I'm no longer a 13-year-old infatuated with Beavis & Butt-Head -- which, unfortunately, was the last time I spent much time with the genre.
One thing that jumps out is how high the solidarity is among metalheads -- far greater than among fans of any other type of music. Looking around over the past 20 years of various culture wars, they're about the only major subculture that has consistently been founded on strength of community and keeping its traditions sacred, while others have descended into petty status contests and "moving beyond" their ways once they become too well known and hence no longer fashionable.
Metalheads still look pretty much like they did 30 years ago: slim-fitting light blue or black jeans, black or white t-shirt (often with sleeves cut off), white tennis shoes or black boots, and long hair parted in the middle (perhaps with bangs). Even the nu metal people, arguably a separate species, haven't strayed so far from that costume. In contrast, look at how differently the fans of rock-for-the-college-educated have looked from 1975 to 2010, going by five-year intervals. Hair length is all over the place, eyeshadow is present or absent, colors are varied or monochrome, the overall look is hyper-tailored or disheveled, and all according to the trend in the larger society. About the only constant is Chuck Taylor shoes.
Metalheads therefore take their community membership much more seriously: their appearance is supposed to be above the fickle shifts of the stream of fashion, much as the more traditional religious groups still sport a look that's hundreds of years old. "Moving beyond" this cultural inheritance in order to stay in touch with the times would be sacrilege. Other subcultures are more likely to view their community membership as fleeting, something fun to do for now but nothing they're going to commit themselves to over the long haul, let alone sacrifice anything for.
This suggests that unlike most other groups of white people, metalheads have a strong potential for collective action. (Readers of Peter Turchin will know this by Ibn Khaldun's term "asabiya.") True, they're not as strong as they were in the '70s or '80s, but neither is anyone else -- and those other groups are even more faded or divided: punks, post-punks, goths, country, electronic, pop, mainstream, you name it. While there are short cycles, this group-mindedness really rises and falls on the order of centuries -- maybe two or three for a complete cycle up and down.
It comes together when one group finds itself on what Turchin calls a "meta-ethnic frontier" -- where the people on the other side might as well come from a different planet culturally. This intensifies the Us vs. Them distinction, and often makes you band together lest you be overrun by Them. Ground zero for the heavy metal scene in the U.S. was Southern California, but that was before illegal immigrants flooded in, so I don't think it's so much a white vs. other racial frontier. It's more internal to white people. Part of it is class, since classes function almost like ethnic groups in America, but that's not the whole story. After all, country and western fans are working class, but most of them haven't conserved a coherent way of life over the past generation or two; they blend in with the mainstream. They probably couldn't name more than a couple of Hank Williams songs, don't know who Kitty Wells is, and wouldn't recognize the face of mega-babe Emmylou Harris.
The culture associated with country music is also too self-pitying to galvanize a large mass of people into conquering the faction-riven mainstream culture. And it's been contaminated by elite appropriation: the elites pretend to like country and folk and even bluegrass music as part of their affectation of caring about the honest working man. You could find some Woody Guthrie, Hank Williams, or Tom Waits in an elite person's collection -- but never any metal from '75 or so onward.
Metal has never won any elite praise, so it's been safe from being diluted that way. Again this rejection by just about every other subculture heightens the Us vs. Them distinction, giving them a greater sense of solidarity. Plus the music tries to work the listener up into feeling like they're a macho soldier in the advancing metal army. If they had their own country, we would call such songs patriotic. Here are just two by Judas Priest: "United" and "Take On The World". You don't hear such rallying cries among any other group, except the religious traditionalists.
Returning to the differences with country: the latter is lyric poetry set to music, whereas metal tends to shun the personal and focus on third-person narration, often aiming for the epic. The one big exception is the power ballad, the best known probably being those by Scorpions and, later, the glam metal bands. Really, though, this combination is no different from the speech of Marlowe's Tamburlaine -- mostly virile, bombastic, and epic, but occasionally intensely emotional when addressing his main love. He too was the leader of a people whose high solidarity was forged along the frontiers of civilization, and who later conquered its squabbling and dainty elites.
Of course, that took a long time, but the same could eventually happen with some mix of metalheads and religious traditionalists. Looking into the next 100 or so years, I'd actually give metalheads a distinct advantage in one area that may make the difference -- they're not infected with political correctness, and they don't put faith in social science to design society for the better. Those seem to be the two main diseases eating away at Western civilization from inside, mostly in indirect ways. America overall is fairly religious, so religious traditionalists don't face a strong meta-ethnic frontier unless they're Amish, Mormon, etc., and those groups are not as large as the group that's sympathetic or drawn to metal. But who knows what their population sizes will be in 100 or 200 years. Too bad we won't be there to see them getting along, or not. Perhaps the conquest of the existing mainstream culture will be led by Mormon metalheads.
September 2, 2010
Getting exercise and muscle growth backwards
Returning to an earlier theme of how unnatural gym bodies look, it's clear that this is just another case of how the modern rationalistic approach to things gets them completely backwards.
One example (pointed out by Nassim Taleb, possibly getting it from Art De Vany) is eating and activity. We view eating as a precursor to doing some range of activities that require a good amount of energy -- we're "fueling up" for those activities by eating. But it goes the other way around: we are designed to undertake physically demanding activity in order to acquire food, by hunting, gathering, threshing, grinding, milking, etc. We should aim to do vigorous activities on a somewhat empty stomach, then eat, and then mellow out for a while -- possibly including the next day or so -- rather than pig out and then be very active today or tomorrow.
Food is not a fuel for activity later in the day; it is an information signal to our body that we've been active enough to get nourishment and that our body should chill and build muscle, repair itself, and so on. When we follow the modern experts' approach, we screw up that signal because after eating a good amount, we go "put it to use" through vigorous activity. "Hold on, I thought you just gave me food -- why are you out jogging and weightlifting so soon?" our body asks, confused about whether it should invest its limited resources in bodily maintenance or in here-and-now performance. These opposing goals are controlled by two separate parts of the nervous system, the parasympathetic (digestion) and sympathetic (fight-or-flight), which are not intended to operate at the same time.
Now consider physical activity and building up your physique. The modern experts' approach takes muscle growth as the goal and activity as the means to achieve it. So, most people have a picture in their minds of what they want to look like -- even using the narcissistic phrase "I wanna look good naked" -- and then go jogging, stairmastering, bicep curling, bench pressing, etc., to hopefully get that look. Then they wind up looking like freaks, as I detailed in the link above.
What went wrong? Again they got it backwards: we are designed to carry out a range of physical activities -- sprinting, leaping, hurling overhand at a target, heaving underhand at a target, pulling ourselves up into a tree, and so on. In the process of doing these regularly, with the occasional intense demand, our body gets the signal that our environment requires us to be good at these things, and that it had better build a set of musculature that will get the job done. When we do the activities we're designed for, all of our muscles respond, not just the biceps and chest muscles that respond when you sit in a gym doing curls and bench presses for an hour. And they respond in the right proportions -- something you cannot recreate by targeting every muscle one at a time, since how would you know how much to stress each one? You don't have to know: your body is programmed to respond in the right proportions once you do natural activities.
That's why when we say someone has "an athletic physique," we mean that's what the body looks like when it's excelling at the physical activities that Homo sapiens is designed for. See any classical or Renaissance sculpture. Those people aimed to be athletic, and as a side effect grew musculature that we find impressive. People who instead aim to look impressive, and try to achieve it through "working out," wind up looking weird.
They'll also fail at athletics -- how many of those gym rats do you think could hurl a 20-pound stone at a target from even 20 feet away and hit it? Could they sprint down a hill, leap across a 4-foot-wide stream, and sprint up the hill on the other side, let alone do it fast? How good would they be at dodging a moving object that was targeting them, like a predator or an enemy in battle? Could they coordinate their whole body to throw a good, connecting punch? These building block activities from our evolutionary past are what make up the spectacle of modern sports, and that's why no one wants to watch gym rats "work out" -- even Olympic weightlifting is so boring that no one watches.
In Starbucks the other day I overheard some retard loudly yammering into his phone about how a gym buddy of his is like so coming along in his program -- he can bench however-many pounds now! Hold on, that's the goal? These guys are just as insecure and pathetic as the geeks who hang out in Mensa clubs doing IQ puzzles all day and comparing scores, again getting things completely backwards. We are designed to make art, solve real-world problems, or whatever else, and our brain gets into shape as a side effect, to help us excel at those activities in the future.
No one gives a shit if you can bench press X pounds or memorize a number that's Y digits long. So, the only way these people can keep up their enthusiasm is by finding the handful of similarly backwards-thinking morons out there and forming an echo chamber, forever caught in a contest over who is the best at getting nothing done.
The modern person rejects anything that they don't see the reason behind, and they suffer as a result of getting lots of things backwards. Following traditional practices will keep you in good shape, like working hard and then eating, or striving for athletic excellence rather than to "look good naked." Once again grandma knew better.
September 1, 2010
Rise and fall of metal parallels that of pop music broadly
The data from that graph come from Digital Dream Door's list of the 100 best metal albums. That's exactly what my mental picture of best albums looks like, no matter the genre. There's a peak in 1983 (the Maiden and Priest sound, parodied the next year in This is Spinal Tap) and another in 1987 (the glammier and deathier sound), just like with pop in general: the new wave / new romantic / post-punk explosion peaks in '83, and then there's the '87 peak of I-don't-know-what-to-call-it -- Kick by INXS, Bad by Michael Jackson, The Joshua Tree by U2, and much else in that year. The metal albums take a tumble starting in 1993, again just like pop music in general (grunge / alternative / indie / emo, as well as gangsta rap).
It might seem surprising that the sacred albums of metalheads are mostly from the '80s, as we associate '80s music with sounds that they wouldn't dig. This just goes to show that how good music is depends mostly on the zeitgeist and not on the genre -- metal, pop, rock, rap, and R&B have all sucked starting in the early-mid '90s, all were out of control from '75 to '84, and all went through a twilight stage from about '85 to '90. Plus all of them experienced their growing-up phase sometime during the late '50s or early '60s through the early '70s (except for rap, which started in the late '70s).
The power of the zeitgeist is one of the most underappreciated social forces in pop culture. It's like a wind that blows through and either polishes or corrodes everything out there. I think it's given more credit in high culture -- people talk about the Elizabethan world, the Renaissance, etc., noting how the zeitgeist affected religious and secular art, painting as well as sculpture, and so on. For popular culture in recent times, we've adopted the highly misleading practice of referring to decades that begin in a 0 and end in a 9, like the Roaring Twenties or The Sixties.
But we should just look at what the picture is for each year and group similar years together under a suggestive name, not split them arbitrarily by decade. Again, the period from about 1975 to 1984 is a much more coherent cultural moment than one that included the years just outside it, but we don't have a name for it, so we chop it in half to fit the Procrustean bed of decade thinking. Sometimes a decade is too long -- the roughly five-year period from the late '80s through 1990 has its own feel, distinct from the '75-'84 period and definitely in another world from the '91-and-after period. We do have a handy name for the 2003-'07 period of euphoria -- the Bubble Years, or something like that -- but in general we need better periodization.