My younger brother and I were shooting the bull at Starbucks and trying to figure out where to grab a bite to eat afterwards. He jokingly suggested Hooters, and I said I'd never been there (he'd been every now and then). I never saw the point -- there are better places to talk to cute girls, better places for food, and my assumption was that most of the other customers would be the loser foot soldiers of some frat taking a break from their 7-11 diet and 30-hour-a-week video game regimen. But what the hell, let's see what it's like.
The first thing you notice is that most of the guys there are a lot older, like 30s through 50s, and maybe a handful of older 20-somethings. I didn't see any college or high school kids there, though there were about a dozen pre-pubescent children, including some infants, taken there by their parents. Hey, they've got to get some practice flirting with girls sometime before it counts for real. Of course, given how pussified the culture has become, none of them appeared to be seizing the opportunity. *
I wouldn't mind the age distribution so much, if it didn't put the waitresses so on the defensive. As I explained a while ago, girls are most open to flirting and in general being carefree when there is a narrow range of ages, and they keep more to themselves when there's a broad range. A 19-year-old Hooters girl doesn't feel that she belongs to the social group that the customers belong to, so she isn't as much at ease as she would be if the customers were mostly around her age. It's probably going to be easier to joke around with a chick who works at Jamba Juice, where almost everyone is under 30 and therefore where she feels more at home and comfortable.
The girls were all good-looking, nothing skanky like I feared. They seemed pretty easy-going, unlike some other girls who get hired based on good looks but don't get tipped, such as go-go dancers. They reminded me of the pretty preppy / popular girls who felt comfortable chatting with guys from any of the many tribes within the high school, and not like the fake class president wannabe who just wanted your vote. Somewhat like how airline stewardesses used to be, before lawsuits turned them into middle-aged obese martinets.
I'm surprised that dolphin shorts are still a part of the uniform (the restaurant was founded in 1983). Their heyday was the late '70s and early '80s, yet there they are. They could have switched to more contemporary shorts -- lower-waisted and running farther down the leg -- while keeping them orange, but instead they've preserved a good look. Still, to complete their image of fun-loving sex appeal, they need to wear their hair bigger, not so straight and plastered to the scalp the way the style has been for the past 15-20 years. The dolphin short days were before the 10-cans-of-Aqua-Net days, so they wouldn't need to have it that big -- something like Phoebe Cates in Fast Times or Gremlins, poofed out enough to catch the eye instead of tissue-thin.
As far as I could tell, the prices weren't noticeably higher than at competitors like TGIFridays, Chili's, etc., and the quality of the food was similar. Actually, the mushroom and swiss burger and the BBQ burger that I had were better than what you get at their competitors -- I think the Hooters burgers had more fat, like a Whopper, unlike the 98% fat-free patties that more "health-conscious" places use, which crumble right away because there's no fat to hold them together.
In any case, there didn't seem to be any price premium that you were paying in order to have decent-looking and friendly waitresses. They weren't competing on price, but on the quality of the interaction with your waitress, which is important enough to enough people for Hooters to suck customers away from Fridays or Chili's on that appeal alone. (And in the other direction, Chili's or whoever sucks customers away from Hooters by offering a more sophisticated and so-called healthy menu, to draw in people who care too much what their peers think about their taste level.) Unless he's a friend, I hate being served by a dude, and ditto for women who are obese, snappy, or too awkward.
Aside from the quality of the waitresses and the food, another big plus was the background noise. Often in these kinds of restaurants and sports bars, there's some loud-ass TV -- CNN in an airport restaurant, ESPN in a sports bar, or whatever. If I wanted to watch TV, I'd be back home. They had several big-screen TVs with sports coverage, but the volume was on mute -- thank god. My brother and I had gone to Friday's just a couple nights ago, and they were playing only wuss rock and whiny pop-country music. At Hooters, there was a fair amount of bla-bla '90s music, but it was at least listenable -- "Alive" by Pearl Jam, "Fields of Gold" by Sting, and so on. Still, there was more than enough rock from the '80s to make up for it, some of it on the cheesier side (like "867-5309/Jenny"), but most of it really toe-tapping ("Don't Stop Believin'," "And She Was," "New Sensation"). They just need to replace the '90s junk with some classic rock from the '60s and '70s. Overall, though, the music was a breath of fresh air compared to just about every other public space nowadays.
Most of the decorations on the wall were pictures of the girls with various people, whether celebrities or loyal customers I couldn't tell, and some cheesy posters showing how self-aware they are about not being a place that competes on sophistication. That point could be made more subtly and clearly by having a wall-poster equivalent of their music selection: a 1967 Mustang, Michael Jordan, The Bangles, Risky Business, Paulina Porizkova in Sports Illustrated, etc., mixed in with pictures of the local Hooters girls. That mix would give it more of the feel of your room during high school and college, which is obviously the time the restaurant wants to take you back to.
So, on the whole, Hooters was the best place I've eaten at in this broad category of restaurant / bar chains. I can talk to and dance with cute girls and listen to better music at '80s night, and I can eat better at a Brazilian grill, although that costs more. But for the purposes of just going out to grab a bite to eat with your brothers or friends, this place makes it the easiest to relax and enjoy yourselves.
* And yes, back in 1983 when I was just a toddler, I ran cold approach game on multiple girls when I was out in public. Children were just that way back then -- we could smell the sexual revolution in the air, and had to adapt our behavior to that environment. I don't know where I learned it from, perhaps I figured it out on my own, but in a crowded place I used to walk up to a girl or woman, take her hand without asking permission, give it a big smooch, then look up at her with my wide 3-year-old eyes and explain, "Charming!" with a smile. I only vaguely remember this, but my parents say I did it most when there were lots of targets, like walking up and down the aisles of an airplane, through the tables at the food court in a mall, etc.
Both my parents egged me on whenever we went out -- "Hey agnostic, go do 'charming!' " The girls always got a big kick out of it, smiling, laughing, rubbing my hair, etc. This shows how accepting adults were of young people growing up fast -- they knew that if the world I would confront was a wild one, I'd better get prepared for it. Hence the encouragement from my parents and the positive reinforcement from the girls I went up to. Nowadays parents would be paralyzed by the thought of what embarrassment might result, and girls would be accepting but still unsure whether it was OK to go along with it or not.
December 31, 2010
December 30, 2010
Is TV really better now?
I don't know how to answer that since I haven't watched TV for three years, and I didn't watch a whole lot from the mid-'90s onward anyway. Supposedly the quality has improved a lot, at least starting with The Sopranos, and running through today with Mad Men.
I've never seen those, so I'll just accept that they're very good TV shows. Still, that does not mean that the average show is better -- maybe there is only greater variance than before, resulting in a lot more shows at the great level but also a lot more at the garbage level. From what I did tune into during the 2000s, I definitely got the sense that there's more garbage -- lots of new, boring game shows that started with Who Wants to Be a Millionaire? and lots of nauseating reality TV shows.
As recently as the 1980s, game shows were still mildly entertaining rather than stupid -- The Price is Right, Jeopardy!, Press Your Luck, the goofy slapstick of Supermarket Sweep, etc. The only reality TV back then was Lifestyles of the Rich and Famous, and everyone's reaction was "gag me with a spoon." The first three or four seasons of MTV's The Real World (early-mid-'90s) were OK, but it already started sucking by the Miami season of 1996. I only checked in on it intermittently after that -- Hawaii in 1999, Paris in 2003, and Austin in 2005 -- and it got worse every time.
I don't believe that there are more high-quality shows now, either. The two that always come up are The Sopranos and Mad Men, and perhaps there's another one or two at that level. But that's over the course of a decade. I can find three engrossing dramas with great writing from a shorter period in the late '80s / early '90s -- The Simpsons, The Wonder Years, and Twin Peaks. Maybe throw in Tour of Duty too. The 1970s had All in the Family and M*A*S*H. Perhaps the new TV shows that get so much attention fill a previously uncolonized niche for content, look, dialog, or whatever, but earlier eras had just as many (maybe more) TV shows at the same overall quality level.
And that's just TV for adults -- don't even start me on TV for children. With my 2-year-old nephew home for Christmas, I had to sit through a bunch of episodes of the Alec Baldwin phase of Thomas and Friends, as well as Spongebob. And it was just as bad during most of the '90s with Barney, Teletubbies, and the dork patrol that took over Nickelodeon (Doug, Rugrats, etc.). Fortunately the Netflix streaming service has the first season of Inspector Gadget, and my nephew went nuts over it. He began dancing to the catchy theme song (impossible with Thomas or Spongebob) and spontaneously kept imitating the voice of Dr. Claw while striking a menacing posture (he never imitated the dweebs on those other two shows).
Pretty soon I'll get him the DVDs for Bravestarr, a well-written Western set in an outer-space frontier mining town. I started watching Jem on YouTube, and it's more exciting and better written than the kids' cartoons of the past 20 years. But my nephew would never watch a girls' cartoon -- hell, neither did I back when it originally aired (although I always tuned in long enough to take in the Pat Benataresque theme song).
And no educational programs today try to make learning fun and cool for children. Bill Nye was all right but too gee-golly. Beakman's World drew us in with its Beetlejuice / mad scientist appeal, and Mr. Wizard's World fascinated all young boys by showing us how to set off a bottle rocket in our back yard, set stuff on fire in the kitchen, and instantly crush a thick metal jug using the different pressure levels of very hot and very cold substances inside vs. outside the container.
Getting back to the main point, it's not very hard to find as many or more high-quality shows from earlier times as from the past decade, and it's easy to show how much more garbage there has been piling up on the lower end over the past 15-20 years. If anything, it looks like TV has gotten worse.
December 27, 2010
Level of conformity across cultures, shown by car colors
Here are regional data from DuPont's 2010 Global Automotive Popularity Report that show what percent of new cars sold in 2010 had this or that paint color. Across the world, neutrals are far in the lead, a far cry from the dangerous-times explosion of color in general, cars included.
At any rate, we can see how conformist the car-buyers in a region are by looking at how much variation there is among car colors -- if all cars were one color, that would be total conformity, while if each car had its own unique color, that would be total doing-your-own-thingity. There are lots of ways to calculate variation in qualitative data, but to keep things simple, I'm just going to use the percent of all cars that fall into the four most popular colors -- the greater this percent is, the more the most popular colors show up everywhere, while the smaller it is, the more the non-mainstream colors enjoy success. Here's the ranking from least to most conformist:
No surprise that the birthplace of rock 'n' roll shows the lowest level of conformity in car colors, although other European groups are only slightly more conformist. Since the data are from new cars sold in 2010, I'm guessing most of the Mexican data reflect the white elite of that country. Rounding out the lowest stratum of conformity, India is not a surprise either: the "look at how hot and cool I am" enthusiasm is familiar. It's the opposite of the "we must maintain group harmony by looking the same" ethic, which we see in full force in the upper-stratum countries of China and South Korea. Japan is also more conformist than Europe, but is substantially less so than the other Northeast Asian countries, a pattern that shows up in all areas of culture. (This may be due to the very late adoption of sedentary farming in Japan, where nomadic foraging and fishing remained popular for thousands of years longer. See below on the farmer vs. herder divide.)
Joining Japan in the middle stratum are South America, Brazil, and South Africa. Again I assume those car-buyers are mostly of European ancestry, so it's odd that they're so much more conformist than their counterparts in North America and Europe -- even more so than those in Mexico. Maybe there's something about being in such racially diverse places that makes Europeans want to look highly similar, in order to maintain cohesion in the face of non-European ethnic groups that are close to or more than a majority. That would still leave Mexico unexplained, though, since the Europeans there are a minority also.
In any case, there seems to be a larger pattern of farmer vs. herder differences. Farmers in general are not very showy, whereas pastoralists, especially the nomadic ones, are much more colorful. For females in nomadic pastoralist groups, the only form of wealth that they tend to carry and pass on is gaudy jewelry, since the herds are held by and transferred to men. Men in such groups tend to be showy and boastful, since having a reputation as a shrinking violet would doom you to predation in a society where thieves can easily chase away or run off with all that belongs to you. Farmers care more about land, seeds to plant, and defensive structures to keep out trespassers, and have little need to be gregarious and show off.
It's too bad there aren't regional data for car colors in the Middle East, where a large share of the people descend from nomadic pastoralists. Still, Europe has always had a healthy mix of farmers and herders, whereas Northeast Asia has been nearly all farmers. Another great test case would be sub-Saharan Africans, but new cars are too expensive for most of them, so that data probably won't be collected for a long while. When it is, there will still probably be the sharp divide between farmers and herders that has existed there. This would control for lots of other factors too, unlike the Europeans vs. Northeast Asians comparison.
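As a footnote for anyone who wants to reproduce the conformity measure from DuPont's published percentages, here's a minimal sketch in Python. The color shares below are made-up placeholders for a hypothetical region, not figures from the actual report.

# Conformity measure used above: the combined market share of the four most
# popular car colors in a region. Higher = more conformist; lower = more
# success for non-mainstream colors.

def top4_share(color_shares):
    """Sum the shares (in percent) of the four most popular colors."""
    return sum(sorted(color_shares.values(), reverse=True)[:4])

# Hypothetical region: percent of new cars sold in each color (placeholder data).
example_region = {
    "white": 21, "black": 18, "silver": 16, "gray": 12,
    "red": 9, "blue": 9, "brown": 5, "green": 4, "other": 6,
}

print(top4_share(example_region))  # 67 -- about two-thirds of new cars fall in the top four colors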
December 26, 2010
2000s were a quiet decade for video games
GameFAQs just completed a round-robin tournament of "who would win in a fight?" among 128 video games released during the 2000s. These were the most popularly nominated before the tournament began, and each match was decided by a survey with tens of thousands of respondents.
And the game of the decade is Legend of Zelda: Majora's Mask, which came out so long ago -- fall 2000 -- that even I have played it, and I mostly tuned out during the boring 3D-soap-opera era of video games. If you did too, you apparently didn't miss much.
The past two holidays that I've visited home for, I've seen my brother dither away at least four hours a day on Grand Theft Auto IV multiplayer, one of the most hyped games of the past 10 years. Most of the gameplay reduces to killing someone and waiting near the area where they will re-appear, just to kill them right away when they do. Whether he's the dorkmeister doing the camping or he's on the receiving end of it, watching this "game" puts me right to sleep, and my brother too looks like a zombie playing it.
In its repetitiveness, it's not very different from the mindless marathons of leveling up that keep me away from role-playing games like the Final Fantasy or Pokemon series. No skill or challenge involved -- just logging enough hours.
I wonder who would win in a similar round-robin tournament for games of the 1990s and '80s. My bet for the '90s would be Super Metroid, Castlevania: Symphony of the Night, Legend of Zelda: Ocarina of Time, and Super Mario Bros. 3. For the '80s, it would come down to Legend of Zelda, Super Mario Bros. or Super Mario Bros. 2, Metroid, and Tetris. Sadly, most video game players these days are into the putz-around-with-nothing-to-do-and-never-die style, so Ocarina of Time would probably win both the '90s and a tri-decade contest.
It's surprising how little innovation there's been in video games since the 3D era began. All of the top contenders for Game of the Decade or of All Time trace back to characters, storylines, and game concepts developed during the golden age of the late '80s and early '90s. Of course during this time there's been little or no innovation in rock music, rap music, etc., so it's not the fault of video game makers or players -- it's a really broad blandness in the entire culture. It makes you wonder what it would have been like if video games had been invented in the late '50s along with rock music. There, we went from singles-band Beatles to album-oriented Guns N' Roses. But video games only started to mature much later, and so had only a handful of years to create before the whole culture came inside from the playground and began taking a really long nap.
December 15, 2010
Food stuff: staying hungry, nuts to nuts, and the least destructive holiday desserts
- I've mentioned before that the easiest way to give yourself nightmares is to eat carbs in the evening -- that spikes your glucose, which spikes your insulin, which keeps you from burning fat for fuel while you're sleeping and after your glucose has already been burned up. (And obviously you can't snack on more carbs while you're sleeping to get another quick fix of glucose, though for many this simply causes them to get up in the middle of the night for some chips, popcorn, bread, cookies, etc.) So refraining from eating carbs later in the day will drastically improve your sleep, if you're used to eating potatoes, muffins, soda, pizza, pasta, etc. in the evening.
But I've been playing around with not eating anything in the evening every now and then, just to see if there's anything to the intermittent fasting idea. Whenever I do this, I wake up and get out of bed right away, feeling more energized than I ever do otherwise. Having used this strategy to great success this morning, when I had to wake up at 7am to take a final exam -- and I am not a morning person -- I'm going to stick with it.
How does it work? Let's remember what a meal is -- it's an information signal to your body as it plans what to do in the near term. Food is not fuel in the short term. If it were, then after eating a decent-sized or large meal, you'd be brimming over with fuel and feel like, or at least be capable of, lots of activity. In reality, eating a big meal lays you out, and nothing feels more impossible than vigorous activity. This mistaken view of food's role comes from our rationalistic worldview, where we impose our own reasons on how the world works rather than study it empirically. Someone thought up the analogy between food for the body and fuel for the lantern or car, and others found this plausible enough, not bothering to run a basic reality check.
Food does provide fuel for the longer term, but it takes quite a while for your digestive system to process and store it. Similarly, it's not like when you eat a large amount of salmon, you grow bigger muscles within a matter of hours. In the near term, a meal is an information signal that tells your body that its main job -- finding and eating food -- has been taken care of, so don't bother yourself too much with the types of activities that are involved in getting your next meal, such as physical exertion. (On a mechanistic level, digestion activates the parasympathetic nervous system, which controls the "chill out" functions and inhibits the sympathetic nervous system, which controls the "fight, flight, fright, and fuck" functions.)
If a meal tells your body to shut down and relax until your next big meal needs to be tracked down, then hunger tells your body the opposite -- be prepared to exert yourself and use up however much energy it takes to bring down the giraffe, climb up a tree to steal eggs, or whatever. Especially if you go to bed hungry, you'll have no trouble waking up on time or even early -- your body wants you to waste no time in securing food. If you go to bed with a really full stomach, your body figures you can sleep through and waddle around the better part of the next day and still be fine for fuel, given how much will be in reserve.
These effects all come without the use of caffeine or stronger stimulants.
- For the past several weeks I've experimented with including a lot more nuts and seeds in my diet to see what happened. I've concluded that I'm junking them altogether, aside from a handful here or there throughout the week -- nothing like a couple ounces a day. I was eating mostly almonds and pistachios, though also hazelnuts, and sometimes almond butter. It's hard for me to remember a time since I started low-carb eating when I haven't had many nuts or seeds, but it was like that at first, and I felt better overall then, just as I do now. During my high-nuts-and-seeds period, there were times when I'd just feel tired and out of it; thankfully that's gone.
The reason seems to be the high levels of phytic acid in nuts and seeds -- it's also high in pulses, legumes, and grains, but I don't touch that junk in the first place. It interferes with the absorption of many vitamins (including A and D, crucial to the immune system) and minerals, as well as hindering the processing of amino acids (protein). Unfortunately humans lack the enzymes that would break it down. So although nuts and seeds are a lot better than grains and legumes and pulses with respect to carb count, fat profile, and so on, they still have incredibly high levels of phytic acid. It doesn't even take eating pounds of almonds to do it; even moderate levels will interfere with digestion.
Most people around the world who eat such things find ways of getting rid of the phytic acid, mostly by soaking and drying them. But I don't have the patience to soak almonds for 12 hours, dry them properly so they don't get infected, bla bla bla. It's easier to simply find something else to eat. By the way, lots of spice seeds have high levels of phytic acid, but I'm not sure how much of the whole seed finds its way into various condiments. Nevertheless, I've cut mustard out of my diet, and without feeling that I'm really missing anything.
If you're looking for ways to get some carbs into your otherwise low-carb diet, non-starchy vegetables and non-saccharine fruits are a better way to go than nuts and seeds, as they don't have much phytic acid at all.
- With Christmas celebrations just around the corner, I thought I'd pass along some impressions on which desserts have been less destructive in my low-carb experience. Desserts that are all grains and sugars are obviously the worst -- I don't know why I even bothered to eat some (gluten-free) cupcakes at my mother's birthday party two years ago, but I've never felt so sick from food unrelated to pathogens. Cakes, cookies, donuts, muffins -- all that stuff is terrible. Not only is it almost entirely made from glucose-spiking junk, but related to the above point, it's full of phytic acid from the grains that make up the dough, the peanuts if there are any of those, etc.
Next are cakes, pies, etc., with a good amount of dairy in them, such as cheesecakes, carrot cake, and so on. Fat inherently tastes good, so by working a good deal of fat into the dessert -- a lot of it saturated, no less -- we don't need there to be as much sweetness in order to feel an intense taste. There are proteins in animal milk that we don't digest, and that are probably causally related to acne breakouts (Loren Cordain reviewed evidence of the dairy-acne relationship), but it's typically nothing terrible. And as another plus, dairy lacks high levels of phytic acid.
At the top are two different groups of desserts, but they almost always go together during the holidays, so I'm lumping them together -- fruit pies and ice cream. For the reasons above, and because I'm fairly lactose intolerant anyway, I skip dairy-based ice cream (which usually has a ton more sugar) and go for the coconut milk-based ones (avoid soy, almond, and rice -- too many carbs and/or too much phytic acid). The ones by Coconut Bliss are the best because they're denser and have more fat, so they're richer than the other brands, and taste remarkably like dairy ice cream. They're sweetened with agave syrup, which is very high in fructose and will therefore overload your liver (and give you gout) if you eat too much of it, but if it's just for a dessert or two, no big deal.
With all that fat already in the dessert, you can eat a less sweet pie (and less of it). Unlike cakes or muffins, most fruit pies list the fruit itself as their first ingredient, followed by butter -- more fat! There are still some bad grains in the crust, but this is about as low as you can get your grain consumption while still enjoying holiday desserts. Fortunately, most of the fruits used in pies are among the least saccharine -- there are all sorts of berry pies, as well as cherry, apple, and peach, whereas there are no widely available banana or mango or fig pies. If you're gluten intolerant, Whole Foods makes its own gluten-free fruit pies, and they're great. Over Thanksgiving I tried cherry, apple, and peach, and they were all fantastic. Just break them apart a little, heat them in the toaster oven, and have them with some vanilla-flavored coconut milk ice cream -- delicious.
The pies are about 5 or 6 inches across at the top, and maybe 2 inches tall. If you take a quarter of one of these, that's about 40 g of net carbs for the apple, peach, and cherry pies (a bit more for the pecan one, and even more for pumpkin). Add to that a quarter of a pint of the ice cream, and that's another 16 g, so that the entire dessert would have 56 g of net carbs. Just make sure you don't eat grains, starches, or sugars elsewhere in the day and you're still doing pretty well by low-carb standards (40-60 g or less per day is the goal). Hell, even if you splurge and have two of these desserts in a day, that's slightly over 100 g of net carbs. By comparison, two of those gluten-free cupcakes I had would contain 128 g -- and would not have tasted nearly as great as half of a smallish pie and half a pint of ice cream.
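For anyone who wants to sanity-check that arithmetic, here's a minimal sketch in Python using the ballpark per-portion numbers above -- my rough estimates, not nutrition-label data.

# Rough net-carb tally for the holiday dessert described above (all figures
# are the post's own ballpark estimates).
QUARTER_PIE = 40    # g net carbs in a quarter of an apple, peach, or cherry pie
QUARTER_PINT = 16   # g net carbs in a quarter pint of coconut-milk ice cream
CUPCAKE = 64        # g net carbs per gluten-free cupcake (two came to 128 g)

one_dessert = QUARTER_PIE + QUARTER_PINT
print("one pie-and-ice-cream dessert:", one_dessert, "g")   # 56 g
print("two such desserts in a day:", 2 * one_dessert, "g")  # 112 g -- slightly over 100
print("two gluten-free cupcakes:", 2 * CUPCAKE, "g")        # 128 g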
December 13, 2010
Two major waves in the history of junk mail
Here is the prevalence of the term "junk mail" in the NYT, starting with its first occurrence in 1954 (data are in 5-year blocks, plotted at the mid-year):
I'm not surprised by the upward trend, but I didn't expect to see two distinct periods. Usage of the term surges through the '50s and first half of the '60s, but even by the second half of the '60s the increase flattens out, and there's mostly a plateau through the first half of the '80s. Since the prevalence of the term reflects people's perceptions of how bad the problem is, it looks like they'd gotten more or less used to junk mail by then.
However, the second half of the '80s sees another surge upward that began to plateau somewhere around 2000. This is another case of "obviously not due to the internet," as it preceded the internet and email, and the post-internet world shows a shallower rise in usage of the term. Either senders of junk mail started ramping up the volume, which the post office likes because it increases their revenue, or despite a mild change in volume people just got more fed up with it.
I have no recollection of what junk mail was like during the first half of the '80s, and was not even alive before then, so I have no idea what distinguishes these two phases. Anyone out there care to clue us in to what the first wave of the junk mail deluge was like?
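For anyone curious how these prevalence curves are put together, here's a minimal sketch in Python of the 5-year-block tally, assuming you've already pulled yearly hit counts for the term from the paper's archive search. The yearly numbers below are placeholders, not real NYT counts; you could divide each block's total by the number of articles published in those years if you want a rate rather than a raw count.

# Group yearly hit counts for a search term into 5-year blocks (1950-54,
# 1955-59, ...) and report each block at its middle year, the way the charts
# in these posts are plotted.
from collections import Counter

# Placeholder data: {year: number of NYT articles containing "junk mail" that year}
yearly_hits = {1954: 1, 1955: 2, 1956: 1, 1957: 3, 1961: 2, 1963: 4}

blocks = Counter()
for year, hits in yearly_hits.items():
    block_start = year - year % 5   # e.g. 1954 -> 1950, 2002 -> 2000
    blocks[block_start] += hits

for start in sorted(blocks):
    mid_year = start + 2            # the 2000-2004 block is plotted at 2002
    print(f"{start}-{start + 4} (plotted at {mid_year}): {blocks[start]} hits")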
December 11, 2010
The rise of the gay friend and its consequences
Yesterday at '80s night I saw even more disgusting evidence of the displacement of guys from a girl's social circle and gays moving in to take their place. As female attitudes toward guys came to be dominated by suspicion, fear, unease, and so on, they began to cut normal males out of their world and allow in only the non-guyish guys -- namely gays.
When this trend began in the '90s, the gay friend at first took over only the most female-typical roles of the guy he was replacing, such as talking about her relationship problems over the phone or accompanying her on shopping trips. Gradually he took over roles that were increasingly closer to the heart of a guy-girl friendship, such as going out for coffee and lunch during the day, or drinks and night-clubbing at night.
Over the past few years, I've seen the gay friends not just tagging along with the girls when they go out at night, but being more or less the only ones they dance with. Last night I saw them intrude even further onto the turf of the straight guy -- by being the only one that she would grind on (i.e. give a standing lapdance to), and the one who would pick her up and hold her around the waist while she put her legs around his sides.
Really, what's next? -- girls who only feel comfortable fucking when it's with their gay friend?
It's worth looking at the origins and spread of this phenomenon, as most Millennials will find it hard to believe that there was a time in the recent past when gays were completely off a girl's radar, and a good number of pre-Millennials have already forgotten what they've lived through, as is typical. First, the prevalence of the term "gay friends" in the newspaper of record:
The points represent 5-year blocks and are plotted at the middle year of the block, so that the 2000-2004 period is shown as a point at 2002. There are no instances of this term before the 1970s, except for some cases in the '40s and earlier when "gay" still meant "cheery" instead of "homosexual." I used the plural instead of the singular because this gave a larger sample size.
During all of the '70s and '80s, there are only 14 occurrences, or on average less than one a year, and no upward trend during these 20 years. In fact, there wasn't a single instance of the term from 1980 through '85. Suddenly during the early-mid-'90s the prevalence shoots up to 10 times the average from the '70s and '80s, and inches up a bit more during the late '90s. The 2000s saw another surge in usage, so that during the 2005-2009 period it was over 20 times as prevalent as the '70s-'80s average.
This picture confirms my earlier hunch based on the appearance of the gay friend in TV and movies. Here is a recent NYT article about some loathsome new TV show that will focus exclusively on the now-mainstream girl-gay friendship, compared to earlier shows where there may have been a single girl-gay friendship, and that may not have been very central to the plot. The reporter is correct to point out that fag hags are no longer drawn only from the dregs, but now from a wide range of the fair sex.
Another article covers several surveys on Americans' attitudes toward and interactions with gays, which have changed quite a bit even during the 2000s: from 2003 to 2010, "the proportion of people who reported having a gay friend or relative rose 10 percentage points." Since gays are not shooting up that fast in the overall population, this is not due to people having more gay relatives but rather to people searching out or welcoming in gay friends.
As I mentioned earlier, the gay friend trend is part of a larger shift toward extreme sex segregation after the peak of the violence rate in 1992, and in its broad contours mirrors the sex segregation of the 1950s, '40s, and even the later '30s, which were another period of falling crime. Boys and girls want to play with each other more when the violence level swings upward, as during the '60s through the '80s, as well as the first three decades of the 20th C., peaking during the Roaring Twenties.
Since male desire to hang around females is fairly stable, the real change shows up in female preferences. During dangerous times, they want to hang out more with males for a variety of reasons. They are in greater need of protectors and avengers, and female friends aren't going to do any good there (and neither are gays in general). Also, they tend to be more boy-crazy and promiscuous, for reasons described elsewhere, and having more guys in your social circle makes that easier to act on.
Moreover, when times are more dangerous, you expect to live a shorter life, so you feel the impulse to grow up earlier rather than live life as a perpetual toddler. I remember first being seduced into an adventure of "I'll show you mine if you show me yours" by a girl who was hiding under a tablecloth-protected workspace during naptime at a daycare center -- when I was only 3 or 4. Ah, the early-mid-'80s... sure won't find a toddler girl planning that out today.
I still vividly recall my first school dance in the fall of 1992, when I was 11 or perhaps 12. Sure at first there was the typical picture of "girls along this wall and boys along the opposite wall" of the cafeteria, but it didn't last more than 10 or 15 minutes. Someone broke out and everyone else followed. We were barely entering puberty then, yet we were behaving more wildly than the infantilized college students I observe and interact with every week at '80s night. In particular, the girls back then were horny as billy goats and wanted so bad to dance up-close with boys.
Periodically the adult monitors waded through the sea of sweaty sixth-grade bodies to break us apart and insist that we maintain a certain distance with our dance partner. Today those busybody killjoys would be out of work, since most girls don't want to dance at all, let alone face to face and belly to belly, preferring instead to jiggle their shit in order to steal the spotlight without having to touch or be touched by boys, a la Fergie. (Grinding does not count, as I've explained in detail elsewhere. There's no connection or tendency to stay together; rather it's fleeting and prone to the girl just up and walking away without even making eye-contact.)
We were also more rebellious back then -- even when we were split up, we snapped right back together when the old lady had moved on. Compare that to the girl in the club today who's dancing with a boy, gets pulled away by her cockblocking friends, and then just goes along with the abduction rather than tell them to get a life of their own as she goes back to her partner.
And greater sex segregation is just one piece of the larger picture of girls becoming more boring during falling-crime times, namely as a result of not hanging around the more wild and exciting sex -- guys. Fag hags try to rationalize their snore-inducing friendships with gays by imagining that these relationships are transgressive and liberating. Back on planet Earth, the gay friend is the ultimate desexualized male, and hence poses no threat to her safe cocoon at all. He plays the role that the eunuch used to play with harem girls, although his lack of interest in girls was forced upon him then, and these days the female's cloistering is of her own choice. He has a faint protector role, nothing on the order of a knight who has to defend the castle, and spends most of the time in utterly asexual activities and conversation. Today's girls who like boys who like boys are no more shackle-breaking or square-shocking than a slave girl imprisoned under the guard of a ball-less watchman.
Shoot, I saw more carnivalesque behavior among the elementary school girls who joined in with the boys for a game of duck-duck-goose or state tag. like, omigod, you mean i have to touch a boy to tag him out, and he might touch me to tag me out???!! well... i guess so -- i mean, it does look FUN! On the bright side, once that late-'80s level of violence in the wider society returns, we'll see the rebirth of cool chicks as they start ditching their gay pet/friend and hanging out with the boys again.
When this trend began in the '90s, the gay friend at first took over only the most female-typical roles of the guy he was replacing, such as talking about her relationship problems over the phone or accompanying her on shopping trips. Gradually he took over roles that were increasingly closer to the heart of a guy-girl friendship, such as going out for coffee and lunch during the day, or drinks and night-clubbing at night.
Over the past few years, I've seen the gay friends not just tagging along with the girls when they go out at night, but being more or less the only ones they dance with. Last night I saw them intrude even further onto the turf of the straight guy -- by being the only one that she would grind on (i.e. give a standing lapdance to), and the one who would pick her up and hold her around the waist while she put her legs around his sides.
Really, what's next? -- girls who only feel comfortable fucking when it's with their gay friend?
It's worth looking at the origins and spread of this phenomenon, as most Millennials will find it hard to believe that there was a time in the recent past when gays were completely off a girl's radar, and a good deal of pre-Millennials have already forgotten what they've lived through, as is typical. First, the prevalence of the term "gay friends" in the newspaper of record:
The points represent 5-year blocks and are plotted at the middle year of each block, so that the 2000-2004 period is shown as a point at 2002. There are no instances of this term before the 1970s, except for some cases in the '40s and earlier when "gay" still meant "cheery" instead of "homosexual." I used the plural instead of the singular because this gave a larger sample size.
During all of the '70s and '80s, there are only 14 occurrences, or on average less than one a year, and no upward trend during these 20 years. In fact, there wasn't a single instance of the term from 1980 through '85. Suddenly during the early-mid-'90s the prevalence shoots up to 10 times the average from the '70s and '80s, and inches up a bit more during the late '90s. The 2000s saw another surge in usage, so that during the 2005-2009 period it was over 20 times as prevalent as the '70s-'80s average.
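If anyone wants to redo this kind of count themselves, the bookkeeping is trivial -- here's a minimal Python sketch of it, with made-up yearly hit counts standing in for the actual archive numbers (swap in the real ones from whatever search you run):

from collections import defaultdict

# Placeholder yearly hit counts for the term "gay friends" -- not the real archive data.
yearly_counts = {1991: 2, 1992: 3, 1993: 5, 1994: 4, 1995: 6, 1996: 7, 1997: 5}

def five_year_blocks(counts):
    # Sum yearly hits into 5-year blocks, keyed by the block's middle year,
    # so that 2000-2004 would show up as a single point at 2002.
    blocks = defaultdict(int)
    for year, n in counts.items():
        start = (year // 5) * 5
        blocks[start + 2] += n
    return dict(sorted(blocks.items()))

print(five_year_blocks(yearly_counts))
# {1992: 14, 1997: 18} for the placeholder numbers above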
This picture confirms my earlier hunch based on the appearance of the gay friend in TV and movies. Here is a recent NYT article about some loathsome new TV show that will focus exclusively on the now-mainstream girl-gay friendship, compared to earlier shows where there may have been a single girl-gay friendship, and that may not have been very central to the plot. The reporter is correct to point out that fag hags are no longer drawn only from the dregs, but now from a wide range of the fair sex.
Another article covers several surveys on Americans' attitudes toward and interactions with gays, which have changed quite a bit even during the 2000s: from 2003 to 2010, "the proportion of people who reported having a gay friend or relative rose 10 percentage points." Since the share of gays in the overall population is not shooting up that fast, this is not due to people having more gay relatives but rather to people searching out or welcoming in gay friends.
As I mentioned earlier, the gay friend trend is part of a larger shift toward extreme sex segregation after the peak of the violence rate in 1992, and in its broad contours mirrors the sex segregation of the 1950s, '40s, and even the later '30s, which were another period of falling crime. Boys and girls want to play with each other more when the violence level swings upward, as during the '60s through the '80s, as well as the first three decades of the 20th C., peaking during the Roaring Twenties.
Since male desire to hang around females is fairly stable, the real change shows up in female preferences. During dangerous times, they want to hang out more with males for a variety of reasons. They are in greater need of protectors and avengers, and female friends aren't going to do any good there (and neither are gays in general). Also, they tend to be more boy-crazy and promiscuous, for reasons described elsewhere, and having more guys in your social circle makes that easier to act on.
Moreover, when times are more dangerous, you expect to live a shorter life, so you feel the impulse to grow up earlier rather than live life as a perpetual toddler. I remember first being seduced into an adventure of "I'll show you mine if you show me yours" by a girl who was hiding under a tablecloth-protected workspace during naptime at a daycare center -- when I was only 3 or 4. Ah, the early-mid-'80s... sure won't find a toddler girl planning that out today.
I still vividly recall my first school dance in the fall of 1992, when I was 11 or perhaps 12. Sure at first there was the typical picture of "girls along this wall and boys along the opposite wall" of the cafeteria, but it didn't last more than 10 or 15 minutes. Someone broke out and everyone else followed. We were barely entering puberty then, yet we were behaving more wildly than the infantilized college students I observe and interact with every week at '80s night. In particular, the girls back then were horny as billy goats and wanted so bad to dance up-close with boys.
Periodically the adult monitors waded through the sea of sweaty sixth-grade bodies to break us apart and insist that we maintain a certain distance with our dance partner. Today those busybody killjoys would be out of work, since most girls don't want to dance at all, let alone face to face and belly to belly, preferring instead to jiggle their shit in order to steal the spotlight without having to touch or be touched by boys, a la Fergie. (Grinding does not count, as I've explained in detail elsewhere. There's no connection or tendency to stay together; rather it's fleeting and prone to the girl just up and walking away without even making eye-contact.)
We were also more rebellious back then -- even when we were split up, we snapped right back together when the old lady had moved on. Compare that to the girl in the club today who's dancing with a boy, gets pulled away by her cockblocking friends, and then just goes along with the abduction rather than tell them to get a life of their own as she goes back to her partner.
And greater sex segregation is just one piece of the larger picture of girls becoming more boring during falling-crime times, namely as a result of not hanging around the more wild and exciting sex -- guys. Fag hags try to rationalize their snore-inducing friendships with gays by imagining that these relationships are transgressive and liberating. Back on planet Earth, the gay friend is the ultimate desexualized male, and hence poses no threat to her safe cocoon at all. He plays the role that the eunuch used to play with harem girls, although his lack of interest in girls was forced upon him then, and these days the female's cloistering is of her own choice. He has a faint protector role, nothing on the order of a knight who has to defend the castle, and spends most of the time in utterly asexual activities and conversation. Today's girls who like boys who like boys are no more shackle-breaking or square-shocking than a slave girl imprisoned under the guard of a ball-less watchman.
Shoot, I saw more carnivalesque behavior among the elementary school girls who joined in with the boys for a game of duck-duck-goose or state tag. like, omigod, you mean i have to touch a boy to tag him out, and he might touch me to tag me out???!! well... i guess so -- i mean, it does look FUN! On the bright side, once that late-'80s level of violence in the wider society returns, we'll see the rebirth of cool chicks as they start ditching their gay pet/friend and hanging out with the boys again.
December 7, 2010
Girls more likely to dig coming of age movies, even ones about boys
I was just fooling around on imdb.com and noticed that coming of age movies just about always get higher ratings from females than from males. That's not an effect of girls rating everything higher, as though they were merely gentler when judging. And it's not just for movies that they can relate to directly through a female lead character, like My Girl or Labyrinth, or with mixed-sex casts like Dirty Dancing or The Goonies. Even ones focusing only on boys get higher ratings -- including ones with non-dreamy and barely pubescent boys like Stand By Me, The Sandlot, and Lord of the Flies (! -- though only by a hair), as well as the ones packed with teenage heartthrobs like The Outsiders.
Time was when a boy dreamed of becoming a man, and couldn't devour enough of these rite of passage stories. Yet these days girls are more into them. I think this is due to the institutional squelching of male-on-male violence. That takes away a lot of the anxiety, as well as the appeal, of becoming a man -- you no longer expect to physically defend yourself, your kin, and your friends, since there's a policeman, a bouncer, or a school security guard who's supposed to take care of that business for you. What's left for you to do in the job description of man's work? Get a job and provide for your kids, perhaps, but that might as well be another lifetime when you're a teenager. It's no wonder adolescent boys are in such an existential drift, except during the previous wave of crime in the '60s through the '80s, when they were suddenly needed again as protectors and avengers. Now it's back to studying hard and dorking around with gadgets, like during the 1950s.
Girls, on the other hand, have no protection from society's institutions when their enemies plot against them. It's mostly verbal, not physical, and it's typically done behind their back in the hallway or in secret in the girls' locker room, not like a brawl that erupts in the middle of the cafeteria. It's just about impossible for authority figures to locate the attack, let alone contain or ameliorate the harm once it's begun -- are they going to erase the memories of every girl who's heard and passed along the rumor that so-and-so really slutted it up last Saturday and slept with two guys in one night?
So while modern institutions may have tamed male aggression a good deal, females still live in as much of a dog-eat-dog world as they always have. It's only natural then that they'd get sucked into a good narrative about rites of passage, even ones about violent boys that are outside their first-hand experiences.
December 6, 2010
Tame-rated movies flourish during safer times
As a follow-up to last week's post about the decline of nudity in movies tracking the overall decline in cultural wildness (including the crime rate), let's look at what ratings the top 10 movies at the box office have had over the same time period:
Back before the sexual counter-revolution of the early 1990s, there were three years where an X-rated movie broke into the top 10: 1969 with Midnight Cowboy, 1971 with A Clockwork Orange, and 1973 with Last Tango in Paris. Chances of that happening in the age of Barney the Dinosaur and Harry Potter? Zero. (1973 was also noteworthy for the #1 spot being held by a horror movie, The Exorcist.) At the other extreme, G-rated movies don't seem to show a strong pattern over time, probably because their target audience can't voice an opinion about what they want to see. There's some roughly constant share of box office dollars that parents shell out to take their toddlers to the movies, whether they like it or not.
The two big changes are the decline of R-rated movies starting in the late '80s or early '90s, and PG-13 replacing PG as the typical not-quite-R rating. The second change isn't hard to understand, since a lot of those earlier PG-rated movies have a decent amount of blood, gore, "sexual situations" as they're called, and so on. Once a more fine-grained rating came out, some movies that would have been rated PG before remained that way, but a good deal of others got PG-13.
The disappearance of R-rated movies is apparent already by the late '80s / early '90s but is overwhelmingly clear from 1995 onward, when most years have 2 or fewer R movies in the top 10, a level that only a handful of rising-crime years ever sank to. Indeed, during the falling-crime years after 1992, there are four years where not even a single blockbuster was rated R (let alone X) -- 2002, 2005, 2006, and 2008.
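For anyone who wants to re-run the tally, it's nothing fancier than counting ratings within each year's top 10. A quick sketch, with invented rating lists standing in for the real box-office charts (look up the actual yearly top-10 lists to do it for real):

from collections import Counter

# Made-up rating lists for two illustrative years -- placeholders, not the real top-10 data.
top10_ratings = {
    1973: ["X", "R", "R", "PG", "R", "G", "PG", "R", "R", "PG"],
    2006: ["PG-13", "PG-13", "PG", "PG-13", "G", "PG-13", "PG", "PG-13", "PG", "PG-13"],
}

for year, ratings in sorted(top10_ratings.items()):
    tally = Counter(ratings)
    print(year, dict(tally), "| R or X among the top 10:", tally["R"] + tally["X"])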
This vanishing of R and X-rated movies, and their replacement with PG-13 movies, is another example of greater cultural homogeneity during safer times, which I illustrated in the post below about hair styles. These days, movies are so neutered in what they can show because the audience's tastes have shifted so strongly in a "don't go there" direction. When the world gets more dangerous, people relax that disapproval: the order of the universe looks like it's coming undone, and how else can we figure a way through it unless we explore a lot more of the cultural frontier to give us some ideas? Trial-and-error can't work if you aren't willing to experiment in the first place.
Also, when the future looks more violent, you don't expect to live as long, so you discount the harm to your reputation from going to see and raving about R or X-rated movies. When the world gets safer, you care more about your long-term reputation and start worrying more about what other people will think if you give them reason to believe you're the type of person who likes R-rated movies more than the socially approved PG-13 alternatives.
It's too bad that there was never a rating board for plays, novels, or poetry, or else we could have done a much deeper historical survey. Still, if movie versions were made that the author would approve of, you can bet there would've been a lot more R and X-rated movies during the previous three major periods of rising crime -- the later 14th C (just look at the Pasolini adaptations of The Canterbury Tales and Decameron), the Elizabethan/Jacobean period of ca. 1580 to 1630, and the Gothic/Romantic period of ca. 1780 to 1830. They've got it all -- body counts, violating sexual taboos, gory or grotesque characters, demonic influences on mankind, you name it -- not to mention a suite of forces opposed to these, creating a more apocalyptic atmosphere. And on the other hand, you would have seen more PG or at most PG-13 movies from the falling-crime periods in between, exemplified by Renaissance Humanism, the Age of Reason, and the Victorian era.
Based on the universal popularity of the Bible and Shakespeare, it seems clear which type of movies will endure as classics.
December 3, 2010
Variance in hairstyles as a measure of cultural conformity
In the comments to the post on nudity in movies, I noted that of all the things that girls are willing to change about their appearance when they play dress-up for '80s night, their hairstyle is off limits. Maybe they'll put it in a ponytail to one side, but that's it. In three years of almost weekly attendance, I've seen a girl with crimped hair less than 5 times. They just won't alter the length, how straight vs. wavy it is, or how close to the scalp vs. how mane-like it is. Of all ethnic markers, hairstyle must be one of the most inviolable.
Just to remind those who were there, or bring it to the attention of those who weren't, here's what we imagine when the hairstyles of the wild times come to mind. They're all examples of Big Hair, whether from the '60s or the very early '90s:
Then that died off and was replaced by more moderate length, super-straight, and hugging-the-scalp hairdos during the mid-late '90s (already evident by the time Clueless came out in 1995) and the 2000s (one of my tutorees around 2006 said that she and her friends, before going out for the night, were going to make their hair "super super straight... like, Mean Girls straight").
So my first thought was that there's something about Big vs. Small Hair that responds to how wild the culture is. But that would imply that the '90s and 2000s should have been known for very short hair, and they weren't. In fact, when you think of cropped hair, you also think of the '60s through the '80s:
You also saw this split between the rising-crime era of the Jazz Age, when bobbed hair exploded in popularity alongside more luxuriant styles, vs. the mid-'30s through late '50s period of plummeting crime.
So how do hairstyles respond to the level of wildness in society? There's no strong shift toward either Big or Small Hair on average. Rather, the variance increases when crime soars and people are more wild. The average girl may have a pretty similar hairstyle during dangerous times, but there are a lot fewer people at that average level and a lot more who've ventured out into the extremes of tent-like hair as well as boyishly cropped hair.
This is one example of greater cultural and social conformity when times are getting safer, the chief example there being most of the 1950s, and this decade will probably be just about as bad. When the world gets more dangerous, people discount the future more, and part of that means not caring as much about how their present actions will affect their reputation down the line, in general. Being less constrained by worries about what everyone else will think about them, people let loose, branch out, and do their own thing more in dangerous times.
In safer times, people do get more varied in some areas, but this is not due to a shrinking concern for what others think -- rather, it is due to the opposite, whereby sheltered and complacent people spend more effort trying to broadcast their epic authentic uniqueness in an attempt to climb one rung higher on the status ladder. If they truly did care less than before about what others thought of their behavior, then they'd be more promiscuous. But as I've pointed out forever, promiscuity surges during dangerous times, the logic of which I elaborated on in a post below.
It would be worth quantifying this effect. You could take the covers of Vogue, or Playboy playmates, or something, and measure the volume of hair on girls' heads. Lump girls of a single year of Vogue (or whatever) into one group, and then find the variance in their hair volume. Plot that over time, alongside the homicide rate, and see how close the impression comes to reality.
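To make that concrete, here's roughly what the computation would look like once you'd scored hair volume for each year's cover girls. The volume scores and homicide rates below are just illustrative fill-ins, not measurements:

from statistics import pvariance

# Invented hair-volume scores (arbitrary units) for each year's covers, plus stand-in homicide rates.
hair_volume_by_year = {
    1985: [2.1, 0.4, 3.0, 1.8, 0.6],
    2005: [1.1, 1.2, 1.0, 1.3, 1.1],
}
homicide_rate = {1985: 7.9, 2005: 5.6}   # per 100,000; replace with the official series

for year in sorted(hair_volume_by_year):
    print(year, "hair-volume variance:", round(pvariance(hair_volume_by_year[year]), 3),
          "| homicide rate:", homicide_rate[year])

The prediction is that the variance column should swell and shrink along with the homicide column, even if the average volume barely budges.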
December 1, 2010
When more choices lead to less variety
From a recent NPD press release about Americans' changing eating habits over the past 30 years: "The average number of food items used per meal decreased from 4.44 in the 1980s to 3.5 in 2010."
Anyone who remembers supermarkets from the 1980s or before vs. the '90s, especially the mid-'90s, and later, will have observed an explosion in the choices available to the average American. There was no way that in 1985 you were going to find sun-dried tomatoes, olive oils from at least five different countries, or Thai green curry paste at the typical American supermarket. Now those things and so many more are commonplace. The only exceptions are food items for kids, as the '80s were the golden age for kids' culture in general. I walk through the breakfast cereal or candy aisles today and see probably 1/4 as many product lines as I would've seen when I was in elementary school.
The NPD story is that people value convenience and time-saving measures much more now than 30 years ago, as we're so much more over-burdened today. I wonder about that, though. Adults in the 1980s may not have been as crushed by boring or rat-race activities as they have been for the past 15 to 20 years, but that doesn't mean they were just sitting around with lots of idle time. Strange as it may seem in the era of helicopter parents, grown-ups used to have a busy social life of their own in the '80s and before. When new wave music exploded circa 1983, my mother went out to dance clubs every weekend, dragging my father along whenever she could budge him into dancing, and leaving me and my two younger brothers in the care of a now-vanished person called the babysitter.
So, a larger portion of adults' schedules may be taken up by stressing-out activities, but how much time they have to fix meals can't have changed that much. The shift toward convenience and therefore less variety in meal ingredients doesn't have to do with having less free time but with having so much more stress, which fixing a more elaborate meal would only compound. Back when adults were enjoying more carefree, though no less "full" schedules, adding another element or two to their dinner wasn't going to be the straw that broke the camel's back that night and send them into a postal rage the next day.
This shift is an extension of the trend over thousands of years whereby people who hunt and gather eat a far wider variety of foods than do people who are settled and have a cornucopia of items available for purchase through market exchanges.
The vast spectrum of choices at the supermarket is misleading when it comes to what a single person eats. Most of that diversity is due to variety between groups of people, and not at all to the variety that a single person in one of those groups eats. For example, if a supermarket has lots of animal products and lots of grain and plant products, you might conclude that the average consumer is an omnivore. But it could also be that there are two sub-groups of customers -- carnivores and vegans -- who have much less variety in their own diets than an omnivore does.
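Here's a toy version of that carnivore-vs-vegan point, just to show how store-level variety and per-shopper variety come apart (the shoppers and items are invented for the example):

# Two narrow-diet shoppers whose baskets never overlap.
carnivore = {"beef", "pork", "chicken", "eggs"}
vegan = {"lentils", "rice", "kale", "tofu"}

store_shelf = carnivore | vegan          # what the supermarket appears to offer
per_person = [len(carnivore), len(vegan)]

print("distinct items stocked by the store:", len(store_shelf))                   # 8
print("average distinct items per shopper:", sum(per_person) / len(per_person))   # 4.0

The shelf looks twice as varied as any actual diet walking out the door.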
That's just what we see in today's super-stocked supermarket. There are a bunch of food tribes that have little variety in their own diets, but they all shop at the same supermarket. So there's the gluten-free area, the vegan area, the Hispanic area, the Mediterranean area, the frat boy 7-11 diet area, the TV dinner area, etc. Any individual who shops there has a very limited diet (one of the above), rather than a diet that draws from all of those areas.
The same pattern shows up wherever hyper-mega-markets have replaced lots of smaller specialty stores. There's an incomprehensibly large number of choices of songs on iTunes, but the average music listener today takes in an extremely narrow "diet" of music. It's just that there are a billion different narrowly focused tribes, all of whom shop at iTunes. Same with the variety of books at Amazon and the more blinkered reading culture, or TV channels and programs and the more homogeneous range of shows that viewers watch these days.
November 30, 2010
Flesh in movies tracks the crime rate
Starting around 1992, young Americans began killing off the sexual revolution that reigned during the '60s through the '80s -- by waiting much longer to get started, having fewer partners, using condoms more often, and so on. These are just a few of the starkest examples of how sexual wildness tracks the level of violence in society, the logic of which I explained some posts below.
Earlier we saw reports from lit fic observers that interest in sex shriveled up among the novels of the superstars of the 1990s and 2000s, especially compared to the blockbuster writers of the '60s through the '80s, for whom no amount of sex was gratuitous. An even better place to look for the cultural reflection of the birth and death of sexual liberation is movies. They have to appeal to a much broader audience; there are a lot fewer of them than books, so it's easier to study something close to the whole universe of "popular movies"; and busybody parents have already put together a wealth of information about whether a movie shows skin or not.
To quantify this, I took the top 10 movies at the box office for a year -- that's surely a good measure of the movie's resonance with what audiences wanted. I then checked the "parental advisory" section of their entry at imdb.com (or in a few cases where the movie was more obscure, at other "protect the kids" online resources for parents). If there was at least one scene of partial nudity -- even an exposed nipple -- that lasted longer than a moment, I counted it as having nudity; otherwise, not. So this isn't a measure of really raunchy stuff. Because it's such an easy threshold to clear, if hardly any movies in some year manage to clear it, then we are very safe in categorizing that year as a prudish one.
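In other words, the whole dataset boils down to one binary flag per top-10 movie per year, summed into a yearly count. A bare-bones sketch, with placeholder flags rather than the real ones pulled from the parental guides:

# 1 = at least one lingering scene of partial nudity, 0 = none (placeholder flags).
has_nudity = {
    1984: [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],
    2004: [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
}

yearly_total = {year: sum(flags) for year, flags in has_nudity.items()}
print(yearly_total)   # {1984: 5, 2004: 1} for the placeholders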
The first full year that nudity in movies was allowed was 1969, after an earlier ban was reversed in 1968. The ban, also known as the Hays Code, was only enforced starting in 1934 -- the very first year of falling crime that would last through the '50s, after the earlier surge of crime from at least 1900 through the Roaring Twenties and the early '30s. As an aside, and probably something I'll flesh out more later, these bans typically only occur when they are not needed -- when the people are themselves already becoming tamer. They are expressive of the more prudish zeitgeist, not an attempt to deal with a real problem.
Here is the change over time:
The change is so incredibly stark that you could probably go just by your impressions -- though that's only assuming you remember anything, which most people don't. We hear so much nonsense about how skankified and sexually perverted the culture keeps getting, but try to think of the last time you saw some nice T&A in a mainstream movie. Back in the late '70s and '80s, every movie was also partly a softcore porn flick. It wasn't just the screwball teen comedies like Porky's -- even mainstream comedies that were actually funny, like Caddyshack or Stripes or Fast Times at Ridgemont High or Beverly Hills Cop, show plenty of skin. Same with dramas like Fatal Attraction. Ditto for action movies, like when Conan the Barbarian and a witch get it on, or when we see the actual sex act where Sarah Connor conceives the future savior of humankind with Kyle Reese, or when Dirty Harry chases a criminal across some rooftops and they happen to crash through into an orgy with about a dozen people being filmed for a porno. And it's hard to think of a thriller/horror movie from that period that does not have a nude scene.
There are two clear periods: a wilder period from 1969 through 1988, where the mean, median, and mode are all 4; and a tamer one from 1989 through 2009, where the mean, median, and mode are all 1. Although each year's movies are only a handful of draws from the larger distribution of culture, creating year-to-year variation, it's clear that sometime in the late '80s or early '90s the main tendency of that distribution shifted sharply in the covered-up direction. To my eye, the year-to-year variation doesn't allow us to pinpoint whether the decline of T&A in movies slightly preceded or occurred right alongside the fall in violence and promiscuity, but it's obvious that the two trends are closely related.
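Summarizing the two periods is equally mechanical: take the yearly counts, split them at 1988/89, and compute the usual summary statistics. A sketch, again with invented yearly counts standing in for the full 1969-2009 series:

from statistics import mean, median, mode

# Hypothetical yearly nudity counts -- stand-ins for the real series.
yearly_total = {1970: 4, 1975: 5, 1980: 4, 1985: 3, 1995: 1, 2000: 1, 2005: 0, 2009: 1}

def summarize(years):
    vals = [yearly_total[y] for y in years if y in yearly_total]
    return round(mean(vals), 2), median(vals), mode(vals)

print("1969-1988 (mean, median, mode):", summarize(range(1969, 1989)))
print("1989-2009 (mean, median, mode):", summarize(range(1989, 2010)))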
One thing is certain -- the internet has nothing to do with it. The internet only became widespread among the audiences that go to movies around 1994. I remember that very clearly, and there was no porn around. Free and easy-to-get internet porn didn't show up until the late '90s (the first year I recall it being a phenomenon was 2000, but I was probably late to the trend). Nudity in one domain of entertainment or media is not a substitute for nudity in some other domain, as this idea assumes (internet nudity replacing movie nudity). Rather, they are complements -- they all feed off of each other and contribute to a larger culture of sexual liberation. That's why in the heyday of the sexual revolution, there was nudity not only in movies but also in lit fic novels, on album covers, in magazines, and anywhere else they could have shown it.
Similarly, once the sexual counter-revolution began in the early-mid 1990s, nudity vanished from all of those domains. We can't use the internet in this comparison over time because it did not exist during rising-crime times. But can you imagine what people would have uploaded to YouTube in 1977? Or what percent of all internet traffic would have been for porn in 1984? If you think kids today post sexualized photos on social networks, you can guess what it would have looked like if people at Studio 54 or Danceteria had had Facebook or Flickr to show their friends what they were up to last weekend (or what they saw at any rate). I've earlier noted the death of streaking during the falling-crime times. They captured more images of streakers in the '70s and '80s without the benefit of more modern technology because people were actually streaking back then, and have not since.
It's harder to get a feel for, but I think even adolescent guys' bedrooms aren't saturated with nudity like they used to be. I'm just thinking of every YouTube video I've ever watched that was shot in someone's bedroom, and I don't ever recall seeing one where there was a poster of a girl showing her boobs, let alone posing nude. But rewind to Jeff Spicoli's room, and it's covered in pages ripped out of Playboy or Penthouse. During the '90s, this practice was fading but it was still prevalent enough on an absolute level for me to remember it (at least the early-mid '90s anyway, not so much after that). Even if it wasn't a Playboy centerfold, maybe it was Kathy Ireland from Sports Illustrated or the Janet Jackson cover of Rolling Stone.
Maybe someone with more free time can go through all the teen screwball comedies over the years and rate them for how much square footage of the characters' walls are taken up by pictures of girls. I don't recall any in American Pie or Superbad, but I wasn't paying attention for it either. Plus I haven't bothered with a lot of the dork squad comedies like the Harold and Kumar movies. My impression is that the same pattern seen above would hold up. Millennial guys are too busy using their computer, TV, internet, and cell phone to play video games -- no time left to think about girls. And I think the posters on their walls would reflect that too -- more likely to see ones about video games than half-naked chicks.
The final reason that the internet or technology in general has nothing to do with these changes up and down is that we see the same pattern over historical time. When violence was surging in the 14th C., there was an obsession with explicit and raunchy sex themes in even high literature like the Canterbury Tales or the Decameron. Ditto the Elizabethan-Jacobean period, the Romantic-Gothic period, and the lesser crime wave of the early 20th C. These cultures are not only more interested in the act of sex -- the look, the feel, everything -- but they also explore a lot more, I hate to use the word, "transgressive" sexual themes. The clearest example is incest, perhaps the ultimate sexual taboo: see Hamlet, The Duchess of Malfi, Vathek and the Episodes, Twin Peaks, among many others.
The periods in between these are falling-crime times, and they barely touch these topics at all. There seems to be a greater treatment of bodily hygiene, STDs, and the unhealthiness of masturbation, but not of wild sexuality per se. People sheltered from the sublime have more important Matters of Exquisite Taste to attend to.
November 28, 2010
The Expendables, a case study in how far movies have devolved
Well, you can imagine how I would place this review into the larger context of the wussification of the culture after the 1992 peak in the crime rate, and how action movies differ between dangerous vs. safe times, so I'll cut to the chase.
This has to be one of the worst movies ever made, as in not even watchable. The errors are so basic that it's hard to believe it got made, let alone that audiences liked it enough for it to get a 7/10 rating at imdb.com.
Most people don't realize how cultural cycles work. They think that after the glory days of some phenomenon, things will stall out and it won't innovate anymore. Shoot, I'd be happy with stagnation! It's much worse, though -- things actually devolve, losing the wisdom that had been accumulated over years or decades. Given how apparently simplistic the conventions of the '80s action movies were, you'd think it would be impossible for movie-makers to forget them -- but The Expendables proves that even simple wisdom is easily lost. Here are the major mistakes and some no-brainer solutions.
- None of the good guys die. To make the audience sympathize with the main hero, they have to feel that he's in real danger, and nothing signals that like seeing your fellows drop like flies. If everyone around him is doing just fine, then his environment isn't threatening at all. Instead, we feel like we're watching Stallone play a video game with his buddies online -- and not a hard game like Contra where you get your ass kicked, but one of these newer games for pussies, where after taking a lot of damage you merely hide in the corner and your health automatically recovers. Solution: have the bad guys pick off his buddies one at a time, leaving at most three, though perhaps none. See Predator, Aliens, Rambo II, etc.
At the very least, a lot of innocent people have to die in order for us to feel that the world they've gone into is dangerous. In Die Hard, most of those killed are not the official good guys but innocent bystanders at an office party. They can't be faceless (one death is a tragedy, a million is a statistic, or however that goes), though, and the makers of Die Hard make sure to humanize and personalize the individuals who are about to be killed. The Expendables didn't even show the up-close effects of a reign of terror in a Latin American dictatorship, which should not have been too hard to pull off.
- The good guys are mercenaries who never get chastised. The audience wants the heroes' action to serve some larger moral purpose, and mercenaries never meet that requirement unless they undergo a transformation as a result of the mission, becoming a true band of brothers. Otherwise, as with The Expendables, we never get the sense that they're devoted to one another. Solution: give them a real danger to run into, which will force them to adopt a strong Us vs. Them mindset. The soldiers in Predator and Aliens are technically in a state military, but they come off as jaded and complacent mercenaries -- until the Predator and the aliens start picking them off. That wakes them right up and makes them stick together to serve the larger purpose of preserving Us and exterminating the evil Them.
- The girl's role is pointless. None of the mercenaries have any reason to be connected to her, so it feels totally bogus when Stallone goes back to rescue her. The real connection she has is to her kin and especially male kin -- like her father, the dictator. The writers do dip their toes into this obviously better storyline, by having him speak out against his corporate master about her torture, but they abandon it because it seems too Old Testament or Shakespearean and therefore gripping. Solution: they should have junked the entire story about the mercenaries and made it a modern-day Faust legend set in Central America. An idealistic revolutionary makes a Faustian bargain with an American drug lord, whereby he'll rule as puppet dictator in return for allowing the drug lord to run his business there. The dictator then pays for it with the life (or at least the welfare) of his only child, who inherits the zealous idealism of her father and stirs up a group of rebels against the American invaders. Haunted by her ghost, and seeing what a foolish pact with the devil he's made, he plots to avenge her death, rape, torture, or whatever her punishment was, but is himself slain in the act. The highest ranking male in the group of rebels who'd been stirred up by the daughter then assumes the new dictator role (still no democracy or cheery ending), vowing not to let the American drug lords do business there anymore.
The other solution is to keep the main story about mercenaries but have a lengthy courtship and mating relationship between Stallone and the general's daughter, as lengthy as could be squeezed into a short time anyway. Even better -- he gets her pregnant. Now he has a real motive to watch out for her, like with Ko in Rambo II. Also as in that movie, the girl has to die to give him a morally righteous motive to kill the head villain. At the very least she must be kidnapped, like the wife in Die Hard or the teenage daughter in Lethal Weapon.
These are just the three major mistakes, but they're so glaring and so easily corrected that further discussion would be pointless. Still, I'll add for the record how badly the so-called special effects of the past 10 to 15 years have failed to pull the audience in. Maybe in a million years, CGI blood and fire will look more like the real thing than real-life special effects do, but not now. Even watered-down ketchup would look better than video game blood. And do they not have stuntmen in Hollywood anymore who set themselves on fire? With as much money as they blew on The Expendables, you'd figure they could at least hire a real person to get set on fire. Jesus, A Nightmare on Elm Street had that -- including a long shot where he climbs and falls down a set of basement stairs -- and that was made on a budget over 25 years ago.
By the way, this shows just how much the story does matter even in a blow-shit-up action movie. Terminator 3 and The Expendables are not even watchable, while Rambo II and Aliens are a thrill. It may not take the greatest story, but it does need to be there. The pathetic excuse that "testosterone-fueled action flicks don't need a story" is no better than the Modernist pretension that being able to paint or tell stories didn't matter, that the makers and consumers are like so above those petty concerns.
I went into seeing this movie knowing how strong the zeitgeist determines the quality of the movie, album, or whatever, but I didn't expect that it would go wrong in so many places so badly and so foolishly. Even the cheesiest, low-ranking action movies from the '80s, such as Kickboxer, outscore the most popular blockbusters of the safe times on the above measures, let alone how these would fare against the best ones from dangerous times like Dirty Harry, Rambo II, Die Hard, Lethal Weapon, Predator, Aliens, etc.
My brother, who received this movie from Netflix during Thanksgiving weekend and who I watched it with, always gives me a ribbing about preferring older movies (ones from dangerous times). But it's never hard to point out that this is because those movies are better, and to give him a ribbing back about how old greats beat new garbage. At that point, you either concede the point or admit that you're just following fashion even when it leads in degenerate direction.
This has to be one of the worst movies ever made, as in not even watchable. The errors are so basic that it's hard to believe it got made, let alone that audiences liked it enough for it to get a 7/10 rating at imdb.com.
Most people don't realize how cultural cycles work. They think that after the glory days of some phenomenon, things will stall out and it won't innovate anymore. Shoot, I'd be happy with stagnation! It's much worse, though -- things actually devolve, losing the wisdom that had been accumulated over years or decades. Given how apparently simplistic the conventions of the '80s action movies were, you'd think it would be impossible for movie-makers to forget them -- but The Expendables proves that even simple wisdom is easily lost. Here are the major mistakes and some no-brainer solutions.
- None of the good guys die. To make the audience sympathize with the main hero, they have to feel that he's in real danger, and nothing signals that like seeing your fellows drop like flies. If everyone around him is doing just fine, then his environment isn't threatening at all. Instead, we feel like we're watching Stallone play a video game with his buddies online -- and not a hard game like Contra where you get your ass kicked, but one of these newer games for pussies, where after taking a lot of damage you merely hide in the corner and your health automatically recovers. Solution: have the bad guys pick off his buddies one at a time, leaving at most three, though perhaps none. See Predator, Aliens, Rambo II, etc.
At the very least, a lot of innocent people have to die in order for us to feel that the world they've gone into is dangerous. In Die Hard, most of those killed are not the official good guys but innocent bystanders at an office party. They can't be faceless (one death is a tragedy, a million is a statistic, or however that goes), though, and the makers of Die Hard make sure to humanize and personalize the individuals who are about to be killed. The Expendables didn't even show the up-close effects of a reign of terror in a Latin American dictatorship, which should not have been too hard to pull off.
- The good guys are mercenaries who never get chastened. The audience wants the heroes' actions to serve some larger moral purpose, and mercenaries never meet that requirement unless they undergo a transformation as a result of the mission, becoming a true band of brothers. Otherwise, as with The Expendables, we never get the sense that they're devoted to one another. Solution: give them a real danger to run into, one that forces them to adopt a strong Us vs. Them mindset. The soldiers in Predator and Aliens are technically in a state military, but they come off as jaded and complacent mercenaries -- until the Predator and the aliens start picking them off. That wakes them right up and makes them stick together to serve the larger purpose of preserving Us and exterminating the evil Them.
- The girl's role is pointless. None of the mercenaries have any reason to be connected to her, so it feels totally bogus when Stallone goes back to rescue her. The real connection she has is to her kin, especially her male kin -- like her father, the dictator. The writers do dip their toes into this obviously better storyline by having the dictator speak out against his corporate master over her torture, but they abandon it because it seems too Old Testament or Shakespearean -- and therefore too gripping. Solution: they should have junked the entire story about the mercenaries and made it a modern-day Faust legend set in Central America. An idealistic revolutionary makes a Faustian bargain with an American drug lord, whereby he'll rule as puppet dictator in return for letting the drug lord run his business there. The dictator then pays for it with the life (or at least the welfare) of his only child, who inherits her father's zealous idealism and stirs up a group of rebels against the American invaders. Haunted by her ghost, and seeing what a foolish pact with the devil he's made, he plots to avenge her death, rape, torture, or whatever her punishment was, but is himself slain in the act. The highest-ranking male among the rebels she'd stirred up then assumes the dictator role (still no democracy or cheery ending), vowing not to let the American drug lords do business there anymore.
The other solution is to keep the main story about the mercenaries but have a lengthy courtship and mating relationship between Stallone and the general's daughter -- as lengthy as can be squeezed into the running time, anyway. Even better -- he gets her pregnant. Now he has a real motive to watch out for her, like with Co in Rambo II. Also as in that movie, the girl has to die to give him a morally righteous motive to kill the head villain. At the very least she must be kidnapped, like the wife in Die Hard or the teenage daughter in Lethal Weapon.
These are just the three major mistakes, but they're so glaring and so easily corrected that further discussion would be pointless. Still, I'll add for the record how badly the so-called special effects of the past 10 to 15 years have failed to pull the audience in. Maybe in a million years CGI blood and fire will look more like the real thing than real-life special effects do, but not now. Even watered-down ketchup would look better than video game blood. And do they not have stuntmen in Hollywood anymore who set themselves on fire? With as much money as they blew on The Expendables, you'd figure they could at least hire a real person to get set on fire. Jesus, A Nightmare on Elm Street had that -- including a long shot where he climbs and falls down a set of basement stairs -- and that was made on a budget over 25 years ago.
By the way, this shows just how much the story does matter even in a blow-shit-up action movie. Terminator 3 and The Expendables are not even watchable, while Rambo II and Aliens are a thrill. It may not take the greatest story, but it does need to be there. The pathetic excuse that "testosterone-fueled action flicks don't need a story" is no better than the Modernist pretension that being able to paint or tell stories didn't matter, that the makers and consumers are like so above those petty concerns.
I went into this movie knowing how strongly the zeitgeist determines the quality of a movie, album, or whatever, but I didn't expect it to go wrong in so many places so badly and so foolishly. Even the cheesiest, low-ranking action movies from the '80s, such as Kickboxer, outscore the most popular blockbusters of these safe times on the above measures, let alone how the latter would fare against the best ones from dangerous times like Dirty Harry, Rambo II, Die Hard, Lethal Weapon, Predator, Aliens, etc.
My brother, who got this movie from Netflix over Thanksgiving weekend and who I watched it with, always gives me a ribbing about preferring older movies (ones from dangerous times). But it's never hard to point out that I prefer them because they're better, and to give him a ribbing back about how the old greats beat the new garbage. At that point, you either concede the point or admit that you're just following fashion even when it leads in a degenerate direction.
November 27, 2010
Using time travel narratives to measure the rate of change in our way of life
When a population is more or less in a state of equilibrium, forecasting into the future doesn't yield any dazzling new picture of the world -- being stuck, we'll be living just like we are today. There could be cycles up and down around this equilibrium, like the rise and fall of empires, or the spinning of the wheel of fortune in our personal lives, but there won't be some fundamentally different direction that we head off in.
Thus, why waste time imagining what the future will be like, and why bother listening to such stories, if it'll be so similar to our own world?
There have been two major changes in the human way of life, however, that do seem to have sparked our interest in just how far the changes would go. The first was the switch from a nomadic hunter-gatherer way of life to a sedentary farming way of life. By all measures of quality of life, this was a disaster -- people lived shorter and sicker lives and ate a more limited, vegan-like diet, although they did have more stuff to accumulate. This is when folklore, legends, mythology, etc., start to explore "fall from grace" and apocalyptic themes, which project just how bad things will get if the present rate of change -- downward -- continues.
Herder mythologies and religions resemble those of farmers in this respect at least, since herders feel like hold-outs for a freer, nomadic way of life, yet are under pressure to settle down and join the growing ranks of farmers. The descendants of Proto-Indo-European mythology, not to mention Judaism, Christianity, and Islam, were all born from nomadic pastoralists.
Then sometime during the 19th C. in the industrializing countries, it became clear that this wasn't just another upward phase in a cycle that would inevitably return to a lower level, but was a sustained move toward a different way of life. It's during this time that writers start to show a profound interest in just how far the technological and social changes will go in the next 50 or 100 or 500 years if their present rate of change continues.
But somewhere around 1990 that century-old fascination with the future comes to a grinding halt. The last major cultural works that show a sincere attempt to project the present rate of change into the future and see what life would look like are Back to the Future II and Total Recall (and maybe a couple of less popular others). The original Back to the Future looked at how much the way of life had changed from 1985 back to 1955, and the difference was clear enough even to viewers who hadn't been alive in the 1970s, let alone the '50s.
The sequel sent time travellers the same amount of time in the opposite direction, from 1985 to 2015. There are clothes that blow-dry themselves when wet, shoes that fit themselves to the wearer's feet, and of course hoverboards -- skateboards that glide on the air without need for wheels.
Total Recall shows what life looks like once we colonize Mars. There will even be tourist trips to Mars.
Then after Total Recall, released in 1990, scarcely any of these stories have been as successful. 12 Monkeys doesn't count because time travel forward is not used as a way to imagine what life will be like if the current rate of change keeps on going. It's more of a declinist or apocalyptic narrative, the kind that would have been popular during the transition to agriculture, as an epidemic disease wipes out most of humanity and nature reclaims spaces from ruined civilizations.
In fact, if anyone tried to make a movie like Back to the Future II today, the audiences would laugh them out of the theaters. "Yeah right, we don't even have hoverboards and commuting a la The Jetsons, yet we're supposed to believe that in the near future we're going to have whatever you're showing us? How naive."
Everyone has sensed that the rate of dazzling change that began with the shift from farming to industrial capitalism has more or less run its course. They observe this in their personal lives, they hear it from their friends of friends of friends, and they see it in the media, which can show them what life is like outside of the social networks they're a part of. I mean, Jesus, even the Japanese don't have hoverboards! I thought they were supposed to be like 50 years ahead of us in technology.
Hardcore gadget worshippers are desperate to see mind-blowing change everywhere, but normal people recognize that there is little change in going from a world with cordless phones everywhere to iPhones everywhere, at least compared to the no-phone to phone change. Ditto with iPods -- hardly more dazzling than a Walkman, especially compared to the no-portable-music to Walkman change.
The internet has made daily life a little different, but not much. If you took someone from 1990 and showed them what you use the internet for at any time since then, would they be as spellbound as someone who lived in a pre-computer time transported to a world with personal computers (a pre-1990 shift)? Not at all. It's a bit different, but nothing on the scale of "we're moving off into uncharted territory," like the no-electricity to electricity shift.
Social network sites like Facebook don't really connect you to anyone who wasn't already in your real-life social network, or any more strongly to those who were in your real-life network. If there is no radical change to project forward, it's no surprise that there are no cultural hits that forecast what life will be like when the whole world is in your friends list -- it'll be just like now.
Returning for a moment to cell phones, nobody except me remembers how easy it was to communicate in a post-Bell but pre-cell world. We had phones in our houses, and by the '80s we could even walk around the house with them -- outside too! If we were away from home, we would never need one in the car since we were busy paying attention to driving. If at work, they had phones there, and there were always offices or homes that you might pass by and ask to use theirs. If you didn't want to trouble someone else, they even had these things called pay phones that you'd drop a quarter (or earlier a dime) into, and call whoever you wanted. The major difference in the post-cell world is that it's a lot easier for others to bother you on the go -- before it was just easier for you to bother them.
In any case, the closing off of this part of the popular imagination shows that most people have decided that their way of life has not fundamentally changed much in the past 20 years. If there had been a radical change, they would've noticed and dreamt about how far it would go within the next 30 or 50 or 100 years. Therefore, things have stayed the same over this time. Again, gadget-worshipping geeks will desperately try to give examples of big changes in the past 20 years, but we don't care about that -- we're talking about real life as it's lived. If people don't see what the big deal is about the gadget-worshippers' favorite new toy, then this doohickey doesn't make any difference in people's daily lives.
For me it's a bummer that the tumultuous and topsy-turvy future has turned out to be static, but I'm grateful for at least getting to live through part of that industrial transition period. Millennials only have memories from after the equilibrium was reached, although given how playing-it-safe they are, they must find it pleasant not to feel the ground shifting beneath their feet.
Still, even if we've already hit the future, we'll always have the past to travel back to, and I've always found those stories more fascinating anyway.
November 22, 2010
The Sixth Sense -- not even as haunting as Ghost
And that was a chick flick, for god's sake. I watched The Sixth Sense for the first time in a long while -- probably the first time I've watched it all the way through in one sitting, as it's pretty boring. I figured I'd give it a fair hearing since it's one of the post-1992 movies that people say is still a good thriller, and I'd like to find as many exceptions as I can to the general trend of terrible movies since then, especially in genres that need an element of the sublime.
Well, first let me point to a handful of movies that have bucked the trend after 1992 toward emotionally empty thriller and horror movies. In order to stand somewhat above the zeitgeist, it takes someone unusual. (When the zeitgeist itself is pushing things in an exciting direction, even movies and albums that are put together by committees aren't half-bad.)
For supernatural thrillers, who else would make any good ones during this time except for David Lynch? I haven't seen Lost Highway in awhile, but Mulholland Drive was gripping. That Winkie's Diner scene alone is spookier than all of the Saw series combined. The only major downside I found in these two was their urban settings. I showed earlier that an interest in the pastoral and a disdain for the urban tracks the violence level. Blue Velvet and Twin Peaks, both made during rising-crime times, benefit from the antiquated suburban and rural settings that a gothic narrative is naturally suited to.
As for where the thriller shades into the horror movie, Wes Craven made not only Scream but also his New Nightmare a year earlier. Scream is good enough to watch, but it's not unsettling since the characters are too self-aware and there is no supernatural element at all. In New Nightmare, the characters are aware of how famous and cliched horror movies had become by the peak of the crime rate in the early 1990s, but their complacent meta-awareness is shattered when the supernatural evil contained in the Nightmare on Elm Street narrative begins to break into the natural world and haunt the real-life actors and crew of the film series.
(Candyman, made at the peak of the crime rate in 1992, explored the same theme of a supernatural force returning to kill in the real world as a reminder of his existence after people had become too smug and self-aware about urban legends. Still, this one, like the Clive Barker-inspired Hellraiser movies -- or the first two, which I've seen, anyway -- suffers from a main villain who talks like a clingy homosexual instead of a powerful demon.)
Those three or four post-'92 movies are about all that comes to mind, though. After a closer watch tonight, here are some off-the-cuff reasons for why The Sixth Sense doesn't come close to making the cut.
- As with the Lynch movies, the setting is urban, and brownstones are far less likely to be haunted than the woods. It's not impossible, as we saw in Ghostbusters, but it's a real uphill battle to convince the audience. Furthermore, since the movie was shot after the crime rate had been plummeting for nearly a decade, the urban setting doesn't look gritty or threatening at all -- not like in Rocky, made during armageddon times. It rather looks like a brochure for the explosion of gentrification and white flight back into the cities that began once the violence level had dropped through the floor.
- The total ban on synthesizer music since alternative killed off rock music, and gangsta killed off rap music, has really crippled the thriller and horror genres. The synthesizer's timbre is inherently spooky because it lies in the "uncanny valley" between clearly organic sounds like a flute and clearly artificial ones like early computerized speech. Some movies can pull off a good piano-only score, but that is harder to do since pianos don't benefit from inherent spookiness. Even a so-so synth song like the Ghostbusters theme can strike enough of a creepy note that it doesn't need to be a masterpiece otherwise. (I'm thinking of the part that builds suspensefully before "I ain't afraid of no ghosts.")
- Too much quick and jumpy editing, not enough lingering. There's a shot of a man turning a doorknob at the top of some stairs, then without even a dissolve we see him seated and working away in the basement. This pervades the movie, and other recent movies. Obviously it's just some silly fad that NYU film school dorks fell in love with. To truly build suspense, we need longer shots that track the characters' movements. When the characters in a gothic novel descend a staircase, we get all sorts of detail about the texture and sound of the stairs, how a noxious fog hangs visibly just below the ceiling of the stairwell, the candle that the person has brought in a vain attempt to see clearly, how they descend faster and faster in order to get the hell out of the creepy claustrophobic space, until it feels like they're in freefall rather than making steps, and so on. Cutting away every fraction of a second to another scene so remote from the previous one in time and space utterly dissipates whatever tension there might have been.
- The ghosts aren't evil by disposition or even by deed. They're simply misunderstood souls who want to establish communication with certain of the living. This makes the ghost-seer more like a well-trained marriage or family counselor than a shaman, an exorcist, or a Faustian character who wants to join them in their evil.
- Related to the above, there is little emphasis on temptation and sin, crime and punishment, trust and betrayal, or any of the grander themes of human narratives. A more quotidian role for ghosts in driving the plot cannot call up the sublime, only a somewhat flaccid and preachy message that more communication is the best therapy for relationship sadness. This fake memento mori aspect of the movie, laid on so thick yet solemnly, also ruined any potential that the Saw movies might have had. At the end of Aliens, we don't need a narrator to walk us through a sequence of shots to tell us how fragile life is and how much we take it for granted. It's already been worked seamlessly into the action and dialogue, like Hudson's neverending gallows humor.
Other things too, but those were the main ones. It was watchable, but definitely not an exception to the trend. For that I say go with Mulholland Drive or Wes Craven's New Nightmare.
November 17, 2010
Taboo against spoilers is a sign of low-quality products and vapid consumers
Did anybody enjoy The Passion of the Christ less because they knew that Jesus was going to get crucified? Or did having such an easily predicted ending prevent Titanic from breaking box office records? Obviously not -- so why is there such hysteria about revealing "spoilers"? It's even worse now than when it was confined to movies: every reviewer of video games is spooked about blurting out spoilers, lest the army of gamer dorks desert him and ruin his chances of becoming an internet celebrity.
Some part of the enjoyment of a movie, book, or video game is remaining in suspense and feeling the shock when a secret is finally explained. But in any narrative worth wading through, this is only a small part, hence the appeal of plowing through them over and over if they're that good. Therefore, the panic about revealing spoilers means that this is the only possible source of enjoyment -- don't rob us of what little potential is there!
Now, enjoyment is a function of both the product and the consumer. Maybe spoiler-phobia means the product is otherwise garbage and its sole appeal is a shocking reveal. However, it could also mean the product is pretty good, but the mindset and personality of the consumer is so incapable of appreciating any aspect other than the acquisition of new information, as though following narratives were like flipping through someone else's PowerPoint slideshow for the gist and the all-important "take-home message."
Steve Sailer guessed that spoiler-phobia began in full force with the release of The Sixth Sense, and there was indeed a real hysteria about that one. Here's one example from an NYT review:
"At first, the doctor doesn't believe the boy. But then, well, let's not take the story any further lest its colossally sentimental payoff be compromised."
The first appearance in the NYT of the term "spoilers" in this sense actually came earlier that year, referring to newly added scenes to the original Star Wars movies when they were re-released in theaters. Dictionary.com has an entry from some jargon databank that shows it appearing on Usenet in 1995.
So, like all other infuriating aspects of today's culture, spoiler-phobia began just after the violence level began plummeting in the early-mid 1990s, causing everyone to focus on more trivial matters, throwing overboard their earlier appreciation for the grand and the sublime. Again, that could be due to the movies themselves becoming more empty, the audience members becoming more airheaded, or both (yep).
Even though the exact term "spoilers" doesn't appear earlier, perhaps the same panic appeared but using different words, as with the Sixth Sense review above. Note: not just a lack of spoilers, but a conscious paranoia about even hinting at spoilers, and a clear declaration that there will be no spoilers in the review. I tried looking for NYT articles on The Empire Strikes Back and Psycho from the year they came out, but they're all behind a paywall. I could do a Lexis-Nexis search, but I don't care that much about this topic. Someone else can go for it.
I have little personal experience to relate from the movie culture of the 1980s and before, as I either wasn't born or watched more movies on home video than in the theater (though I saw lots there too). I don't recall any such paranoia, though. There was one popular video game, however, that had a shocking plot reveal at the end -- that the bounty hunter who's been kicking alien butt is really a woman under the suit of armor -- and yet there was no hysteria about it. In fact, one of the first password codes that we learned for Metroid put us far enough into the game that this plot point had already been revealed -- you start off playing in plain clothes and no armor, and it's clear that it's a woman.
Even as late as 1995, I don't recall any spoiler-phobia about a shocking twist in Super Metroid where one of the aliens who you'd been destroying in the first game now comes to your aid right as you're about to be killed, sacrificing itself for you. Back before video games attempted pathetically to imitate movies, though, no one played them for the narrative, so spoiler panic was ruled out for that reason alone.
November 15, 2010
You don't have to be old to be wise
One thing that strikes you when reading about periods of dramatic upheaval, as signaled by a sustained rise in the homicide rate, is the youth rebellion that coincides with it. It's not just young people acting even more prototypically youthful -- becoming more violent and horny -- but a broader behavioral and cultural experimentation.
The clearest sign of this is widespread disobedience of adult wishes and commands when it comes to dating and mating. Usually these come from parents who want their children to marry in a way that will benefit themselves and the wider family, not just now but into posterity, which is not entirely the same as the way that would benefit the son or daughter. But they could also come from non-kin in the parents' age group. Against all of that, the young person pursues whomever they want.
This practice of romantic love shows up in statistics during the recent crime wave -- those young people's parents certainly did not want them having sex so early and with so many partners. During previous waves of violence, the concern about romantic love -- whether for or against -- starts to take over much of the cultural products. The 14th C. saw such a wave of both violence and romantic impulsivity, as did the Elizabethan-Jacobean period, and the Romantic-Gothic period. (The U.S. had another wave from at least 1900 to 1933, and the Roaring Twenties were the culmination of a youthful, romantic movement away from Victorian and Gilded Age sensibilities.)
Are these youth rebellions irrational, misguided, harmful, and so on? Probably not. (We are only ever talking about "compared to the alternatives.") Why not?
Behavioral strategies, and codes of morality that regulate behavior, are like tools that help a person make it through life successfully. Like other tools (and cultural thoughts or practices in general), they may become outmoded if the environment changes. If sea levels swallow the land, we will have to throw out a lot of things that just won't work underwater and we'll have to add things like goggles, air tanks, etc. And of course if sea levels don't change that radically, our existing toolkit will work just fine as it is.
It sounds silly to state something so obvious, but most people don't get it in the context of behavioral strategies and moral codes. The "don't trust anyone over 30" crowd is wrong because if the relevant features of the world haven't changed so much in 100 years, then grandma probably does know best and you shouldn't waste time experimenting so much. On the other hand, the "respect the status quo" crowd is wrong because if the world has changed a lot, then today's young people can no longer adapt themselves to their environment using the cultural tools of their recent ancestors.
As with genetic adaptation, they have to resort to trial-and-error -- blind mutation -- if they ever want to stumble upon what works best in this new environment. Most experiments will fail, just like most genetic mutations are harmful. But the ones that begin sweeping throughout a population are probably not just getting lucky (especially if the population is large) -- they're the ones that lead people successfully through life in the new environment.
You cannot object that, sure, the new mix of behaviors and codes can be successful, but they may still be fundamentally immoral. Remember -- compared to the alternatives. Take an extreme case, where the elders preach non-confrontation and peace. In a world of falling violence levels, like most of the 18th C., that advice works fine. But when violence levels soar, how moral is it to let the resurgent forces of evil just walk all over you and everybody else? Existing institutions are always impotent to stem or reverse the crime wave, so outsourcing the job is not very satisfying. You have to develop a greater propensity to use violence in order to protect yourself and others, and occasionally you'll even have to act out that impulse.
The same applies to the full range of newly successful behaviors and codes. In a safe and monogamous world, perhaps the elders really do "know what's best for you" by advising you to pick a certain type of mate, but only one of them, and to stay with them forever. However, when violence soars, this "good dad" strategy may be for suckers only, as I detailed in a recent post about why females will shift to a more promiscuous strategy in more dangerous environments. The elders may also be out of touch when it comes to moral arguments for monogamy that stress community harmony -- in the new world, an attempt to enforce monogamy may only lead to greater resentment and destructive behavior, unlike such an attempt in a safer and more well-behaved world.
This evolutionary way of looking at things -- that the culture of the elders is wise for the young to the extent that the environment has remained the same -- also shows why for every youth rebellion there is a later youth counter-rebellion. Again the main environmental change that matters is the level of violence. When the violence level cycles into a much safer phase, then the adults -- who came of age during violent times -- preaching the value of self-defense, courage, etc., will have little to contribute to the young, who don't need a lot of that anymore. Advice from middle-aged males to younger males that they should get out and cat around more will fall on deaf ears -- safe times cause greater monogamy, so that strategy would go nowhere. Or at least, not where it went during the sexual revolution that the middle-aged guy lived through during violent times.
As with youth rebellions, we have clear statistics on the youth counter-rebellion of the past 15 to 20 years, after the crime rate began plummeting. I've gone over that enough by now that it doesn't need repeating. There was a similar self-domestication of young people during the mid-'30s through 1950s, during the Victorian era in Europe, during the long fall in violence from roughly 1630 through 1780 (peaking during the Age of Reason), and during the interval between the 14th C. and late 16th C. waves of violence (peaking during the heyday of Renaissance humanism).
We don't typically think of young people wanting to live tamer and more boring lives than their elders, but this analysis shows why we should expect to see it, and when it should happen. Fortunately for skeptics, we're living through just such a phase right now, so don't take my word for it. Middle-aged people today have more exciting tastes in music and movies, are more likely to have unprotected sex, and drink and smoke more frequently; meanwhile the teenagers and 20-somethings are a bunch of young fogies.
So long, "psych!"
While watching Candyman the other night, I was struck by one of the characters' use of the exclamation "psych!" (Also spelled "sike!" by elementary school kids who didn't know the Greek prefix. We couldn't have seen it in print since it was only spoken.)
It not only means that what the speaker had said before was a lie, or that a connotation that the listener inferred was wrong, but that the speaker led the hearer to that false belief, and in a playful prankster way. It's roughly the same as "I had you going," "fooled you," and so on, except better because it was concise and commented on how the speaker used cunning or toyed with the listener psychologically.
I recall this word spreading sometime in my later elementary school years, in the late '80s or early '90s. Candyman was released in 1992. Then suddenly the word vanished. "Not!" and "as if!" were not up to the job. They both meant that what the speaker had said or implied was a lie, but the context was not one of playing a fun practical joke -- it was a sarcastic and haughty dismissal. "Sure Mom, I'd love to babysit my younger brothers on Saturday night -- not! [as if!]"
You could also have used "psych!" in that sarcastic way, but it was a lot more versatile, allowing the prankster usage too. "Not!" and "as if!" couldn't be used in that playful practical joke way, though -- only in the contemptuous way.
In my many years of tutoring during the 2000s, I never heard the Millennials (whether children or high schoolers) use another word that could replace "psych!" Either it just went out of fashion, or young people became far less likely to be in a playful prankster mindset after the 1992 peak of the violence level, paralleling the wider decline in wildness among young people.
Whatever the reason, it's about time for a new "psych!"