If ornaments serve to anchor the decorated thing in your memory, then perhaps one reason why video games have been so uninteresting and forgettable over the past 15-20 years is that they have dropped ornamentation from their overall look. No great loss, but it's still worth a quick look, since video games have become part of our visual culture. They're just another example of how bland things look -- naturalistic, bare-surfaced, and drained of color.
Even back in the good old days the typical video game wasn't very dazzling, but there was one that went as far toward an Art Deco look as they'll probably ever get, the best-selling Sonic the Hedgehog.
The basic ingredients are all there: a mixture of machine and organic forms, geometric stylization, exuberant colors, and icons borrowed from Native American and Near Eastern sources. Here is a fuller gallery showing the bonus stages where giant jewel-like fish and birds float in the background. And compare the palm trees with stylized rings around them and metallic leaves to those found in a pool house made during the 1980s Deco revival:
This game came out in 1991, right as the culture was about to make a huge U-turn. Already by the next year with Sonic 2, the geometric / machine-age stylization of the organic objects was basically gone, with tree leaves looking more tree-leafy. Here is a screenshot of Sonic 3 from 1994 showing the move toward naturalism (within the programming constraints of the time of course).
Sonic CD came out in 1993 and still had an ornamental look, but it had to be played on a system that hardly anyone owned (the Sega CD), so even by that time only a game that few people would play could have that sort of look. The screenshot below shows the return of geometrically stylized nature, complete with mountains in the background that look like terraced 1920s skyscrapers. Here is a gallery showing off the rest of the game's decoration-heavy visuals.
After that, you were lucky to find a game with eye-catching colors, let alone one that didn't attempt to look realistic. I've seen screenshots of the buildings in BioShock that are going for a Deco look, but that's more of a realistic portrayal of such a style, like snapping a picture of the Chrysler Building. It's not the incorporation of that style throughout the look of the game itself, i.e. the lines, volumes, and textures of all that you see.
It's puzzling that video game designers decided to go that way, given how pitiful the technology is at recreating life-like images. Bland-looking movies are understandable -- at least they can do naturalism very well. But why didn't video games stick with the stylization and colorfulness? I guess with the zeitgeist moving so strongly against that look, even poorly done naturalism has been more to the audience's liking than catchy ornamentation.
February 28, 2012
February 27, 2012
Empathizing with imaginary people (three cases)
When people's mindset shifts toward social avoidance, they never fully lose the basic human desire to empathize with others. Yet reaching out and connecting with real people poses the risk of cementing long-term bonds, so they search for a solution that will let them have their cake and eat it too.
Put simply, it is to try empathizing with things that are not people, but that through repeated practice the mind could construe as at least passably human. It is the social bonding equivalent of jerking off to pornography.
An obvious example is the gizmo worship that cocooners develop during falling-crime times. Not even they can convince themselves that those things have a mind and emotions, so the urge to hug machines usually shows up in fiction, with movies being the ideal medium. A.I. and Iron Giant are just two recent movies where the idea is that the machines are all but fully human, and suitable for affection-starved children to bond with. Terminator 2 was right about there as well, although he at least had to learn how to become more human.
In rising-crime times people are more suspicious of drastic technological change, so they don't feel like trying to bond with machines. Blade Runner had a "droids as humans" portrayal, but no one bought it. Everyone perceives them as human outcasts, not mostly-human robots, and anyway you only like that movie for the spectacular visuals. Johnny Five from Short Circuit was even less human than the Arnold terminator from T2, more like a talking pet than a being with the warmth and sensitivity of a person. C-3PO is a stock character with no emotional depth, so he doesn't really count either. The closest we got to empathizing with a robot was Bishop from Aliens, although he too had totally flat affect and seemed autistic.
The best movie about our attempt to welcome androids into the in-group circle is of course RoboCop. There are only a handful of vignettes, but the message is hard to ignore. All involve people looking up to RoboCop as a hero or savior, trying to connect with him, and only getting a canned response that leaves them emotionally unsatisfied. One shows a bunch of children climbing over him and admiring him, and a TV news reporter asking him excitedly what message he has for the kids watching. Just some monotone pre-programmed line about staying in school and saying no to drugs or something. Not exactly what a living hero would say, or how he'd say it.
In another, a woman is chased, held hostage with a knife, and nearly raped by two thugs, when RoboCop shoots one right in the dick and sends the other running. The terrified woman runs up to her savior, hugs and thanks him profusely, and looks up at his face for some kind of "Everything's going to be OK now" reassurance, as well as an acknowledgement of her gratitude. Instead he states the obvious ("Madam, you've suffered a traumatic experience"), and says he'll notify a rape crisis center. Not what you want to hear when you've basically told someone that you owe them your life. He doesn't acknowledge her gratitude at all. This cold and distancing response makes her face twist into a mix of shock and puzzlement.
The movie doesn't belabor the point, but it remains clear: that's what you can expect from trying to connect emotionally with machines.
Reality TV solves the problem a little better because at least you're watching people. But however much empathy you send their way, you know it'll never stick and be returned. There's no danger that you'll have to interact with those real people.
When we see people trying to hug a brick wall in this way, it should make us feel pity for them, or depending on our mood even disgust. I've never seen the movie Real Life, a spoof about reality TV back when the genre was barely visible. But it doesn't sound like it looks at the audience of the show, just the participants and producers. That leaves only two movies about reality TV, one from rising-crime and the other from falling-crime times. The Running Man may be a schlock fest, but at least it has one of the proper reactions -- disgust -- toward an audience that tries to satisfy its social and emotional needs through reality TV characters.
By the time The Truman Show came out, we were asked to empathize with the Truman character as well as his audience members, making us one of them. Indeed it was supposed to be spiritually uplifting, not degrading and pitiable, that the audience resorted to watching reality TV as their supreme attempt at empathy.
It's too bad there wasn't a movie in the '80s that took the disapproving view of the state of the world that The Running Man did, but delivered it in the more sincere and less hammed-up tone of The Truman Show.
Perhaps the purest form of empathizing with imaginary people is gay friends. Because they're only 3% of men, few people make use of this solution, but it is one of the clearest examples of the imaginary empathy trend since the '90s. It's even better than gizmo worship and reality TV because it's an actual person, just not a real one. How can we tell? Simply by the fact that fag hags never have any straight guy friends, where I mean someone with whom they mutually let their guard down, disclose personal matters, share secrets, and so on.
Normal males come in such a wide variety that at least some should be to her liking, yet she keeps them at a distance and only lets homosexuals close. It is a fear or anxiety of the real per se, not just this or that annoying sub-class of real people. Girls with gay friends are blind to even the most basic facts about homos, for example how much more they crave drugs than any normal guy does, how uninterested they are in monogamy, how obsessed they are with sexual fetishes, and all other manner of sick thoughts and behavior.
Because they are merely imaginary people, gay friends can take on whatever qualities the fag hag desires. Most people outgrow imaginary friends in elementary school, when they're starting to learn how to deal with other people. Since gays are all afflicted with a Peter Pan complex, they're a perfect match for the childish regressive queer collector.
In all other ways, though, the two are from different planets, one real (if detestable) and the other not, preventing any chance at empathy. It's the most dangerous form of trying to empathize with imaginary people because the responses during chit-chat are so convincing -- she feels like he's really picking up on her mental wavelength. "He's so much more understanding and sensitive than a straight man!"
Except that he cannot understand anything about her desires. She wants to be seen as pretty but appreciated for more than just that, and longs for a partner who likewise is attractive but also funny, exciting, courageous, and so on. All he worries about is whether he looks good naked, and whether his warm body for the weekend does too. She wants long-term affection that she's earned, not just some throwaway compliments to try to get her in bed. All that matters to him is quick-fix praise and attention, no matter who it's from or how insincere.
During their interactions, she's sending a bunch of empathy his way, but he has no interest in cementing a bond with her -- only in using her as a reliable source of the attentional quick fixes he craves at every moment. Again, the back-and-forth is so much more convincing than with gizmos or reality TV stars that I wonder whether the fag hag ever wakes up to the fact that there's no real empathy being established, that she's squandered much of her social life acting like a groupie to a disturbed drama queen.
Those are just a handful of examples that probably only scratch the surface.
February 26, 2012
Remake Sunset Boulevard for today's small-pictures world?
The discussion below about Network reminds me of the other movie starring William Holden about the entertainment media and generational differences, Sunset Boulevard.
The main influence on the zeitgeist is the trend in the violence rate, and by 1950 when the movie was made, it had been falling ever since the 1933 peak. Our most recent peak was 1992, so we're just about where things were when the original was made, 15-20 years into falling-crime times. Since similar environments produce similar outcomes, we should be able to make something along the lines of Sunset Boulevard by now.
No one ever knows how well a movie will turn out beforehand, but it would be worth a shot -- and it would certainly set a higher bar for remakes than all other recent attempts. Perhaps all involved would take the project more seriously than the failed resurrection of Conan the Barbarian.
By 1950, the pictures really had gotten small compared to the Roaring Twenties. It didn't have to do with the shift from silent to talkie movies, though, as the talkies from the early '30s are still big productions, exotic, sublime, ornamental, etc. *
At first, people who lived through the meaningful and exciting times might have thought that the move away from them was just a brief fad. After 15-20 years of steady erosion, though, it becomes hard to ignore. So we can take stock of what the movie culture was like during the recent heyday of the later '70s and '80s, and how sharp the change has been to today.
Gloria Swanson was born in 1899, growing up entirely during rising-crime times, and becoming a star during her 20s when the homicide rate was nearing its ceiling. For the recent cycle, that would be someone born circa 1958 who became a star in the '80s. Really, Kathleen Turner is about the only one who could bring Norma Desmond to life today.
William Holden was born in 1918, so his counterpart would be born in the late '70s, and doesn't have to be a superstar right now, but should at least have leading-man potential. Why not Leonardo DiCaprio? It's hard to think of anyone from the later Gen X-ers or Gen Y-ers who can act well. He could play independent and ambitious while also lapsing into passive and disaffected.
I don't think the age of Norma Desmond's ex-husband-director-butler is so important, since he doesn't play a role in the inter-generational dynamics. He just has to have been famous during the visual-action days, and faded from public recognition during the chatty-introspective days. There are probably lots more choices here. One who comes to mind is Richard Donner, who directed The Omen, the first two Superman movies (the good ones), the Lethal Weapon movies, and The Goonies, but who hasn't done a whole lot during falling-crime times.
The actress who played Betty was born in 1928, an early Silent Gen girl, so her descendant today would be a Millennial born in the late '80s. She doesn't have much character development and is there mostly to serve as a stable, wholesome foil to the breaking-down Desmond character. So she doesn't have to be a very good actress. I avoid most new movies, so I have no ideas here.
Like the original, the remake should avoid any nerdy focus on the technological change, and stick to the sweeping changes in the whole zeitgeist. At most, maybe Donner and Turner, who are used to the world of demanding physical stunts and dazzling visual effects, could be shown having to get by in the new world of ugly CGI and boring stunt work. DiCaprio as Joe could be shown trying to bring his scripts to risk-averse studio suits who only want to make Harry Potter 29 and Hungry Hungry Hippos: Origins.
Also as with the original, it will be hard to keep it from being one big slam against a bygone era. But even little details showing how much more magical it felt back then would humanize the older characters -- they're trying to get by in a world that doesn't want to believe in magic anymore. The shots of Norma Desmond's image-oriented silent films, the grand interiors of her palazzo, and the intimate style of dancing all contrasted with what the average movie-goer would have been used to in 1950.
The remake could show how much more spectacular real exotic footage is, to remind audiences who've grown used to or only known what CGI jungles look like. Turner's home could be in the Art Deco revival style, to contrast with the beige / olive / black minimalism of today. And she and DiCaprio could slow dance to just about anything from the good old days -- "Take My Breath Away" would be laying it on thick, but then maybe Turner's character is desperate enough to try to take it over the top like that. Just something to play against the isolated "dancing" of the younger generation.
As for who would make it? The team at Mad Men have done a great job reviving the mid-century for contemporary audiences, and not just in Pomona. They are in TV rather than movies, but it still seems like they'd make a good core. No ideas about who should direct it. I don't think the Coen brothers could keep the characters' tone straight and deadpan; there's always something self-aware and cartoony about their characters. Maybe the Mad Men people could keep them in check, though.
* Probably the last great example of Jazz Age movie-making was King Kong, released in 1933 at the peak year of the homicide rate. The entire visual culture went limp after that, so that cinema-specific causes don't really capture what changed. Look what happened to architecture, design, and museum art, for example -- the public turned its back on Art Deco and Expressionism, preferring instead the minimalist cocoons of International Style buildings and the emotionless, unrecognizable abstract art of the mid-century.
Once crime rates began soaring during the '60s, '70s, and '80s, visually oriented movies rose from the grave, slowly at first but taking off in the later '70s. Again that drive took over the entire visual culture, including the return to colorful, figurative painting, psychedelic posters a la Art Nouveau, the Art Deco revival, and so on. As the crime rate has fallen since the '90s, the visual appeal of most movies has sunk back to the level of the average mid-century movie.
February 22, 2012
Ornament and memory (with many applications)
People don't prefer ornamentation because they want to show off, signal social status or genetic fitness, etc. Showing the inadequacies of that line of thinking would be a post in itself. Suffice it to say that the only human case of ornamentation, whether genetic or cultural, that looks like the peacock's tail is head hair.
So then why do some people prefer more ornamentation than others? I mean that as it applies in any domain, since the preference for it cuts across all of them. If you like Art Deco buildings, you probably also like New Wave music, nicknames (particularly non-standard ones), Metaphysical poetry conceits, etc.
It is because people differ in their desire for things to be memorable. I think this stems from their social-emotional attachment styles, where the secure and anxious-preoccupied people want to hold on to memories, as they help to cement the social bonds that these people crave. The dismissive-avoidant and fearful-avoidant people want to block them out, minimize their importance, smudge out the defining features into generic stereotypes, etc., in order to keep their connections to others sufficiently loose. That does come across in interviews where they are asked to recall earlier parts of their life.
Ornaments serve as landmarks as your unconscious mind navigates its way through all of the information it keeps on file. Once you hear the opening riff of "Satisfaction" by the Rolling Stones, you effortlessly retrieve it and everything associated with it from memory. It doesn't really contribute to the melody -- it's there to make sure the song has something unique to it, keeping it from being just another pop song interchangeable with all others.
The riff is also repeated over and over to make sure it sticks in your mind, in just the same way that most ornamental flourishes on buildings or pottery are repeated in chains or grids, groupings of the same theme in different sizes, and so on. The fact that ornamentation and repetition are employed so closely together tells us that they're almost surely for the same purpose, which for repetition is obviously to aid memory. Poetry that abhors ornament also tends to shun alliteration, rhyme, and parallel grammatical structures.
Needless to say, I don't consider things encrusted with detail to be "ornamental," as the information overload hinders rather than helps you to remember it. You may not even recognize what the basic thing is underneath. Think of a singer's embellishments that obscure the underlying melody, decorative sculpture done in such high relief that the surface of the building gets lost, or a metaphor so overwrought that you forget what the point of the poem or passage was.
With this view in mind, we can gain insight into at least four areas where ornamentation levels differ, aside from the case of people of different attachment styles mentioned earlier.
First, why is the culture made in rising-crime times more ornamental? Because when primed with thoughts about their own mortality, people desire symbolic immortality more strongly. On the creators' side, writing a catchy riff or designing catchy details for a building gives them a shot at outliving their earthly existence. On the audience's side, they feel like the more ornamental nature of the culture they've identified with will make it more likely to endure after their own death. They've become part of something larger and longer-lasting than their individual bodies.
Second, related to that, why do falling-crime times produce both more minimalist and more overblown culture? The Palladian and the more Baroque or Rococo ideals for architecture co-existed in time, the mid-1930s through the '50s saw the rise of drowsier pop music alongside the cacophonous Big Band swing music, and the Victorian era was home to less sublime poets like Robert Browning as well as the emo Symbolists. Both are ways of making and identifying with less memorable culture, whether through underwhelming or overwhelming means.
Third, why do societies differ in ornamentation preferences? It's a combination of needing to remember individuals, plus a large social scale, so that ornaments beyond the range of basic human body plans will be necessary to keep track of everyone.
- Hunter-gatherers live in very small groups, so they can probably get by with faces, names, and voices. Their visual culture is always drab -- plain clothing and fairly undecorated technology.
- Agriculturalists don't interact face-to-face with many people, outside their family. Just go out to the same plot of crop land, dig up weeds, sow seeds, cut down plants, ad nauseam, day in and day out. They also look pretty drab and minimalist, East Asians being the extreme case. Only the very wealthy in a rich, stratified agrarian society prefer ornament, but then they don't work the fields in isolation, but rather live a highly social life at court, where they need to keep track of who's favoring and who's slighting who else.
- Horticulturalists are highly decorated, but usually more at a group level than an individual level. Their ornaments look more like ethnic markers than an embellishment to make an individual uniquely memorable. Most of their need to remember who's benefited and who's harmed who else is across groups. The yellow-feathered tribe gave us a feast last month, so us red-feathered people will have to feast them this month. Some bunch of yellow-feathered guys raided us and ran off with some women, so us red-feathered guys will have to raid them in return. So ornament will only need to distinguish the large groups that interact, i.e. become ethnic markers.
- My favorite group, the pastoralists, are of course the most fascinated with individual ornamentation. See this picture of Maasai boys being initiated into manhood, and notice how individual the decoration is. (Compare to this picture of the horticulturalist Huli from highland New Guinea, whose ornamentation as mentioned above is much more uniform.) The profusion of ornamental motifs from the northwestern part of India through the Middle East and up into agro-pastoralist Europe is also well known. So is the penchant for musical ornamentation.
Pastoralists live in largish populations, so unlike hunter-gatherers they need cultural ways to distinguish individuals. Like hunter-gatherers, though, they are nomadic and interact face-to-face with others in the area. Not being sedentary, they lack the strong culture of law that comes with agriculture and even horticulture somewhat. Rewards and punishments are given out by the affected parties themselves, so they are obsessed with reciprocity, and that requires a good memory for who's who. Who was a gracious host to me last year? I'd better make sure to host them well this year. Who tried to rustle my cattle the other month? I'd better make sure to raid him back.
And fourth, this may even help explain why some groups have more ornaments genetically, such as lighter eye and hair color. No hunter-gatherers do, whether they're in tropical, highland, or Cape parts of Africa, Alaska, etc. Neither do agriculturalists, whether in Africa, East Asia, or among the descendants of the Aztecs and Incas. Horticulturalists don't either, again whether from Southeast Asia and the South Pacific, the Americas, or Africa.
The prevalence of light eyes and hair across the world is just about the same as the ability to digest lactose in adulthood, suggesting a link to pastoralism somehow. There are exceptions: East Africans and Mongolians (and maybe Tibetans?) can drink milk but don't have light eyes and hair. Maybe the genes for lighter eyes and hair haven't had the thousands of years to become common like they have elsewhere in older animal-herding societies. Perhaps in 3000 years the Ethiopians will have green eyes and dark blond hair too. (And you thought they produced a lot of supermodels now...)
The idea is that lighter hair and eyes would make a person more memorable when darker colors are more common. So having light hair and/or eyes is a way of signaling your good faith to the rest of the group, like you won't be able to exploit them and disappear into the crowd like a Chinese person could in China. In a group that is hyper-sensitive about reciprocity and keeping track of individuals, that sign of good faith might get you a slight fitness advantage, i.e. by being more easily welcomed into the group and enjoying the benefits of membership. Individuals with less distinctive features would be kept more at a distance: they're harder to track down if they harm you, and they can more easily lie about having benefited you in the past -- "Don't you remember me? I was that dark-eyed, dark-haired guy who..."
Sure, this last one is a real stretch, but then it's no crazier than the other theories for why pastoralists are genetically programmed to vary more in color.
February 20, 2012
Portraits vs. non-figurative cover art for the Saturday Evening Post, 1900-1960
While browsing through the library stacks, I found a book with every cover of the Saturday Evening Post, one of the most iconic magazines from pre-Sixties America. Here is an online gallery going back to 1923.
The Post first put pictures on the cover in 1900, and the magazine gradually went out of existence during the '60s, so we can get a good view of social and cultural change during an earlier cycle of violence rates. (They rose from 1900 through the 1933 peak, then fell through 1958.) They don't have any natural subject matter to show, unlike magazines about fashion, very important people, gardening, etc. All they wanted to show was Americana. The wider possible variation means it'll be more able to reflect any changes in the zeitgeist.
This look will show whether the same patterns during the recent cycle showed up back then as well, although only people interested in my rising-crime vs. falling-crime ideas will find that interesting. It's even more important because nobody knows anything about American life before 1960. Most views of the 1950s especially are just made up and passed on as folklore. You can get a better feel for those times from The Man in the Gray Flannel Suit, Edward Hopper paintings, and Mad Men. In none of those do you see, for example, a powerful patriarch who keeps his wife under his thumb, whether cool-headedly or temperamentally.
So far I've done four case studies to clarify what the zeitgeist was like during the mid-century, i.e. the falling-crime era of the mid-'30s through the '50s. Rather than review them here, just search this blog for "mid-century". In brief, the zeitgeist then was more cocooning, don't-make-waves, poker-faced, and visually drab and severe. That was a reversal of the rising-crime period just before the mid-century, from roughly 1900 through the early '30s. Back then people were more outgoing, heroic, neighborly, expressive, and ornamental. I've already looked at the reasons why those behaviors go with rising or falling-crime periods, and won't belabor them again here.
The Post covers will let us get more quantitative, an approach I took earlier when looking at Radio Shack catalog covers, which are more likely to show people during rising-crime times. To get at the socially approaching vs. socially avoidant, people vs. things, empathic vs. autistic split, I coded the covers at two extremes.
On the people-person side, I judged whether the cover showed a portrait or not. These are not just pictures of people, but ones where the face is central, and moreover where its expression isn't simply a reflex to conditions in the immediate environment. The person is letting their guard down enough to show some authentic part of themselves. So, someone laughing at a jester isn't in portrait mode -- presumably anybody would laugh at that jester, and the laugher is thus interchangeable with anyone else who could've been shown laughing, not unique. There cannot be any real narrative content, which would provide backstory or reasons why the person is making the expression they are. To be an invitation to get to know the real person better, the expression has to be fairly context-free. I didn't mind if there were props (say, a woman holding a tennis racket), but they couldn't have narrative force.
On the socially avoidant side, I judged whether the cover focused on one of three types of non-human content -- animal-only, landscape, or tech / engineering / misc. artifacts. Several covers overlapped the landscape and engineering categories, so I lumped the two groups into a single non-figurative group. The pattern over time for all non-human covers or just those with non-figurative covers is the same since the landscapes and tech covers dwarf the animal-only covers. To make a stronger case about socially avoidant or autistic tendencies, I'll use the non-figurative covers, leaving out the animal-only ones.
Here's how the popularity of each cover type changed over time:
Right away we see portraits being very popular in rising-crime times, then falling off toward near disappearance by the end of falling-crime times. It looks like they overshot the emphasis on portraits in the 1900s, as they groped to find their niche. But they basically fluctuate between 20 and 25 covers per year. There is an unusual plunge during 1928-30 (so, too early to reflect the stock market crash), but it recovers for the next four years, right up through 1934, the first year of falling crime rates. It's not until 1935 that they start plummeting steadily.
With all those portraits, not to mention the other less personal pictures of people, it shouldn't be surprising that the early decades of the Post are mostly free of non-figurative covers. During the entire 61-year period, the number of portraits and number of non-figurative covers are strongly inversely correlated at -0.81, showing that this approach really is measuring an antagonistic pair of preferences.
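(For anyone curious about the mechanics, here is a minimal sketch of how that kind of figure is computed, using made-up yearly counts purely for illustration -- the real series runs 1900-1960 and comes from hand-coding every cover:)

```python
import numpy as np

# Hypothetical yearly counts, for illustration only; the actual series
# runs from 1900 through 1960 and comes from coding each cover by hand.
portraits      = np.array([24, 25, 22, 20, 18, 12,  8,  5])
non_figurative = np.array([ 1,  0,  2,  3,  6, 14, 19, 24])

# Pearson correlation between the two yearly series
r = np.corrcoef(portraits, non_figurative)[0, 1]
print(round(r, 2))  # strongly negative for these toy numbers, as in the real data
```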
There are some early covers that focus on railroads, cars, boats, etc., but only rarely. Sometime in the early or mid '30s, this category finds legs and eventually comes close to half of all covers. At first the category is more tech than landscape, though by the '50s they're about even.
The spike in the mid-'40s is due to the obsession with machine-saviors during WWII. During WWI, there was none of that sentiment: it was more about the heroism of an individual or team armed with rifles at most. Not only had we moved politically and economically closer toward Communism during the New Deal, but we also began worshiping big machinery and heavy artillery itself, and not the individuals who mastered it, just like some Stalinist May Day parade. Here's the cover from Feb. 26, 1944:
Although it calmed down somewhat after the war, the blind faith in and fetishization of technology would continue right through the '50s. It's only during rising-crime times that people grow cautious about unchecked technological change, which they see as potential evil magic. They believe more in the supernatural and magical because the rising sense of danger makes them search for extraordinary causes; it can't be one of the usual suspects.
At any rate, this content analysis should dispel some misconceptions about American society before the 1960s, as well as give detail to periods that we have no conception of at all. The fit with the rising-crime vs. falling-crime periods is very strong, just as it was during the recent cycle of violence. People were more interested in people during the '60s through the '80s, and have become more autistic in the past 20 years. The simplest reason is that when the world gets more dangerous, you need to reach out to others more in order to make it, but when it gets safer, you don't feel as strong of a need to hang on to them any longer.
I thought of putting together a gallery of covers from the more inspiring Jazz Age to counteract the machine worship above, but there are too many to choose from. Just browse around that first link to the covers. Harrison Fisher's portraits are mostly realistic, somewhat atmospheric pictures of inviting young babes. Art Deco lovers will like J.C. Leyendecker's more stylized and ornamental portraits. And of course there are the All-American nostalgic portraits of Norman Rockwell, who hit his peak during the Jazz Age as well, not the '40s or '50s as you might have expected.
It's like how the '80s saw the dominance of Duran Duran and John Cougar Mellencamp at the same time. In more heroic times, every group does things better.
...More content analyses of Post covers to follow, on different themes.
February 18, 2012
Watching Network to see Millennials as the next Silent Generation
The vapidity and social-emotional stunting of the Millennials has become clearer over the past five or so years. Earlier than that, most of them weren't really old enough to judge, and during the mid-2000s everyone was a bit more outgoing and fun-loving, which disguised their fundamentally avoidant nature.
It looks like social avoidance goes up in falling-crime times, as people see less reason to band together, look out for each other, and so on. And Millennials indeed grew up in such an environment.
A helpful way to look at this generational gap is to ask if it's happened before -- not just a gap, but one with these particular features. We need to ask which earlier generations were like ours, and which ones were like theirs. Did they clash the way we do now?
Earlier I showed how to move back and forth between recent historical periods in America by asking where they were at in the crime rate cycle, since that is the strongest influence on the zeitgeist. The two peaks, one in 1933 and the other in 1992, are separated by 59 years. Thus, you can take a recent year and move back 59 years to find a similar year in the past, or take a year far in the past that you don't know about and add 59 to arrive at a similar year that you'll have a better feel for.
Millennial births begin in 1985. Moving back 59 years, we find their ancestors born starting in 1926 and lasting for a while after, i.e. the Silent Generation (whose last births are in the mid-'40s). Several older generations look at Millennials like they're from another planet; again just subtract 59 from your birth year to find out who your ancestors would have been.
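If you want to play with that mapping yourself, here's a minimal sketch of the 59-year shift. The offset and the 1985/1926 pairing come straight from the discussion above; the function names are just made up for illustration:

```python
# The two homicide-rate peaks, 1933 and 1992, are 59 years apart, so
# shifting a year by 59 maps it onto its analogue in the other cycle.
CYCLE_OFFSET = 1992 - 1933  # 59 years

def analogous_past_year(year):
    """Map a year from the recent cycle back onto the earlier one."""
    return year - CYCLE_OFFSET

def analogous_recent_year(year):
    """Map a year from the earlier cycle forward onto the recent one."""
    return year + CYCLE_OFFSET

# First Millennial births (1985) line up with first Silent Gen births (1926).
assert analogous_past_year(1985) == 1926
assert analogous_recent_year(1926) == 1985
```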
One of the most vivid portrayals of how bizarre the Silent Generation seemed to those who were older can be seen in the movie Network. The least sympathetic characters are played by Faye Dunaway and Robert Duvall. Although they aren't playing themselves, they are still playing someone close to their own age, which is what matters here. Dunaway was born in 1941 and Duvall in 1931, so both are Silent Gen members.
Their dismissive-avoidant style of social and emotional attachment, combined with their numbness to and lack of yearning for anything eternal, moral, or sublime, clearly separates them from the more sympathetic characters played by William Holden and Peter Finch. They were born in 1918 and 1916, respectively, so their descendants are 1977 and 1975 births, making them Generation X. Screenwriter Paddy Chayefsky was born in 1923 and director Sidney Lumet in 1924. So their descendants would be 1982-'83 births, part of the small cohort between Gen X and Millennials, though more similar to the former.
The conflict between the Holden-Finch and the Dunaway-Duvall generations pervades the movie, but it's revealed in its rawest form during a breakup scene between Holden and Dunaway, who had been carrying on a May-December affair marked by her inability to connect emotionally. YouTube won't let me embed the clip, but click on the link:
Network breakup scene
Notice that the pre-Silent Gen figures weren't born in the 1900s or early 1910s. Those people, having also come of age during rising-crime times, would have resembled Holden, Finch, Chayefsky, and Lumet, but they wouldn't have been close enough in age to the Silent Generation to have been familiar with all of their gross and subtle differences. It's the same today with Millennials. Few Boomers notice, as most of their relationships with them are parent-child. Generation X and Y, who interact with them more and outside of family contexts, spontaneously remark, in detail, about how dorky and stunted the Millennials are.
Chayefsky and Lumet weren't even adolescents during rising-crime times, which suggests that even making it through primary school age in such an environment goes a long way to making you human. That shows up in today's world too, where Gen Y (i.e., born between '79 and '84) aren't quite as developed as the Boomers and X-ers, but still closer to them than to the Millennials, who are a quantum drop below.
We must be forming our expectations of what the world is like, and unconsciously molding our minds to reflect that, even as toddlers and elementary school students. Maybe even in infancy, noticing how expressive people's faces are around each other, how open vs. restricted their vocal inflection is when they talk to each other, how much pheromones they're pumping into the air, and who knows what else.
It looks like most of the learning is done from age 7 and after, given that Millennials born in the later '80s, who were toddlers during the end of the rising-crime period, look just about the same as the ones born after the early '90s peak, who have lived entirely within falling-crime times. Learning in infancy and toddlerhood probably makes a difference only if it persists through the elementary school and early adolescent years. Otherwise, if there's a fundamental shift in the environment during ages 4-7, the mind says it's not too late to change, and unlearns all that now irrelevant -- even misleading -- information it had processed earlier on.
February 15, 2012
Why don't gays like long hair?
Since we're likely to hear more and more lies about how gays are just like everyone else, it's worth mentioning and trying to understand some of the ways that they aren't like us, whether we're male or female.
Most gay deviance seems to boil down to two disorders (aside from homosexuality itself): 1) a Peter Pan complex, and 2) an addict's mind. So when trying to account for why they're weird, we should try to keep it simple and stick with those.
A strange fact that I have no good explanation for is that gays only prefer very short hair, both for their own hairstyle and for their ideal warm body ("partner"). It must be the only time they'll complain that 4 inches is too long. This holds across cultures, as any fag parade shows, and it seems to go back through time, judging from the preferred models of gay artists.
Too much lazy thinking about homos tries to interpret their deviance as an extreme form of a male or female-typical trait. Here is a clear case where that breaks down.
Chick porn, i.e. romance novels, depicts men with long hair, or at least medium-length hair -- anything other than very short. Male sex symbols rarely have such short hair either, again if anything a bit longer than average. Girls show plenty of variation, some preferring a bit longer and some a bit shorter and some more medium-length. Gays are just about uniformly against longish hair.
And obviously men don't prefer short hair in women, just the opposite. So this preference is a uniquely gay trait.
I don't see this fitting in with their addictive tendencies, so that leaves their Peter Pan-ism. If gays are attracted to males who resemble little boys -- not a stretch, given their other quasi-pedophilic preferences -- then any one of them will have to wear short hair to get approved.
Pre-pubescent boys and girls don't have very sharply distinguished appearances, so usually the grown-ups help create those differences in part by letting girls' hair grow and making boys cut theirs. During adolescence and after that isn't necessary, as a long-haired 20- or 30-year-old guy will have a jawline, chin, broad shoulders, deep voice, etc., to clearly signal that he's male.
One flaw in the argument is that even the queers who like hairy, burly men also don't wear their hair longer than a crew cut, and prefer the same in their men. Liking body hair and a stocky build would seem to go against wanting a little boy appearance.
Anyone got any better ideas?
February 14, 2012
Slow dance music for a New Wave Valentine's Day
As people withdraw from social life during falling-crime times, their ego needs a soothing and flattering cover story, typically a derogation of the good old days. When the world was more expressive and approaching toward others, it was easier to find cases at the far extreme of that spectrum, so clingy that they invite easy caricature. "Wake Me Up Before You Go-Go," "Don't You Want Me," etc. -- hardly bad songs, but still ones that will understandably be the first targets of the pro-cocooning dork squad.
Hey everybody, that hysterical neediness is what openness and gregariousness ultimately lead to -- and you don't want THAT, do you?
The mellow response is that, sure, sometimes they'll go too far, but you can just ignore those and enjoy the ones that have plenty of vital force without going overboard -- "In a Big Country," "Save a Prayer," "Back on the Chain Gang," and so on. In order to keep this obvious truth from people's sight, only the easily mocked extremes can be allowed to remain in the public's memory, so the richness of the full range must be erased.
By the late 1970s and early '80s, rock music had gone through a coming-out-of-its-cocoon phase, then after the initial headiness wore off, an attempt to steer it in more Very Artistically Serious directions, followed by a reaction against that pretentiousness and back toward a stripped-down early sound.
Yet musicians and audiences must have felt that the deliberately retro approach of punk-influenced rock music and Motown revival R&B music had been a bit too knee-jerk -- yeah, all of that overwrought stuff from the late '60s and early '70s took itself way too seriously, but abandoning that level of seriousness doesn't mean you have to be so self-conscious about it. Having that mental spotlight switched on only makes it harder to get absorbed in the music.
Around 1982, what would become New Wave music began taking over, somehow combining the lack of pretentiousness from punk, the inventive and ornamental drive from the Counter-Culture era, as well as the carefree spirit from rock music's younger years. It really did sound new! Although its heyday was already gone by 1985, its influences could still be heard later in the decade in "Shattered Dreams" and "Need You Tonight," among others. Pioneers Duran Duran released the recognizably wave-y hit song "Come Undone" as late as 1993.
(And when, for a few years in the mid-2000s, the culture moved somewhat against the trend of drowsy and unskilled songwriting, it was the body of New Wave music that they looked to revive.)
Most people today, even if they were around for its original release, probably don't remember Roxy Music's final album Avalon, even though at the time it topped the UK and Australian album charts, and still reached #53 in America. It's saddening to think that the major force keeping it alive in memory is a karaoke scene from the watchable but mostly forgettable movie Lost in Translation.
So both for preservation, as well as for Valentine's Day, here is the greatest slow dance song ever written:
February 12, 2012
Whitney Houston
I can't say that much about her music since I only own one of her albums, Whitney. Still, what a breath of fresh air it is, all of the hit songs blending a deeply insecure view of herself with an intense yearning for intimacy, which keeps her from just feeling sorry for herself and impels her to do something about it. Here's one of everyone's favorite sunshine-wrapped-around-your-shoulders kind of songs:
It's like a voice from another world, now that pop singers have become so self-aggrandizing and dismissive of needing to connect with others. Tellingly, the most boastful have nothing to boast about, whereas a real talent like Whitney Houston seemed if anything to need frequent validation of her worthiness.
That somewhat insecure self-image combined with her desire to reach out to others made her a naturally great performer, even if at the cost of leading a rollercoaster lifestyle and having a bipolar persona. Hey, among those who aren't going to develop into completely mentally healthy adults, I'd rather hang out with someone who's bipolar than non-polar or anti-polar, the repulsive types that we seem to be breeding so many of lately.
During the trainwreck part of her career, at least she tried to keep it semi-private. Her husband started his own reality show, but at least as far as I remember, she didn't try to flaunt her celebrity fuck-up lifestyle, nor did she write one of those lame "I love haters" songs in response.
A more heartwarming way that she kept it old school was by not pandering to nancyboys, attention whores, or wannabe hardasses, which is where most dance and R&B music has gone. That had to have taken guts after watching Madonna and Kylie Minogue re-brand themselves that way, not to mention newcomers like Fergie and Lady Gaga, who hit the skankified ground running.
I don't know much about what her charity did, but it sounds like your average needy children's foundation, as in sick, poor, from broken homes... you know, needy. Not a sob story charity for the children of illegal immigrants, gay teens who get taunted for talking in a sissy voice, etc.
In a world where most celebrities pride themselves on wanting to wreck the culture, the death of the somewhat traditional-minded Whitney Houston is a real loss.
February 11, 2012
Cold Case's portrayal of the history of crime
I've never watched the show Cold Case, but it and similar shows are popular enough that it's worth looking at to see what people these days want to believe about the pattern of crime over time. Like other TV shows, Cold Case does not warp the unwilling minds of viewers -- they'll just change the channel or not bother watching in the first place. It tells people a story that they already wanted to hear.
You could bicker over many aspects of the intended realism, but one of the least nitpicky ways is to look at how the crimes depicted are distributed over time, and compare that to the pattern over time of violent crime rates. *
Here's how the show's crimes stack up across the years, lumped into 3-year spans labelled by the middle year:
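(For anyone who wants to redo this kind of tally for another show, here's a minimal sketch of the binning. The episode years below are made-up placeholders rather than the show's actual data, and I'm assuming the 3-year spans are anchored at 1900, which the chart doesn't specify.)

```python
from collections import Counter

def span_middle_year(year, span=3, start=1900):
    """Return the middle year of the 3-year span that `year` falls into."""
    return start + ((year - start) // span) * span + span // 2

# Made-up placeholder years for depicted crimes -- not data from the show.
episode_years = [1929, 1947, 1953, 1968, 1969, 1971, 1988, 1995, 2003, 2004]

tally = Counter(span_middle_year(y) for y in episode_years)
for middle, count in sorted(tally.items()):
    print(middle, count)
```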
Only a few of the crimes took place during the Jazz Age, a period like the '80s that had seen violence rates rising for decades. Far more of the show's crimes took place from the late '30s through the mid-'50s -- the writers mistook film noir crimes for the real thing, as the homicide rate plummeted after its 1933 peak, through a low in 1958.
They do get the rising-crime era of the '60s, '70s, and '80s correct, although there's a very neglected span from 1970 to '72. That was still the heyday of the Counter-Culture, the anti-war movement, and so on, but it has been filtered out of public memory because it did not occur during the decade of the 1960s. All of that has been squished back into the '67-'69 span, where audiences will infer the context more easily. **
One of the least depicted spans of the more recent years is 1991-'93, centered around the very year when crime rates peaked in 1992. Since then, Cold Case crimes soar, even though crime rates have fallen off a cliff. So aside from the early '90s, the picture is that things started getting bad in the late 1950s and have only gotten worse since then.
That error shows up all over -- the idea that the influences of The Sixties have either remained or worsened. Some of them were reversed almost immediately, like the race hustling, although it did come back with a vengeance during the falling-crime period of political correctness and identity politics. Others lasted through the rising-crime period, like drug use, gregariousness, community-mindedness, religiosity, and equating love with sexual activity, only to evaporate during falling-crime times.
In any case, you might ask what the big deal is about some TV show misrepresenting the history. But if we're going to understand anything about what causes crime rates to cycle up and down, we have to at least have a good picture of what that rising and falling pattern looks like over time. Then we have to have a not terribly biased picture of what society looked like during the different rising and falling phases. After that, we can look for common threads in the rising and falling phases, or what society was like just before the crime rates made a turn upward or downward.
There's also something that feels like desecrating the past by erasing and re-molding its features, as well as introducing those of your own, to make it tell the story that pleases you. By exaggerating the violence of the past two decades compared to the previous three, they come close to trivializing the memory of murder, robbery, and sex crimes. Periods when serial killers were truly wreaking greater havoc are downplayed and not taken as seriously, while the safer periods are blown out of proportion like some middle-class wigger faggot trying to boast about how ghetto-hardened his life has been.
* You could be more accurate and look at the count of violent crimes, not their rate. But the main question here is whether the period of falling crime since the 1992 peak shows up in the TV show, and since the population size did not soar during this time, counts and rates will tell the same story.
** The Sixties (capitalized) lasted through 1973, yet our desire for uniform boundaries on time periods makes us confuse decades beginning with 0 and ending in 9 with a coherent zeitgeist. Similarly, The Fifties lasted through 1962.
February 9, 2012
The breakdown of interaction in online social media, '90s to present
Social interaction cannot be replicated online, but there's still a range among online social media from more atomized and impersonal to more interactive and personal.
The most noticeable trend since the rise of the internet is how far it's moved in the detached and withdrawn direction. This is just the online version of the wider trend toward cocooning, but it's worth noting since the internet could have evolved to serve as a (poor) substitute for real-world gregariousness. Instead people act online more or less the way they do in real life, in this case anyway.
Hardly anyone was connected to the internet before about 1994, when AOL began spreading like mad. Online social media thus all grew up in falling-crime times, but the shift toward cocooning has been gradual. When teenagers first filled those AOL chat rooms, we would talk to anyone about anything. It was like an online version of a corner hang-out, mall food court, etc. There weren't any real divisions of chat rooms by theme, demographic, or anything else. Just a free-for-all, though again most everyone was a middle-class teenager.
Those kinds of chat rooms had just about vanished by the late '90s. If you were in a chat room at all, it was probably around a specific theme and only open to a small group who already knew each other online. I don't think any teenagers today spend time in chat rooms of even that narrower type.
As for media that let you interact with people you know in real life, first there was AOL's Instant Messenger. It was released in 1997, and at first it was mostly for messaging people who were online friends. By 1999, it was more common to use it for people you knew in real life, like friends at high school or college. It was more or less like texting, only you had to have computer and internet access.
Already by the early 2000s, people began abusing the "away message" feature, which was probably intended to serve like an answering machine message for when you'd be away from your computer for a while, though still logged in to IM. Yet it morphed from simple things like "at the library till 5, then dinner at the Max" to the more attention-whoring broadcasts that have become standard in Facebook status updates (more about those in a bit). Usually it wasn't an obscure rant, but more like a long quote or a bit of song lyrics. Either way it made IM a little less personal, knowing that others were using it more and more to tell the world what their favorite quotes were (who cares?).
The next big thing was MySpace, which I actually found more interesting than IM. It was part of that small, brief detour away from the boring and cocooning trend, which I speculated could have been a reaction not to a rising crime rate, but to 9/11, a similar kind of cause. That zeitgeist had gotten started by 2003, peaked during 2004 and '05, was hanging on in '06, and began returning to the longer-term trend sometime in 2007.
You could use MySpace just like IM, sending messages back and forth to people you knew in real life, as well as leave lots of Who Cares? junk littered around your customizable profile. But it was more open than IM because anyone could view anyone else's profile, except for a small number who kept it all private. This admittedly small level of letting your guard down and trusting others online, being willing to disclose at least somewhat personal stuff, was nevertheless unique compared to the longer-term trend toward emotional distance and avoidance.
Sure, some people were a bit too confessional (TMI), but they were only the right tail of the distribution, showing that on average everyone had moved a bit in the more open direction.
By 2006, Facebook began sucking users away from MySpace, and by now has all but killed it off. Since this is around the time when the brief bit of sociability was fading from the zeitgeist, it's interesting to look at the devolution of social interaction on Facebook over the years.
When it started out, Facebook was more insulated than MySpace because you could only see profiles of people in your network (typically a person's high school or college). That was just a small step back toward the cocooning trend. There was still an interactive feel to it, though. Katie posts a message on Dani's wall, Dani replies with a message of her own on Katie's wall, and so on. Not so much the guys, but definitely the girls left "bumper stickers" (funny or sweet pictures) on each other's walls just about once a day.
At the start, it was like an avoidant kid's version of exchanging notes through their friends' school lockers. Katie directed each of her messages to particular individuals, received personalized messages in return, and felt free to start a topic of conversation about whatever.
Gradually Facebook profiles have become more and more closed; now you basically have to be in someone's friend list to see their profile. Also the stuff that you might not have known about your acquaintances -- who their favorite author is, what their hometown is, their religious views, etc. -- has been relegated to a tab that no one will click on. Before, all that stuff was right there on the main profile page, but now it's hidden. I don't know what it is, but people feel like it's snooping to click on their "Info" tab, so that semi-personal stuff rarely gets shared anymore.
But by far the worst change on Facebook is the shift from exchanging messages with specific individuals to broadcasting a single status update to nobody in particular. Just look at anyone's wall these days -- it's mostly the person talking to themselves through status updates, along with the odd photo that is again being shared to no one specifically.
Originally the status update was like the answering machine message -- "out getting burger king, call my cell if you need me." Now the blank form says, "What's on your mind?" because it's mostly used to blurt out how you hate stupid people on the bus, how this new mango avocado dip is like seriously the sickest thing ever, or whatever else that no one really cares about.
Because these broadcasts aren't directed at specific people, none of the recipients in the friend list feels like responding. ("Well, it wasn't addressed to me...") Just about everyone ignores them, maybe 2 or 3 people will "like" the update (i.e. want to make it known they agree), and maybe 2 or 3 people will actually respond with a message, although that's only about every other status update. So overall perhaps 1 or 2 people will post a response.
Typically those few who do respond are the same people who respond to all other updates -- they just feel like responding no matter what. So it's not as though a large number of your friends respond to your updates, a handful to this update, another handful to that update, a different handful still to some other update... That's quite a reduction in the range of people you communicate with on Facebook, compared to the days when you only posted messages on each other's walls.
Moreover, the range of conversation topics has now been narrowed to whatever the broadcaster chooses to make a status update about, since that's now the context of most messages you get from others. This move toward egocentrism is off-putting to would-be responders. Not surprisingly, you rarely see spontaneous, unsolicited conversation-starters on people's walls anymore.
Facebook started off as a somewhat more closed version of the anything-goes MySpace, but by now there is virtually no interaction going on. It's just a bunch of atomized people broadcasting their random thoughts that are mostly ignored, occasionally liked, and rarely replied to -- and even then only by the same few reply-happy people. It's hard to believe that they used to use it to exchange messages person-to-person, each conversation having a topical flow of its own.
This is just an impression since I don't have hard data, but from what I see of young people now, it seems like Facebook is taking away more of their time from texting, a very annoying but still more personal way of communicating. Texting co-existed with MySpace and the early Facebook, so what gives? Again I think that was just part of that brief bubble of more sociability during the mid-2000s. Now even texting is too personal and direct, so people are spending less time checking their texts and more time hunched over their laptop to access Facebook, or perhaps using their phones -- but still to use the Facebook app.
The now two-decade-long trend toward social avoidance has gotten so bad that even texting seems quaintly intimate. And the medium that looked like it might offer an online form of gregariousness has instead changed to serve the population's demand for emotional distance and lack of interpersonal attachment, plus a good dose of attention-whoring.
It really makes you wonder what the internet and online social media would have evolved into if they'd been introduced and adopted in the 1960s, '70s, or '80s instead of the '90s and 2000s. But I guess we'll find out when the zeitgeist eventually swings back in the outgoing and fun-loving direction.
February 8, 2012
Battleshots: When beer pong isn't boring or nerdy enough
Last summer I took a quick (and drunken) look at how the rise of beer pong reflects lots of the trends during falling-crime times, such as an obsession with rules and structure, minimizing social interaction, passivity / inactivity, and so on. Nothing wrong with that stuff in its proper place, but when it dominates even a party setting, it's time to roll it back.
Beer pong at least involves a minimal amount of physical activity, the potential for things to get messed up, etc. How could we make it even more of a buzzkill? How about making a shot-taking game out of Battleship --
Do a Google image search for "battleshots" and you'll see tons of pictures, and Google Trends shows that people started searching like crazy for the term in the last quarter of 2011. So who knows if it'll actually catch on, but I wouldn't be surprised. They may settle on a different board game, just something that removes what little physicality there is in beer pong.
The only level left to take it to is to combine shots with a role-playing video game based on one of the dorky franchises that Millennials love, perhaps SpongeBlotto.
By the way, this shows the importance of not just looking at quantitative data, which show that teenage drinking has been falling since the late 1990s. Even among those who do still drink, the whole experience has become watered down (sorry, that one was unavoidable).
February 7, 2012
Which narrative media last? And could video games join them?
If we judge success over the long term, it doesn't look like video games will last. But why not?
First it's useful to look at other media that tried but ultimately failed to thrive as narrative forms. TV shows have been mostly a bust, again judged long-term. Aside from the serial dramas Twin Peaks and Mad Men (a safe call for long-term recognition), hardly any of it holds up, in the sense that people will still want to watch the episodes in re-runs, or for the first time, decades later. I would add sit-coms like The Simpsons and Seinfeld, but comedies generally don't last very long. Give them another 10 years, and they could be as faintly remembered as All in the Family, another great sit-com.
Narrative radio programs have not lasted either. No one goes back and listens to the originals, or bothers re-interpreting them with contemporary actors. Judging from the ratings estimates, as well as the recollections of children of radio's heyday, the death of radio narratives was almost instantaneous once TV came along.
Also gone are serial short films that you'd see as part of an afternoon of movie-going. They didn't even last through the 1960s. George Lucas may have been inspired by some of the adventure serials when he thought up Star Wars and Indiana Jones, but now we're talking feature films. The first Star Wars movie is nearly 35 years old and continues to absorb audiences, whereas the Flash Gordon serial from 1936 was not still captivating viewers in 1971.
Last, and probably most relevant to video games, are comic books, intended more for a younger audience. As with serial films, some of the characters and related stories have achieved lasting fame when they were turned into movies. Still, few read comic books anymore, let alone the classic ones.
There are a bunch of others -- dime novels, penny dreadfuls, etc. -- but you get the idea.
Which narrative forms have worked? Epics, sagas, tale cycles, folktales (including urban legends and fairy tales), plays, frame tales (harder to pull off), novels, movies... maybe a few others that I'm forgetting.
Right away we see it has nothing to do with ancient vs. modern. Novels and movies are both very new, yet they've caught on for good.
The main difference is how interruptible the narrative is. We want to get absorbed in the story, shut off our self-consciousness, lose track of time, and enter a dreamlike state. Every interruption that jars us awake from the dream ruins the experience. We might give a new medium the benefit of the doubt for a while, or ride the fashion wave while it lasts. Ultimately, though, if it is too easily interrupted, future generations won't put up with it.
The most successful media offer you an experience that you can take in during one sitting, probably not more than a few hours, like plays, movies, and oral folklore. Even the longer forms can usually be broken up into several coherent chunks without feeling like the dream has been interrupted. You might read a novel in a few sittings, or watch the Star Wars trilogy on separate occasions (as you had to when they first came out).
The chunks of an epic or novel are coherent and nourishing enough that you don't feel like you were just about to get to something good and then -- oh no, tune in next week to see how it all turns out! No one appreciates a culture-maker who keeps on giving the audience blue balls, some cold lab scientist who keeps them running tired on a treadmill for weeks on end. Imagine if you had to wake up and fall back asleep 10 times in a night just to finish a single dream! The chunks of a larger successful medium are more like several fluid, completed dreams over the course of several nights.
Watching TV shows when they air is the worst because there are two nested layers of interruptions -- the week between episodes, and then, even within an episode, all the fucking commercials.
With that big picture in mind, what does the fate of video games look like? They'll probably go the way of comic books and thriller dramas on radio. I never got into narrative video games, but I do keep in touch with what direction the video game world is going. By now you're lucky if a narrative video game only takes 15 hours to complete, and it's easy for them to last 20, 40, or 60 hours. This new Skyrim game that everyone is crazy for can provide players with more than 100 hours of gameplay before the story is over.
At that large a scale, kiss the narrative good night. Even a long novel, say an 800-page Gothic novel with 250 words per page, would have about 200,000 words. At the slowest typical reading rate for comprehension, around 200 words per minute, it would only take 16-17 hours to read. A 300-page novel read by someone on the faster side, at 400 words per minute, would go even faster, at just over 3 hours.
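For anyone who wants to double-check those back-of-the-envelope numbers, here's a minimal sketch in Python (the page counts, words-per-page figure, and reading speeds are just the assumptions from the paragraph above, not measured values):

    # Back-of-the-envelope reading-time check; the per-page word count and
    # reading speeds are the assumptions stated in the paragraph above.
    def reading_hours(pages, words_per_page=250, words_per_minute=200):
        total_words = pages * words_per_page
        return total_words / words_per_minute / 60

    print(round(reading_hours(800, words_per_minute=200), 1))  # long Gothic novel: ~16.7 hours
    print(round(reading_hours(300, words_per_minute=400), 1))  # 300-page novel, fast reader: ~3.1 hours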
Once you get to 20, 40, 60 hours to complete the narrative, now you're into TV and comic book territory. It's not just a handful of chunks that hang together in a gestalt, and experienced over a few days. Now we're talking dozens of chunks experienced over a week or more (aside from real weirdos who would finish a 40-hour game in two 20-hour marathons).
I liked video games best when they didn't have any narrative pretensions at all -- you're some good guy, a bad guy is causing some kind of trouble, now go stop him. Who cares beyond that, we just wanted to have some pointless fun for an hour or so. Now everyone is so obsessed with the story, no matter how lame. Even the not-so-story-driven games like first-person shooters: googling "call of duty" "spoiler alert" gets over a million results.
To end with, I know some video game addicts are going to geek out in the comments about how the narrative games have only gotten more and more popular, how they're here to stay, etc. But just remember that people said that about penny dreadfuls, comic books, serial films, radio dramas, and the rest. They kept getting more and more popular, until they evaporated. Given that narrative video games are much closer to the serial media than to the more concentrated, digestible media, their long-term fate looks pretty bleak.
February 5, 2012
When did the loudness wars in recorded music begin?
At least according to the Wikipedia entry, in the early 1990s. No surprise there -- just about everything went wrong around that time, and has only gotten worse since. Those changes therefore require a few broad causes to explain them, unlike the proliferation of specific causes that everyone cooks up.
The over-arching one is the trend in the crime rate, although I don't see that working directly here. Falling crime does reduce creativity, though, which sounds right in this case. The junkier the composition, the more you have to rely on over-the-top presentation to make an impression.
I think you see that in a related way with how blaring the composition itself is, apart from how loud it's played live or mastered on a recording. Heavy metal from the good old days isn't overly busy or offensive to the ears (I mean Judas Priest, Iron Maiden, Scorpions, and so on). Once metal bands couldn't compose anything good, why not try to make albums that sound like Thor is farting thunder, the Devil is growling a tidal wave, etc.? Same thing with swing music in the later '30s and early '40s -- absolutely, positively cacophonous compared to the sweeter dance music of the Jazz Age.
The '90s also seems to me the time when TV commercials got horribly loud. I don't remember people always muting commercials in the '80s. Same reason as with recorded music: those commercials were sincere and inoffensive ("This Bud's for you"), whereas the snarkily composed ones from more recent times need to be louder since we tend to tune out obnoxious sarcasm.
Nothing in the CD or TV technology of the '80s prevented producers from starting a loudness war, so it was either a change in the content of what was being recorded / broadcast, or a change in audience preferences. The content change is obvious. I was about to say I'm not sure that people like listening to loud junk, but then I remember how often you hear just that blasted out of a car stereo. Or how often you hear it in a mall or shopping center, where every store has its own awful music playing right next to everyone else's. It's not so loud that you hear it down the block, but even just strolling along the storefronts, it's off-putting.
The one place I enjoy loud music (other than the obvious dance club) is supermarkets. They're the only place that sometimes plays good music, and it can be hard to hear it above all the rolling cart wheels, crinkling bags being pulled off of shelves, babies crying, and whatnot. The only wrinkle is that the supermarkets that draw penny-pinchers will frequently interrupt good songs just to blare some horrid pitch for UNBEATABLE BARGAINS.
It doesn't even have to be an expensive supermarket, just one that isn't trying to be an efficiency-obsessed hellhole. ("What a waste to play music on the PA when we could be hawking our deals!") Tonight I got to hear "Walking on Sunshine" and almost expected to hear "Tenderness" after -- well, some day.
February 2, 2012
Women-only gyms and ego-protection
On the topic of how socially avoidant women have become, when did all these women-only gyms start popping up? Curves was founded in 1992, and began franchising in '95. That suggests it's part of the cocooning trend since the early '90s, one aspect of which is the re-segregation of the sexes.
I couldn't easily find data on numbers of women-only gyms across the years, but the two sources that mentioned their history both say the same thing (first and second source):
As recently as the early 1990's, health clubs were typically co-ed gyms that promoted muscle-building exercising on stacked-weight equipment to an under-50's clientele. Today's fitness centers are likely to cater a special demographic (teens, women-only, families, seniors) . . .
And:
Women's Only Facilities are a trend that is on the rise. Since the early 1990s women's only health clubs such as Lady of America, Ladies Express Workout and Curves for Women have gained popularity because owners target the club's environment and workouts to their female clientele.
They were already well under way by 2000, when this article on CNN reviewed the legal battles surrounding their rise.
The only pre-'90s example I could find was a chain called Spa Lady. From this recollection, though, it sounds like more of an aerobic dancing place:
After I turned 17, I began working out at a women's fitness club called "Spa Lady." Spa Lady was "the" place for women to be back in the mid and late 80s! It had some fitness equipment, but we ladies mostly loved going to dance our aerobic steps in hot pink leotards and tights!
People who desire more social contact need to have a higher threshold for what they consider "creepy," or else they'll find too many people off-putting and won't get the chance to make new connections. Back in the good old days, exercising around a bunch of sweaty, horny guys wasn't creepy. Now that their threshold has fallen through the floor, nothing could be more frightening for women than working out next to dudes.
It's odd that the women-only gyms try to sell customers on the idea that sheltering will boost their confidence. It certainly makes them more comfortable and less self-conscious, but greater confidence only comes after conquering fears, achieving what you thought your self-consciousness would prevent you from doing, and so on. "I didn't know I had it in me!"
In a way it's like guys who say they're more confident after hiding away in their man-cave, where they can have video game marathons and watch cartoons without shame, when shame is exactly what they should feel.
There's a time and a place for not feeling judged, but people have come to value that over everything else. Part of social life is other people in your network passing judgment on you, not necessarily in a "We need to talk" way of course. This whole need to never feel judged underlies the women's gym stuff, but also serves as a broader way to isolate yourself from others. "I'll only let you into my circle if you promise never to judge me." Uh, sorry, that's not what normal people do; they judge the people they care about. "Well sucks to them, then! I don't need anybody anyway!"
That mindset also ensures that their so-called friends won't really care about them. Either they'll only be shallow acquaintances who never look out for and support each other, or the "I'll never judge you" person is just trying to use the other one (usually unsuccessfully, as when dorks try to play the white knight for a female acquaintance).
Girls were much more well-adjusted back when they wanted to be judged by the boys. They were a lot more on-display and inviting, even approaching boys themselves. They were just eager to figure out what boys thought of them. I guess the truly repulsive ones felt rotten when their worst fears were confirmed, but everyone else would've felt a boost of confidence after learning that it wasn't so intimidating after all.
Putting yourself out there in front of the boys shouldn't be confused with trying to be one of the guys, of course. That's the other awful trend in women's fitness centers -- trying to turn them into butt-kicking babes. Whether through mousiness or mannishness, the gym-going gal of recent times has achieved her goal of keeping the boys away.