Earlier I showed that homosexuals are more likely to become serial killers. Taking serial killers to be the extreme right-tail of the group's entire distribution across the violence spectrum, it stands to reason that on average they're more psychopathic than normal males.
What about on the victim's side? You always hear about how bullying is so widespread against gays. But the General Social Survey says otherwise. I split male respondents into those whose sex partners in the past year were only female vs. not heterosexual. It turns out that queers were a tiny bit less likely to have ever been punched or beaten in their lives, 52% vs. 56% for straights. Also, they were equally likely to have been shot at or been threatened with a gun, 33%. (The comparisons do not change if we restrict respondents to urban-dwellers.)
So much for the idea that violent people are more likely to target gays, a la the moral panic over fag-bashing. I think it's just because gays are so emotionally stunted that getting punched is some kind of end-of-the-world thing that they have to broadcast a sob story about.
That idea is confirmed by looking at how afraid men are to walk the streets of their neighborhood at night. Obviously we have to control for the fact that gays are more likely to live in urban, hence more dangerous areas. It turns out not to matter which definition of "urban" we use, so I left it as open to include the greatest sample size (the 100 largest Standard Metropolitan Statistical Areas, and "other urban"). Compared to straights, gays are more than twice as likely to be afraid to walk their local urban streets, 49% vs. 23%.
The only evidence I found for gays being more subject to crime is for robbery and burglary. Again looking just at urban-dwellers, in the past year 7% of straights vs. 14% of gays had been the victim of burglary (I'm fighting off a joke here…), and 2% of straights vs. 5% of gays had been forcefully robbed. Maybe this is due to their being drawn to the more run-down ghetto parts of town, so that the "urban" control is missing that finer difference in where they live. That could also explain some fraction of the difference in how fearful they are of their streets, although certainly not such a yawning chasm.
Overall then, the idea that queers are any more subject to violent or even property crimes is totally bogus. I'm certain that they got made fun of more growing up (and into adulthood, back in the good old days before everyone was a pussified homophile). But being so emotionally stunted, they never were able to develop thick skin, and every little remark or dirty look sends them into a neurotic meltdown and public temper-tantrum.
GSS variables used: sexsex, sex, srcbelt, hit, gun, fear, burglr, robbry
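The split-and-compare procedure described above can be sketched in code. This is only an illustration: the toy records and category labels below are made up, not actual GSS responses, and a real analysis would run against the survey variables listed above (sexsex, hit, gun, etc.) as coded in the GSS codebook.

```python
# Illustrative sketch of the recode-and-cross-tab logic described above.
# The records are made-up toy data, NOT actual GSS responses.

from collections import defaultdict

# Each toy record: (sex, sexsex, hit, gun) where
#   sex    -- respondent's sex
#   sexsex -- sex of sex partners in the past year
#   hit    -- ever punched or beaten? (True/False)
#   gun    -- ever shot at or threatened with a gun? (True/False)
records = [
    ("male",   "exclusively female", True,  False),
    ("male",   "exclusively female", False, True),
    ("male",   "exclusively male",   True,  True),
    ("male",   "both",               False, False),
    ("female", "exclusively male",   True,  False),  # dropped: not a male respondent
]

def classify(rec):
    """Split male respondents: exclusively-female partners vs. everyone else."""
    sex, sexsex, _, _ = rec
    if sex != "male":
        return None  # analysis restricted to male respondents
    return "straight" if sexsex == "exclusively female" else "not straight"

def rate(group, field_index):
    """Share of a group answering True on the given victimization item."""
    return sum(r[field_index] for r in group) / len(group)

# Bucket respondents, then compare victimization rates across the two groups.
groups = defaultdict(list)
for rec in records:
    label = classify(rec)
    if label is not None:
        groups[label].append(rec)

hit_rates = {label: rate(g, 2) for label, g in groups.items()}
```

With the toy data both groups come out at 50% on the "ever hit" item; the point is only the shape of the recode, not the numbers.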
September 5, 2012
September 4, 2012
Neon lights and caryatids (female-shaped columns)
Shot on location for Scarface, where it plays as the Babylon Club:


It's nothing to shout about, but still the eclecticism looks carefree and playful, taking part in the fun-loving, coked-up party culture of southern Florida in the 1980s. Compare to the more campy Piazza d'Italia, another place that combines the classical with the futuristic: because the latter is so much more spastically self-conscious -- "Hey guys, dig what two styles I'm combining!" -- it appeals to critics, who believe that a work must always "say something." Whether they like it or not, it at least garners their respect for Saying Something, so they include it in architecture history books as an exemplar of Postmodernism. *
Critics seem to be numb to pleasure, especially the guilty kind, where you're going with the flow and not thinking or asking any questions, just letting your senses play around. They prefer the opposite of a guilty pleasure -- something that you cannot enjoy, but that on a conscious, rational level, you tell yourself that you should, usually on the basis of some authority who proclaims, It's a Very Important Piece (git the fuck outta here!).
*Not really that dopey when referring to architecture, as long as you steer away from the superstar architects and look broadly at the visual culture back then, kind of like how Art Deco had no celebrities, manifestos, or other forms of attention whoring.
Categories:
Architecture
September 3, 2012
Cocooning and authoritarianism, the basic picture
That seemed like a shorter title than "Cocooning and authoritarianism, Corporatism, Communism, technocracy, and bureaucracy." You get the idea of what kind of political and economic institutions I'm talking about.
In this first post, I'll take a look at the link empirically, both across regions and over time. Then in the next post, I'll propose two mechanisms that could explain this relationship, one social and the other emotional.
The groups with the lowest degree of cocooning are hunter-gatherers, e.g. the Bushmen of southern Africa, where people don't have private homes and spend most of their plentiful free time socializing and gossiping (and at such close distances that "personal space" as we know it scarcely exists). They also have established social ties to a variety of neighboring groups, wandering over toward one when they're in need of what it can provide, then touring around to another when they need that group's support. These people have nothing like an elite authority who manages their affairs, and come closest to the egalitarian ideal.
After them are the pastoralists, who also are not tied to any patch of land and thus spend much of their time interacting with others. They do hold private wealth, though (mostly in livestock and jewelry), so these interactions can take an antagonistic form over some contested resource like grazing land, travel routes, etc. But then there's the flip-side where they entertain guests to a higher standard than they would enjoy themselves, with the expectation that sometime later on the hosts will play the role of guests in their turn. Whether it's the culture of honor or the culture of hospitality, pastoralists largely carry on in a grassroots, face-to-face manner.
And they too have limited political hierarchy, although more nested grouping than hunter-gatherers have -- livestock herders are too fiercely independent and proud to tolerate too much for too long. They might band together to take out a common enemy, but even these phases show limited authoritarianism and more of a band-of-brothers ethos, like the early period of Mongol expansion. Hierarchical decision-making is not an enduring feature of their societies.
Then come the somewhat more cocooning horticulturalists. They hang out mostly around the home, where their gardens are, with the women doing the hard work at home and a handful of guys getting drunk or high, perhaps while playing music, in the men's hut (kind of like a small frat). They don't interact too much with their "neighbors." When they do, it is even more likely to be hostile than it is for pastoralists, who at least have a reason to be hospitable toward strangers. But gardeners do not go on prolonged travels where they may be in need of the kindness of strangers; lacking the expected benefits that they themselves would receive as guests, they feel little need to extend it as hosts. When kindness is shown to others, it is typically a potlatch kind of ceremony where the Big Man tries to show off how wealthy he is by giving so much away. That produces more of an implicit patron-client relationship between the Big Man and the peons, not a guest-host relationship between long-term equals, where one is only temporarily in need and the other temporarily able to provide.
Their political institutions are also more stratified, going up to tribal chiefdoms or small kingdoms that are meant to last. Not being very nomadic, they don't have the option to pick up and move somewhere else if they're on a crash-course with their neighbors, so that higher authority gets implemented; it isn't just there symbolically.
Finally there are the agriculturalists, who have given birth to the most hierarchical forms of governance, from Ancient Egypt and the Aztecs to Communist Russia and China. These large-scale crop-growers are even more tied to a patch of land than the gardeners are, who at least move every now and then once the current garden is no longer worth planting in. Most of their day is spent carrying out the drudgery needed to make their own private farm work, pretty much restricted to interactions with close kin. Like the gardeners, they don't go on long treks, so they do not have elaborate hospitality cultures. But they do have larger common interests with their non-kin neighbors, like setting up and maintaining an irrigation system that will provide water to all of their fields. So they tend not to interact with their neighbors even violently. There's just not much face-to-face socializing of any kind; even the behavior needed to coordinate their interests goes through intermediaries, e.g. some official who visits each household to collect the dues that fund a public good.
As for changes over time, which are better at showing cause and effect, consider the more outgoing and free-wheeling times in the United States of the early 1900s through the early 1930s, as well as the '60s through the early '90s, which reached their peak during the Roaring Twenties and the Go-Go Eighties. Those were also periods marked by a steady erosion of faith in technocrats and centralized authority, which was gradually replaced by a belief in entrepreneurialism and soft libertarianism (not the wacko kind), again peaking during the '20s and the '80s.
In the more cocooning mid-century, the public grew steadily more in favor of Big Business, Big Labor, and Big Government meeting up to hammer out a harmonious plan for the whole society, not quite outright Communism or Fascism, but about as close as America could get to it. (Speaking of which, the mid-century was also the peak of Stalinism and Fascism in Europe, so it wasn't just a pattern here.) The men in white coats had figured out what was best for you, and you had better take their prescriptions seriously -- or ignore them at your own peril.
Then in the cocooning era of the past 20 years, we've seen a steady reversal of Reagan-era beliefs in decentralization, distrust of experts, and action from below. Remember all those vigilante movies that followed Dirty Harry? That was the point -- the system and its experts are so limited in their knowledge, morality, and efficacy that those lower down the chain of command may have to make more decisions on their own instead of just following orders.
It began with the worship of Clinton after the '90s economic boom got going -- praising the technocrats who set the right dials to the right settings, rather than the lower-down businessmen who actually did the hiring and expansion of their businesses. It only grew under Bush, at whose massive top-down programs for widening home-ownership rates, making all the nation's children smart, and spreading democracy to the Middle East hardly anyone raised an eyebrow. And of course this airheaded faith in the experts has only gotten worse under Obama, whom everyone thought was a magician, or who at least would know who to select as his magician sidekicks. Just look at how many true believers in the economic magicianship of Big Government there were during the late 2000s recession compared to the early '80s recession.
Confusion often arises over the first part of a rising-crime, more outgoing, more decentralizing period, for example the 1900s or the 1960s. Each of those periods began at the epitome of the previous phase, the Gilded Age and the Fifties. The social-cultural shift away from that is gradual, so for a while in the early transition there is still a good deal of the previous phase kicking around. The Progressive Era still believed in the power of bureaucracy -- just get the right people in office to pass the right laws, and problem solved. Same with the '60s -- just vote in Johnson, and pass the Great Society programs, and problem solved. That is an inheritance of the Gilded Age (or Victorian era in Europe) and the Mid-century Modern period in America, and it was steadily eroded as people gradually withdrew their faith in technocratic experts.
I won't go in depth for every period of rising and falling homicide rates (the primary link to less vs. more cocooning behavior). But consider also the falling-crime Age of Reason / Enlightenment era, which gave the world the idea of the Enlightened Despot, and whose faith in men of learning to wisely plan society was growing compared to the previous Early Modern period, which was marked by the Wars of Religion and the Witch Craze in Europe. Indeed the Early Modern period (ca. 1580 to 1630) saw a revival of the Medieval culture of revenge and dueling, immortalized in all those revenge tragedies of the Elizabethan and Jacobean years. That showed a belief in local, face-to-face solutions to your problems, distrusting the central authorities' ability to solve them for you.
After the Enlightenment, homicide rates began rising during the Romantic-Gothic period, and by any look at these societies the culture was more outgoing and free-wheeling. People gradually lost their faith in technocracy and placed their hope more in bottom-up, local governance, and an overall revival of all things regional, especially to break away from a huge empire.
And as with the other rising-crime periods, the very beginning of the Romantic-Gothic era still carried a good deal of the previous era -- namely the naivete that ushered in the French Revolution. As with the Progressive Era and the Great Society, that was a relic of the mindset in the previous period (the Enlightenment), which was steadily eroded and built over with the views shown in Frankenstein by Mary Shelley, the unsheltered and clear-sighted daughter of two naive ideologists from the Enlightenment era.
Even by the time people were cheering on Napoleon, it was not because they wanted authoritarian planning and rule within their own society, but because he could conquer foreign nations and enrich the glory of his fellow countrymen back home. It's like the regular high school students who cheer on the jocks at a football game, hoping that they'll crush the rival school's team and give them -- the regular students -- something to be proud of. It's not to celebrate the superior status the jocks will enjoy afterward.
That took a little longer than I expected, but I guess it's better to spend more time documenting the pattern in the first place. The next post, proposing two related psychological mechanisms for this pattern, will be much shorter.
Categories:
Cocooning,
Human Biodiversity,
Politics
September 1, 2012
Obligatory dorm-room makeovers
Here is an NYT article about how sheltered the Millennials continue to live, even after they've shipped off to college.
Far from wanting to start their own lives, which necessarily means beginning with little and building up over time through effort, they want their parents to shell out $200 or $300 and up to furnish their dorm rooms, to keep them up to the standards they've been used to at home.
Their helicopter parents are of course only too eager to oblige. I mean, how are Kaylabella and Chaysen supposed to get any work done without a standing mirror, big screen TV, and a stocked fridge nearby?
With that level of material comfort, physical security, liberation from household chores, and lack of direct parental supervision (although now subject to a continual stream of cell phone check-ups), you'd think they'd be in hog heaven and living it up all the time.
In reality, though, college kids have never been in a more ongoing vegetative, joyless state, nor been so averse to the party hardy culture that is supposed to pervade the campus. The girls are plugged into Facebook and texting, while the guys are plugged into video games (alone) and internet porn. On the rare occasion when they do throw a party, they just sit around chit-chatting or huddling around the two beer pong players.
It feels like one of those Parents Weekend parties where kids act all well-behaved so their parents are none the wiser, only the monitor and spotlight are so internalized that that's their ordinary way of carrying on!
Adolescents need to start off low on the totem pole, to motivate them to get stuff done, and to enjoy and savor a higher state of being once they experience it. It also makes them pool their limited resources and thereby develop social bonds. When your dorm room is nothing to write home about, you enjoy the party that the hosts worked on to make sure that it'd be a blast. Not to mention the anticipation beforehand. And all of that required a team effort, not just running your mommy and daddy's credit cards.
I also think that a drabber dorm room forces them to seek out relief through social means -- even a dopey-looking room comes alive with the right people and the right activities. All this dorm room makeover stuff seems like a way to avoid such social channels, and to seek relief by surrounding themselves with distracting toys.
Then it's just more of the same once they graduate college but move back in with their folks until age 35. How about a tax break for parents who spend less than $10 on their kids' freshman dorm decor? Something. Shit.
Categories:
Cocooning,
Over-parenting
August 27, 2012
Growing youth conformity, back-to-school shopping edition
Here is an NYT article about teenagers delaying their back-to-school shopping until a week or so after the start of the new school year. Several independent sources, including some teenagers themselves, suggest that this is because kids are scoping each other out during a test period to see what everyone else is wearing at school, and then afterward making their choices based on that.
If the consumer behavior is new, then so must be the underlying motivation. When I was in middle school, I certainly don't remember that mindset myself or among anyone else I knew. You bought some new clothes and just hoped that you would look cool on the first day back from summer vacation. Compared to what they're doing now, it was more of a risk-taking approach.
Kids these days are definitely more conformist and afraid of taking risks, so I wouldn't dismiss the inferences of the article. I'd be interested to find out when this trend began. The article says it's been going on for the past several years, but I'll bet it goes back to the mid or late 1990s, when young people started (just started) to get a lot more uptight and hive-minded. It's too bad that news articles rarely seek to trace anything farther back than 5 years ago.
It'd also be neat to see if people said this about the Silent Generation when they went back to school in the '40s and '50s, compared to the more daring, wanna-stand-out way people dressed in the Roaring Twenties. Wouldn't be a surprise, but I can't get interested enough to pursue it any further than throwing it out there. Sometimes I wish I had a chick sidekick to look into stuff like this, or write up what I'd looked into. (I did read a couple short books on the history of jewelry, but haven't gotten bitten by the bug to write it up yet.)
August 24, 2012
Pedestrian paradise in commercial and residential areas
[I hoped to split this into two posts, one for each kind of space, but they're too inter-related. I'll leave this up for awhile to let people read it, and hopefully that'll allow any discussion about this important topic to last longer.]
One of the sadder ironies of the past 20 years is that, while everyone has increasingly been promoting or at least paying lip-service to sustainable green communities that are easily walkable, their consumer behavior has fueled the sprawl of ever more strip centers and big box centers. This is true even among those who are liberal, wealthy, and educated, except that their dumpy strip centers are called "lifestyle centers" that offer a salon/spa instead of Supercuts, and that their alienating big box centers are anchored by a Walmart for yuppies, aka Target.
Make no mistake -- this isn't a case of the majority pushing for sprawl, which then causes a separate minority to push back with policy recommendations for sustainability. That person who complains about suburban sprawl is the same one who makes frequent car trips to their local lifestyle center, and who even hops back in their car after hitting up the Starbucks at one end to drive to the yoga studio at the other end.
The fundamental barrier to a pedestrian-friendly environment is quite simply automobile traffic, hence the more a location is exposed to roadways, and the thicker the width of those roadways, the more the pedestrian becomes fenced in and opts instead to drive. This makes strip centers the worst, and malls the best structures for pedestrians.
It all boils down to the desire of people going to strip centers to be isolated from other people during their trip (else they would congregate in more bustling locations). No given center can be very big because that would draw too many other customers buzzing around them like mosquitoes. The smaller size leads to a less diverse array of choices in any given center. Why? To be profitable, each smallish center will feature mostly low-risk places like fast-food restaurants -- you can always count on people getting hungry -- and just one or two riskier, more niche stores like those for hardware, books, clothing, etc. This means that a typical patron will have to visit several shopping centers to meet all of their needs and wants.
The resulting archipelago of strip centers is defined by each center having heavy exposure to roadways -- parallel to the length of the center, and flanking both ends. There are generally no pedestrian walkways even behind the center, which may not be hemmed in by a roadway, but is usually backed by another strip center facing the opposite direction. Each center also has its marina-sized parking lot, where cars are not simply on display but creating another source of traffic that disrupts walking flows. And since each center has its own buffer of space that's set back from the curb, the combined footprint of the archipelago becomes a bit more inflated and sprawling, and thus less inviting to walk.
That buffer space also opens up a niche for parasitic bike riders who colonize what was intended as a walkway. A pedestrian only has to watch out for cars at intersections, but if there are bikes traveling in both directions on sidewalks, they can be pestered by wheeled vehicles at any point in their journey. Plus bike riders are usually self-important jackasses, far ruder than a driver, making strip centers even less friendly to walkers.
Now, this extensive, chunky lay-out with lots of buffering spaces around each location can be ideal for residential land, where each "center" would be a house, cluster of houses, or small apartment building, and where the parking lots and setbacks would be more like front lawns, back yards, and breathing room in between houses. Some degree of spaciousness and physical separation from neighbors makes it more comfortable for the occupants.
And since residential areas do not regularly draw large crowds of in-comers like a commercial center does, we don't have to worry about the average person having to traipse over such vast distances, and being threatened by cars, during some non-existent daily trek all over the neighborhood. Also for that reason, even crossing streets is usually no big deal in a suburban neighborhood, not like crossing those that wrap around busy strip centers.
This simple exploration shows that residential and commercial spaces operate according to different, perhaps opposite laws of how human beings think, feel, and behave. We can therefore reject the New Urbanist credo that the future of mankind lies in heavily mixed-use developments, where stacks of apartments and offices rest on a base layer of shops within a single building. Working and living spaces should not be that close together, certainly not within the same building -- the opposing forces of work and leisure would prevent each space from coming fully into its own.
And as with strip centers, no single building will house a diverse enough array of stores to meet someone's needs and wants, so the residents will still have to traverse an archipelago of mixed-use buildings, still across car-filled streets. Only now they'll also have given up the spaciousness and comfort of living in a nice suburban neighborhood where dwellings are not crammed together like cells in a hive. (Bad suburbs, the more Levittownian ones, are sadly very hive-like.) The New Urbanist dream would in practice be the worst of both worlds.
The residential ideal was mostly achieved with the suburban model of lowish-density housing separated into blocks for comfort, which avoided the off-putting endless string of housing that characterizes Levittowns (where the string is horizontal, one house-and-tiny-side-yard adjacent to another), as well as high-rise apartment complexes (where the string is vertical, each floor-room-and-ceiling stacked on top of another).
What about the commercial ideal? That went the opposite way, stemming from the profoundly different natures of the two realms of life. It had components that were highly concentrated, and that were housed within a single over-arching structure -- namely, the mall. It's been fashionable to hate on malls for 20 years now, not just among elite groups who actively dismiss them in articles, books, documentaries, etc., but also among the masses who simply deserted them. That can give their supporters an "under siege" mentality, not to mention those who are merely nostalgic for part of their childhood or adolescence.
Nothing wrong with that of course, but they don't really convey why malls are superior to other commercial structures. Here I'll only stick to why they were better for pedestrians (and to the dorks: yes, we're aware of the double-meaning). I've been meaning to write about malls vs. the alternatives for awhile, so I may go into other areas later on.
Because the mall is architecturally the opposite of a strip center, it counteracts all of the major problems for pedestrianism posed by them. Most obviously and importantly, the shops are housed within a self-contained whole space, so that none of it is carved up by roadways or bike lanes. You don't appreciate how special it is to walk around such an expansive space in three dimensions (if the mall has more than one level) until you're plopped back onto city streets, where regulations and lighted signals attempt to control the antagonism among drivers, and between them and walkers. So much fucking aggravation that the mall-goer is protected from by the fortress-like walls.
Have you ever been to a fake mall? One that was outdoors and that allows vehicle traffic to cut through the space? There's just nothing more disruptive functionally to the flow of pedestrians who are just heading purposefully from point A to point B, and disruptive psychologically to the wanderers who just want to get lost in the moment without being jarred awake by a car zipping in front of them (with honks and curses for added disruptive effect).
It's true that mall-goers have to face traffic in the parking lot, but that's not part of the main journey. It's outside the mall, on the other side of that transitional portal of double doors. You could spend uninterrupted hours inside, and only have to deal with the parking lot once before and once after that fun time. Moreover, since the mall has so many different types of stores -- even stores within stores, like the department store anchors -- you don't have to visit more than one of them, so you don't repeat one parking lot navigation after another.
Owing also to the higher density of shops, malls are frequently built upward, with two or sometimes three levels. The horizontal concentration alone packs in the equivalent of three or four would-be strip centers all adjacent to each other, and then with the next level up, you've got another three or four -- and all it takes to visit that other level is walking a flight of stairs, or if you're tired, an escalator or elevator ride. Not walking across multiple city streets.
Similarly, the parking lots that would sprawl out across the strip center archipelago are often stacked into a parking garage for the mall. Occasionally parking garages for malls are built underground, further reducing the footprint taken up by parking, as well as removing the eyesore of parking lots and garages from the ground-level view. Underground lots may not have been the majority, but they were at least feasible for malls, whereas strip centers do not enjoy the same economies of scale and only rarely build large underground parking lots. Submerging parking lots underground would be a great boon to walkers, whether they were patrons of the location or just passing through.
These diverse benefits of (limited) vertical building separate malls from big box centers, which only occasionally stack one layer on top of another for the departments within the big box store. You generally don't ride escalators or elevators in Walmart, although I've been to some Targets that are two stories. For multi-level parking, they seem to be between strip centers and malls. For walkability, big box centers aren't as miserable as strip centers. What makes malls superior to big box centers lies more in the greater diversity of experiences and of creature comforts and amenities that big box stores skimp on, but that malls provided in abundance.
Speaking of creature comforts, we shouldn't overlook those when measuring how walkable an area is. Strip centers leave the pedestrian exposed to the elements -- have fun walking around even a single strip center, let alone several of them, in rain, snow, beating-down heat, or blustering wind. Some strip centers, I'd guess built in the more humane 1970s and '80s, had covered walkways that protected pedestrians somewhat, and that also served to unify the center structurally. That's about as good as they got, and they're damn rare to find these days anyway. Big box stores offer protection within each one, although not between the other stores in the center.
The mall, however, kept out inclement weather all the way through, while sumptuous skylights poured sunshine into focal areas without over-heating them. Seating was far more generous in the mall than in any other public space ever built, other than stadiums and amphitheaters -- benches, chairs, upholstered booths, and even the edges of the ubiquitous ponds and fountains were made wide enough to rest on. Escalators and elevators offered some relief for your feet. No horizontal people-movers are to be found in any of the horizontally sprawling commercial spaces -- only in airports.
And don't forget even more basic amenities like water fountains and restrooms, which are important for pedestrians in a way that they are not for those who can shoot off in a car to find one somewhere else, or go home. You don't appreciate how generous the mall was in providing them until you find yourself walking through a strip or big box center where the owners are usually stingy and inhospitable. If they even have a fountain or restroom, you'll have to buy something first. Malls enjoyed economies of scale, so each store didn't need to provide its own, just a couple that were maintained by a tiny contribution each from all the stores.
There are all kinds of aesthetic superiorities that enliven the pedestrian experience for mall-goers, but they aren't that central to getting around comfortably on foot, so I'll save that for later.
Everyone always asks how a certain type of architecture could improve the human condition, i.e. by affecting -- shaping -- how people think, feel, and behave. Unfortunately the causal arrow points the other way around -- there are sea changes in the emotional make-up of the society, and in our social patterns, that get reflected in the architecture. We build and re-shape the environment to suit our present desires, so designers and planners cannot keep a lid on what they see as some undesirable aspect of human behavior, unless the people themselves are moving away from it as well. In that case, that particular architect/planner is in the right place at the right time. Otherwise, they're out of luck, and a rival with the opposite thinking will enjoy greater success among audiences.
Over the past 20 years we've moved back to a mid-century zeitgeist of suspicion of normal people, cocooning, emotional restraint, and a devotion to rational efficiency optimization by gigantic corporations and the federal government. That was reflected in the everyday architecture back then -- drive-in restaurants, hive-like Levittown residential developments, and strip centers (like this one) laid out just like they are in our neo-Fifties world today. (The idea of a bustling mid-century Main Street is mostly a myth.)
Before that, during the Jazz Age, the ideal was a comfortable neighborhood in the suburbs, not houses packed side by side but separated into blocks, where their bungalow would have a front porch expansive enough to socialize with passersby or entertain guests. And the commercial ideal was definitely not a strip center but, in some parts, still a Main Street, and increasingly the majestic department store. Department stores are a whole 'nother post, but they were like the mall before there was the mall, both distinguished from big box stores by the variety of atmospheres and experiences within, and again the greater creature comforts.
Squeezed between the two cocooning ages was the New Wave age of the 1960s through the '80s and perhaps early '90s, where the residential ideal moved away from Levittowns and back toward spacious suburban blocks, and where the commercial ideal was epitomized by the mall. All-American community life was portrayed that way, no matter if it was in southern California (Saved By the Bell), the Midwest (Family Ties), or back East (somebody help me out with a reference here, like a portrayal of the NYC-metro suburbs).
The mid-century infatuation with automobiles has repeated itself in our time, and so have the car shapes. But I trust that the next time the crime rate starts picking up, that infatuation will be tempered the way it was during the Reagan years, when the car went back to its Jazz Age role as a symbol of independence for adolescents and a thrill-seeking device for everyone -- one that didn't keep people from wanting to mill about in traffic-free public spaces.
One of the sadder ironies of the past 20 years is that, while everyone has increasingly been promoting or at least paying lip-service to sustainable green communities that are easily walkable, their consumer behavior has fueled the sprawl of ever more strip centers and big box centers. This is true even among those who are liberal, wealthy, and educated, except that their dumpy strip centers are called "lifestyle centers" that offer a salon/spa instead of Supercuts, and that their alienating big box centers are anchored by a Walmart for yuppies, aka Target.
Make no mistake -- this isn't a case of the majority pushing for sprawl, which then causes a separate minority to push back with policy recommendations for sustainability. That person who complains about suburban sprawl is the same one who makes frequent car trips to their local lifestyle center, and who even hops back in their car after hitting up the Starbucks at one end to drive to the yoga studio at the other end.
The fundamental barrier to a pedestrian-friendly environment is quite simply automobile traffic, hence the more a location is exposed to roadways, and the thicker the width of those roadways, the more the pedestrian becomes fenced in and opts instead to drive. This makes strip centers the worst, and malls the best structures for pedestrians.
It all boils down to the desire of people going to strip centers to be isolated from other people during their trip (else they would congregate in more bustling locations). No given center can be very big because that would draw too many other customers buzzing around them like mosquitoes. The smaller size leads to a less diverse array of choices in any given center. Why? To be profitable, each smallish center will feature mostly low-risk places like fast-food restaurants -- you can always count on people being getting hungry -- and just one or two riskier, more niche stores like those for hardware, books, clothing, etc. This means that a typical patron will have to visit several shopping centers to meet all of their needs and wants.
The resulting archipelago of strip centers is defined by each center having heavy exposure to roadways -- parallel to the length of the center, and flanking both ends. There are generally no pedestrian walkways even behind the center, which may not be hemmed in by a roadway, but is usually backed by another strip center facing the opposite direction. Each center also has its marina-sized parking lot, where cars are not simply on display but creating another source of traffic that disrupts walking flows. And since each center has its own buffer of space that's set back from the curb, the combined footprint of the archipelago becomes a bit more inflated and sprawling, and thus less inviting to walk.
That buffer space also opens up a niche for parasitic bike riders who colonize what was intended as a walkway. A pedestrian only has to watch out for cars at intersections, but if there are bikes traveling in both directions on sidewalks, they can be pestered by wheeled vehicles at any point in their journey. Plus bike riders are usually self-important jackasses, far ruder than a driver, making strip centers even less friendly to walkers.
Now, this extensive, chunky lay-out with lots of buffering spaces around each location can be ideal for residential land, where each "center" would be a house, cluster of houses, or small apartment building, and where the parking lots and setbacks would be more like front lawns, back yards, and breathing room in between houses. Some degree of spaciousness and physical separation from neighbors makes it more comfortable for the occupants.
And since residential areas do not regularly draw large crowds of in-comers like a commercial center does, we don't have to worry about the average person having to traipse over such vast distances, and being threatened by cars, during some non-existent daily trek all over the neighborhood. Also for that reason, even crossing streets is usually no big deal in a suburban neighborhood, not like crossing those that wrap around busy strip centers.
This simple exploration shows that residential and commercial spaces operate according to different, perhaps opposite laws of how human beings think, feel, and behave. We can therefore reject the New Urbanist credo that the future of mankind lies in heavily mixed-use developments, where stacks of apartments and offices rest on a base layer of shops within a single building. Working and living spaces should not be that close together, certainly not within the same building -- the opposing forces of work and leisure would prevent each space from coming fully into its own.
And as with strip centers, no single building will house a diverse enough array of stores to meet someone's needs and wants, so the residents will still have to traverse an archipelago of mixed-use buildings, still across car-filled streets. Only now they'll also have given up the spaciousness and comfort of living in a nice suburban neighborhood where dwellings are not crammed together live cells in a hive. (Bad suburbs, the more Levittownian ones, are sadly very hive-like.) The New Urbanist dream would in practice be the worst of both worlds.
The residential ideal was mostly achieved with the suburban model of lowish-density housing separated into blocks for comfort, which avoided the off-putting endless string of housing that characterizes Levittowns (where the string is horizontal, one house-and-tiny-side-yard adjacent to another), as well as high-rise apartment complexes (where the string is vertical, each floor-room-and-ceiling stacked on top of another).
What about the commercial ideal? That went the opposite way, stemming from the profoundly different natures of the two realms of life. It had components that were highly concentrated, and that were housed within a single over-arching structure -- namely, the mall. It's been fashionable to hate on malls for 20 years now, not just among elite groups who actively dismiss them in articles, books, documentaries, etc., but also among the masses who simply deserted them. That can give their supporters an "under siege" mentality, not to mention those who are merely nostalgic for part of their childhood or adolescence.
Nothing wrong with that of course, but they don't really convey why malls are superior to other commercial structures. Here I'll only stick to why they were better for pedestrians (and to the dorks: yes, we're aware of the double-meaning). I've been meaning to write about malls vs. the alternatives for awhile, so I may go into other areas later on.
Because the mall is architecturally the opposite of a strip center, is counteracts all of the major problems for pedestrianism posed by them. Most obviously and importantly, the shops are housed within a self-contained whole space, so that none of it is carved up by roadways or bike lanes. You don't appreciate how special it is to walk around such an expansive space in three dimensions (if the mall had more than one level) until you're plopped back onto city streets, where regulations and lighted signals attempt to control the antagonism among drivers, and between them and walkers. So much fucking aggravation that the mall-goer is protected from by the fortress-like walls.
Have you ever been to a fake mall? One that was outdoors and that allows vehicle traffic to cut through the space? There's just nothing more disruptive functionally to the flow of pedestrians who are just heading purposefully from point A to point B, and disruptive psychologically to the wanderers who just want to get lost in the moment without being jarred awake by a car zipping in front of them (with honks and curses for added disruptive effect).
It's true that mall-goers have to face traffic in the parking lot, but that's not part of the main journey. It's outside the mall, on the other side of that transitional portal of double doors. You could spend uninterrupted hours inside, and only have to deal with the parking lot once before and once after that fun time. Moreover, since the mall has so many different types of stores -- even stores within stores, like the department store anchors -- you don't have to visit more than one of them, so you don't repeat one parking lot navigation after another.
Owing also to the higher density of shops, malls are frequently build upward, with two or sometimes three levels. The horizontal concentration is already good enough to make three or four would-be strip centers all adjacent to each other, and then with the next level up, you've got another three or four -- and all it takes to visit that other level is walking a flight of stairs, or if you're tired, an escalator or elevator ride. Not walking across multiple city streets.
Similarly, the parking lots that would sprawl out across the strip center archipelago are often stacked into a parking garage for the mall. Occasionally parking garages for malls are built underground, further reducing the footprint taken up by parking, as well as removing the eyesore of parking lots and garages from the ground-level view. Underground lots may not have been the majority, but they were at least feasible for malls, whereas strip centers do not enjoy the same economies of scale and only rarely build large underground parking lots. Submerging parking lots underground would be a great boon to walkers, whether they were patrons of the location or just passing through.
These diverse benefits of (limited) vertical building separate malls from big box centers, which only occasionally stack one layer on top of another for the departments within the big box store. You generally don't ride escalators or elevators in Walmart, although I've been to some Targets that are two stories. For multi-level parking, they seem to be between strip centers and malls. For walkability, big box centers aren't as miserable as strip centers. What makes malls superior to big box centers lies more in the greater diversity of experiences and of creature comforts and amenities that big box stores skimp on, but that malls provided in abundance.
Speaking of creature comforts, we shouldn't overlook those when measuring how walkable an area is. Strip centers leave the pedestrian exposed to the elements -- have fun walking around even a single strip center, let alone several of them, when it's raining, snowing, beating down heat, or blustering winds. Some strip centers, I'd guess built in the more humane 1970s and '80s, had covered walkways that protected pedestrians somewhat, and that also served to unify the center structurally. That's about as good as they got, and they're damn rare to find these days anyway. Big box stores offer protection within each one, although not between the other stores in the center.
The mall, however, kept out inclement weather all the way through, while sumptuous skylights poured sunshine into focal areas without over-heating them. Seating was far more generous in the mall than in any other public space ever built, other than stadiums and amphitheaters -- benches, chairs, upholstered booths, and even the edges of the ubiquitous ponds and fountains were made wide enough to rest on. Escalators and elevators offered some relief for your feet. No such horizontal people-movers are to be found in any of the horizontally sprawling commercial spaces, only in airports.
And don't forget even more basic amenities like water fountains and restrooms, which are important for pedestrians in a way that they are not for those who can shoot off in a car to find one somewhere else, or go home. You don't appreciate how generous the mall was in providing them until you find yourself walking through a strip or big box center where the owners are usually stingy and inhospitable. If they even have a fountain or restroom, you'll have to buy something first. Malls enjoyed economies of scale, so each store didn't need to provide its own, just a couple that were maintained by a tiny contribution each from all the stores.
There are all kinds of aesthetic superiorities that enliven the pedestrian experience for mall-goers, but they aren't that central to getting around comfortably on foot, so I'll save that for later.
Everyone always asks how a certain type of architecture could improve the human condition, i.e. by affecting -- shaping -- how people think, feel, and behave. Unfortunately the causal arrow points the other way around -- there are sea changes in the emotional make-up of the society, and in our social patterns, that gets reflected in the architecture. We build and re-shape the environment to suit our present desires, so designers and planners cannot keep a lid on what they see as some undesirable aspect of human behavior, unless the people themselves are moving away from it as well. In that case, that particular architect/planner is in the right place at the right time. Otherwise, they're out of luck, and a rival with the opposite thinking will enjoy greater success among audiences.
Over the past 20 years we've moved back to a mid-century zeitgeist of suspicion of normal people, cocooning, emotional restraint, and a devotion to rational efficiency optimization by gigantic corporations and the federal government. That was reflected in the everyday architecture back then with their drive-in restaurants, hive-like Levittown residential developments, and strip centers (like this one) that are laid out just like they are in our neo-Fifties world today. (The idea of a bustling mid-century Main Street is mostly a myth.)
Before that, during the Jazz Age, the ideal was a comfortable neighborhood in the suburbs, not houses packed side by side but separated into blocks, where their bungalow would have a front porch expansive enough to socialize with passersby or entertain guests. And the commercial ideal was definitely not a strip center but, in some parts still a Main Street, but increasingly the majestic department store. Department stores are a whole 'nother post, but they were like the mall before there was the mall, both distinguished from big box stores by the variety of atmospheres and experiences within, and again the greater creature comforts.
Squeezed between the two cocooning ages was the New Wave age of the 1960s through the '80s and perhaps early '90s, where the residential ideal moved away from Levittowns and back toward spacious suburban blocks, and where the commercial ideal was epitomized by the mall. The All-American community was lived that way, no matter if it was in southern California (Saved By the Bell), the Midwest (Family Ties), or back East (somebody help me out with a reference here, like a portrayal of the NYC-metro suburbs).
The mid-century infatuation with automobiles has repeated itself in our time, and so have the car shapes of that era. But I trust that the next time the crime rate starts picking up, this infatuation will be tempered the way it was during the Reagan years, when the car reverted to its Jazz Age role as a symbol of independence for adolescents and a thrill-seeking device for everyone -- a role that didn't keep people from wanting to mill about in traffic-free public spaces.
Categories:
Architecture
August 21, 2012
Transparency vs. mystery
I don't know what basic psychological trait this boils down to (maybe tolerance of uncertainty), but it affects all kinds of social and cultural preferences. In a falling-crime culture, the inescapable transparency makes everyday life feel soul-starving; in rising-crime times, a sense of everyday wonder comes from the cozy pockets of obscurity. There are too many examples to explore in detail, but here are some big ones, mostly drawn from popular rather than elite culture.
The clearest case of see-throughiness is in architecture. In both the mid-20th century and over the past 20 years, buildings with floor-to-ceiling glass walls exploded in popularity. You see this in down-to-earth places like the drive-in restaurant or the Googie-styled coffee shop, as well as the Apple Store; the Mid-Century Modern house, as well as the Zen Minimalist house; and public libraries and office skyscrapers from both periods. There was also the Crystal Palace of the Victorian era.
John Portman, one of the few sublime and humanistic architects after Art Deco, remarked that these glass walls and doorways rob the entrances of any ceremony or anticipation. With no emotional build-up, there can be no release once you enter the building. Everything that lies within is instantly revealed to the passerby.
This kind of architecture thrives in falling-crime times because people are more paranoid about the threat posed by their fellow man, so they want to see inside to make sure it's totally safe before even getting close to the people in an enclosed public space. Furthermore, they have what to me seem like abnormally restrained emotional systems, so they prefer building features that will prevent any chance of anticipation and release. Just skip right to the end, cross that task off the to-do list, and move on to the next item of business.
In contrast, the higher trust in rising-crime times means that people approaching a restaurant, coffee shop, etc., won't need to inspect it thoroughly in advance. An increasingly dangerous world makes you rely on and support others to get through it, so overblown suspicion of your fellow community members settles down to a more realistic level. Also, being more in a state of preparedness for danger means you're in a state of arousal more often, so your mood or mindset is more congruent with the effects of bold contrasts that strike an emotional chord, building up tension and then releasing it.
Glass is an ancient material, but transparent plastics are not. So, when it comes to product design, the time periods we can compare are more limited to the past 20 years vs. the '60s through the '80s. There were only a couple somewhat popular products that were see-through from the '80s: a Conairphone (and judging from the very bright neon colors, I'd say that was the very late '80s or even the early '90s), and a line of Swatch watches. There may have been other little things like that, but nothing big comes to mind. It was possible to make things that way, but consumer demand must have been low enough to make them a minor novelty at most.
In the past 20 years, all kinds of ordinary stuff has become see-through -- the casing of pens and mechanical pencils, video game devices (both the home console and its controllers, as well as handhelds), inflatable furniture (like those ubiquitous ball-chairs), even one-time-use beverage containers, which used to be aluminum but are now clear plastic (why not opaque plastic?). These are only those that come to mind. Narrow and particular explanations for a change in any one of these categories ignore the larger pattern that product design nowadays is more likely to feature transparency, although that is still not part of most products.
The cocooning behavior of falling-crime times stunts our social and emotional growth, so we move more toward the autistic side of the "people vs. things" spectrum of interests. People then become more curious about knowing what the guts of their gadgets would look like if you dissected them. In rising-crime times, when people are more social, who cares what it would look like? There are more pressing social matters to attend to, like looking out for one another and getting laid.
John Keats, who hailed from the rising-crime world of the Romantic-Gothic era, accused Newton of deflating the wonder we get looking at a rainbow by showing that its colors could be scientifically explained by the refraction of light, as with a prism. The accusation may have been misplaced, but it's still the kind of approach that was more common in the falling-crime Age of Reason / Enlightenment, when people were fascinated by Vaucanson's digesting duck automaton and its schematics.
And when Lucas made those terrible new Star Wars movies, he had to "unweave the Force" and explain how Midichlorians cause its effects, like one of those dopey 1950s science reels for schoolchildren.
Speaking of which, every narrative these days, whether it's a novel, movie, or video game, has to have so much back-story. It's basically the same as the gadget whose blueprint is on full display to the user, only now an exotic world is being dissected. Again it's like a neo-Victorian, neo-Enlightenment time, when readers needed to know all kinds of irrelevant shit to make it through the novel. Gothic novels, as long as some are, generally don't use all those pages for back-story or micro-cataloging every detail of the environment: Sublime terror needs mystery to cloak many of the particular details.
Then there's gift cards -- nothing at all left to the imagination of the recipient. You paid a known amount for it, almost certainly picked it up at the check-out aisle, or a nearby display stand, in a supermarket, and might not even wrap it up in anything to delay knowledge of what the gift is. The fact that about half of the shelf space for Christmas cards is for gift card / money holders (often with a part of the front cut out to instantly reveal that there's cash inside) just goes to show how far this transparency thing goes. They only became popular during the '90s, even though gift certificates had existed for a long time before then.
In social relations, there's the hook-up culture. Now, remember that rates of all sexual activity have been plummeting since the early 1990s, so kids these days are a lot less active. However, on the rare occasion when two of them find themselves on a path toward getting it on, they prefer to skip anything emotional in the lead-up, have some joyless sex, and then return to not being in contact with each other.
Those who want us to be more emotionally restrained should be careful what they wish for -- a lack of emotional conductivity leads to robotic transactional relationships. "I'm hot, you're hot, I guess we might as well get each other off, and get that out of the way," so they can go back to their meaningful lives of playing video games and posting inanities on Facebook.
Music of the past 20 years is a lot more obvious, or transparent, about what it's going to deliver. There's no wandering through this corridor, spying some other room, then leaving into yet another inviting space. Norah Jones is so overly cutesy, and the breathiness is laid on so thick, that it doesn't feel like she's guiding you through different places -- just kind of dropping the curtain right away for you to see what's behind it. I think a lot of mid-century pop music is like that too (like "Beyond the Sea"), in whatever other ways it may differ.
Folks had it so much better in the '80s, when good music was mainstream instead of a fringe thing. Even songs in heavy rotation on MTV transport you to a mysterious place, and lead you through a variety of emotional spaces, building up and releasing tension -- "Wrapped Around Your Finger," "Save a Prayer," "La Isla Bonita," "Sweet Child o' Mine," just to name a few.
I'm not a classical music buff, but I got the same impression from the rising-crime Classical period (the Romantic-Gothic era in literature and painting) vs. the falling-crime Romantic period (i.e. the Victorian era). Beethoven and Schubert seem like effortless masters at leading you on a mysterious tour, whereas composers like Brahms or Liszt feel like they're revealing everything at once. You don't feel that same build-up and release.
I'm not using "mysterious" in the sense of unfamiliar, or mostly mellow with sudden shock scares. From what I've heard, I can't put Wagner in the same group with Beethoven and Schubert. The pop music equivalent is kind of like Radiohead mixed with some dark metal band. It feels more like floating adrift in a heroin haze, and occasionally being jarred awake, than being pulled along this way to see this space, then pulled some other way to see some other space.
Bach was a shining exception from the Age of Reason / Enlightenment period, although I still think Beethoven and Schubert are quite a bit more emotionally nimble and unpredictable.
Jeez, this is going on pretty long now, and I'm drawing on material I don't know so well, so I'd better cap it there. Feel free to add examples in the comments.
Categories:
Architecture,
Design,
Dudes and dudettes,
Music,
Pop culture
August 17, 2012
Greater sibling rivalry in more promiscuous times?
Small children can figure out whether the teenagers and adults around them are sexually more out-of-control or more restrained. They probably put out different levels of pheromones, and their voices, facial expressions, mannerisms, and other body language are all in a different direction too. Do the older people seem awkward or hesitant to touch each other (like hover-handing), or is it more like they can't keep their hands off each other (like when every dude used to walk around in public with his hand on his girlfriend's ass)?
More indirectly, when little kids are exposed to popular culture, they can tell what resonates with the majority -- songs that are emotionally hotter or colder, movies that have more or less T&A, and so on.
Although these things, just to name a few, do not make the kid entirely certain that the grown-ups are promiscuous, it does incline their beliefs more in that direction. On that basis, one of the first things they'll do -- or not do -- is start getting practice with courting the other sex (flirting), as well as advancing in what they do physically (kissing, playing "I'll show you mine if you show me yours"). It's like speaking a language: if the community speaks English, then English it is, and if the community is promiscuous, then prepare to enter that way of life too.
One major consequence of living in a more promiscuous group is that there's a higher probability that your siblings are actually half-siblings because your mother had you all by more than one father. Therefore, in time periods when promiscuity is rising, so should sibling rivalry, and both should fall together too.
By the time I was a child in the early-mid 1980s, the tiny spark of promiscuity that started circa 1960 had spread farther and reached a hotter intensity, to the point where it wasn't just marginal sub-cultures behaving more liberally. My memories of sibling rivalry are also that it was very strong back then.
We have a home video of me, around 4 years old, running around the back yard and clothes-lining my little brothers (who were around 2). It was routine enough that my dad, who was filming it, didn't try to stop me or punish me after the fact, but just made me go over and hug my brothers, say I was sorry, bla bla bla -- all of it totally insincere. My mother tells stories about how I used to reach into their cradles and pull on the blankets they were wrapped in like a rip-cord and send them spinning and spinning. Then there was something else that I can't recall off the top of my head, something about putting things in there or messing with the cradle or stroller so that they'd have trouble breathing. She says she was truly worried that my little brothers might not make it to 4 years old.
But it wasn't just me -- they started all kinds of shit too when they were old enough. I remember being chased around the house with Cutco knives, for one thing. Their harm didn't even have to come from their own hands: once my brother lied to my mother that I'd done something wrong to him and started crying, so she slapped me right across the face. He started laughing right there, which is a rookie mistake because then my mother knew he'd made it up, and slapped him too, after apologizing to me.
You get the idea. I don't see any of that stuff going on anymore, at least among children in public. When I visited my 4-year-old nephew a few weeks ago, I didn't see him do anything to his little brother that was as brutal as what we used to do. And his little brother is actually a half-brother, so that's an even more striking sign of how little sibling rivalry there is these days. Sure, he calls him a stinky baby whenever he starts crying, but nothing close to what I did to mine nearly 30 years ago. Certainly nothing violent or physical.
There are no surveys of how intense sibling rivalry is in each year, but my impression is that it had already begun to subside during the '90s, which would fit with the larger drop in violent crime and promiscuity. I just didn't see the little kids around the neighborhood clothes-lining each other when I was in high school during the second half of the '90s.
In popular culture, the little hell-raiser character disappeared as well. There was a Dennis the Menace cartoon show in the late '80s. Even the original was from the first year of rising-crime times, 1959, although Jay North was relatively tame compared to the kids in the '80s cartoon. (I saw the original series in syndication on Nickelodeon in the late '80s.) Then there were those Problem Child movies from the early '90s before the crime rate began falling. And of course Bart Simpson's early incarnation as the little hell-raiser during that period, before he became just another dorky smart-mouthed kid. This stock character didn't exclusively mistreat his siblings, but he did target them, or at least was shown as the type who would if he had had brothers and sisters. (For example, he might be shown acting like a devil toward the pets, who are surrogate siblings.)
One objection to this line of thinking is that we can reduce it all to how inclined people are to use violence -- the whole of society has gotten a lot less violent in the past 20 years, and fewer sibling-vs.-sibling cage matches are just a special case of that. Nah, this is a lot more of a drop than what we see in the violent crime rates -- it's not like little brothers are only 50% less likely to get clothes-lined nowadays, it's like it hardly happens at all.
The decline in violence explains a good deal of the decline in sibling rivalry, but there's a lot more of a drop left unexplained. The main source of sibling rivalry is genetic dissimilarity, so it should ramp up in more promiscuous times, which tend to be more violent times. Taking both of these factors into account, I think we can explain the rest of that mysterious disappearance of sibling rivalry.
Categories:
Violence
August 16, 2012
Strawberry Switchblade
Found these cute Celtic harmonizing girls on vol. 4 of the awesome (and mostly out of print) Living in Oblivion series. There's a re-issue of their only album, which I'll have to check out.
They were a bit ahead of their time in mixing gloomy lyrics with bubblegummy melodies, which would become a little more popular when college rock hit its peak around 1988 with the Cure, Voice of the Beehive, Transvision Vamp, the Primitives, etc. But, the good part of hailing from 1985, just as the heyday of new wave was winding down, is that it's a lot more body-moving. Even the goths back then were bouncy and upbeat!