May 20, 2015

Generational splits in being assertive, passive, or just plain awkward

Those who have spent much time interacting with Millennials have noticed how withdrawn they are. The average member won't initiate anything, whether social (getting to know new people) or mechanical (that bamboo is starting to look gnarly in the back yard, better clear it out).

This has led casual observers to describe the generation as passive, but that term really means that the person will pitch in and perform various tasks once someone else — the initiator or instigator — has gotten the ball rolling first. They are willing, perhaps even eager to join in an activity — they just can't start it.

Yet Millennials are not only incapable of kicking something off; they also fumble the ball once it has been perfectly thrown to them. Beyond being anxious about introducing themselves to new people, they don't know how to respond to someone else introducing themselves first, let alone how to keep the back-and-forth going so that the result is a relationship rather than a mere encounter.

They don't know how to act, but they don't know how to react either. They're just plain awkward, and it keeps them from developing a normal system of relationships.

"Passive" would actually be a better description of the average Gen X-er. As long as there's an instigator around, X-ers are perfectly comfortable joining in the mischief. Or accepting a subordinate role in a hierarchy, under a leader, mentor, or guide. Do you want to go bowling? "If you guys are going, sure." Where do you want to go tonight? "I dunno, I'm cool with whatever." Yeah, me too.

It normally doesn't devolve into the blind leading the blind because, despite the majority tendency, there's always at least one leader or instigator in their social circle.

That leaves the Boomers as the assertive ones. There's a lot more playful, half-serious ribbing and joshing among them because they're all trying to assert themselves and have the others in the group be subordinate. They're more willing to be creators, while Gen X prefers to be fans.

You see this clearly in stereotypes about husbands. The stereotypical Boomer husband was cheating on his wife with a secretary or waitress, endangering his marriage to assert his libido. Gen X husbands are more likely than Boomers to see their role as the dopey dad and the henpecked husband, whether they resent that role or are cool with it, y'know, as long as the wife is cool with it.

The stereotypical Millennial husband is neither an assertive nor a passive partner in the marriage. Millennial husbands and wives are more like gender-non-specific housemates who occasionally have genderless sex. None of the household tasks get taken care of because neither is capable of being the leader or the follower in getting them done. Maybe if we both ignore the bamboo jungle in the back yard, it will be nice and just go away to infest some other home.

What underlies these differences seems to be how much of their social development, say ages 5 to 25, took place in an outgoing period (1955-1990) vs. a cocooning period (since 1990). Boomers, particularly the later ones, developed entirely within an outgoing climate, which allowed them to reach an adult level of assertiveness.

Gen X developed partly during an outgoing climate, but also during a cocooning climate in early adulthood or even adolescence. That allowed them to mature beyond childish awkwardness, while still retaining more of an adolescent approach of "I'm up for it if you are". That effect is more pronounced among the later births in the generation.

The poor Millennials who grew up entirely in cocooning times, under helicopter parents no less, never even made it to the adolescent stage. Now that they're nearing 30, they realize that they're supposed to be able to take part in the back-and-forth, following either an assertive or passive role, and some of them are making a conscious effort to practice. But at a gut level, their instinct is still to just stand there and go, "OK, so now I guess we, uh.... well, this is awkward..."

May 15, 2015

Drawing generational boundaries from slang and other meaningless traits: An evolutionary view

The standard intellectual approach to defining generations is to lump together those individuals who all went through some key event or series of events during the same stage of their life. The point is that what shaped the members of a generation, and what binds them together, is meaningful — growing up in Postwar prosperity, as children of divorce, as digital natives, etc.

In an informal setting, though, the shared traits are not so meaningful. What's your slang word for "very good" — is it "peachy keen," "groovy," "sweet," or "amazing"? (Those are for the Silents, Boomers, Gen X, and Millennials.) What were the popular colors for clothing when you were in high school? What are your favorite pop songs of all time? Who was your first huge celebrity crush?

These are not the kinds of people-molding forces that social scientists propose.

Sometimes the two approaches will draw similar boundaries — if you were a child of the Postwar prosperity, and born during a time of rising fertility rates, you are also familiar with the phrase "groovy," at some point in high school you wore orange and blue on the same day, one of your favorite pop songs is "Happy Together," and you had a huge crush on Mary Ann or Ginger.

But other Powerful Societal Forces don't affect people for a limited time to produce a tightly bound generation that underwent the effects. They are long-term trends along which people born in one year or another simply came of age during an earlier or later phase of a single ongoing process. Suburbanization, ethnic diversity, mass media saturation, and so on.

Gen X and Millennials, for example, don't look like distinct generations when you look at those three factors — they look fairly similar to each other, and different from the Greatest Gen, Silents, and even Boomers. And yet their group membership badges are totally different — slang, "you can't listen to that, that's our song", or tastes in food (way more toward Mexican and Chinese slop among Millennials).

When the X-ers and Millennials vote against the "boo taxes" government of the Silents and Boomers, it will be a case of politics making strange generational bedfellows, not similarly shaped generations on either side warring against their antitheses.

The choice of whether to carve out groups based on a meaningful or arbitrary trait shows up in evolutionary biology and historical linguistics. Both fields prefer shared traits to be arbitrary. If lots of individuals share something seemingly arbitrary, it's probably because they come from a common origin where that just happened to be the norm. That's why geneticists look at neutral DNA to determine ancestry.

If lots of people share something meaningful, like dark skin, that could be due to similar meaningful pressures acting on two unrelated groups, like the Africans and New Guineans both evolving dark skin as an adaptation to tropical climates, despite being distantly related on the whole genetically.

Likewise in the history of language, if two groups share a word for something functional like "internet," that could be because two unrelated groups both adopted the phrase when they adopted the technology, in recent times. If they share a word for the number "four," that can't be chalked up to similar pressures making the groups speak similarly. They must descend from some common ancestor where the word for that number just happened to be pronounced "four".

Aside from the theoretical motivation for using arbitrary traits, they're also the most palpable in real life. If you overhear someone in line at the store saying they agree with gay marriage, that is most likely to be an airhead Millennial, but could very well be an X-er or even a Boomer. Ditto if they make an offhand joke about their parents' divorce, their student loan burden, and the like.

But if you overhear your fellow supermarket shopper say, "Listen, they're playing "Footloose" — isn't this song suh-WEET?!" then they're definitely Gen X. If it's, "Oh my gosh, I'm like obsessed with "Blank Space" — not gonna lie, that song's actually kind of amazing!" then they're literally definitely Millennials.

The sociologist's large-scale impersonal forces make individuals similar, but not necessarily in a social dynamic way that cements group membership. Children of prosperity turn out this way, children of austerity turn out that way, whether or not they ever interacted with other members of their group to create a shared culture.

It's the seemingly trivial stuff that serves as shibboleths, food taboos, folk tales ("urban legends"), and totem animals to distinguish Us from Them. Those things only became popular by individuals accepting them rather than any of their alternatives to signal what group they belonged to. They get closer to what creates a community, beyond what creates a group-of-similar-individuals.

May 10, 2015

Broken homes epidemic reversed since the '90s babies?

Now that the data from the 2014 General Social Survey are online, we can look into some other trends that were too hazy to study before. The sample size of Millennials used to be too small, but now with another wave of the survey including them, they can be better investigated.

One thing I've been wondering about for a while is whether the children of divorce stick together once they themselves become parents, or whether they're just going to perpetuate a climate of broken homes.

In an earlier post, I looked at the trend by birth cohort. Growing up without both parents became more common starting with people born in the late 1950s, and only grew more and more prevalent with each cohort afterward, right up through those born in the late 1980s. The sample size was too small for '90s births to see whether it continued or reversed.

Now that there is a large enough sample size, though, it looks like the trend did reverse. I grouped respondents into five-year age cohorts, and no matter how you move that five-year window around, those born in the early-mid-'90s were more likely to be living with both parents at age 16. The difference is only a few percentage points, but that's still remarkable considering that every previous cohort since the late Boomers showed a steady and notable decline in growing up in an intact family.

These results showed through for whites, blacks, and "other" races. Race could not have anything to do with the overall results anyway, since more recent cohorts are blacker and Mexican-er, and those groups have higher rates of broken homes than whites do. Simple demographic projections would have predicted a steady decline, but it looks like the return toward the bi-parental household succeeded in spite of racial demographic trends against it. It is therefore like the falling rates of violent and property crimes over the past 20-25 years, despite a blacker and browner population.

The reversal also held for upper, middle, and lower classes (I used number of years of education as a proxy), although it was stronger and came somewhat earlier for those higher up on the social pyramid. This is unlike the pattern that Charles Murray details in Coming Apart, where for example the lower class continues to get divorced at higher rates over time, while the upper class has returned to marital normalcy after the initial perturbation back in the '70s.

This makes me doubt the earlier explanation I gave that linked broken homes to the status-striving and inequality cycle -- things that have steadily gotten worse since sometime in the '70s or early '80s.

If the first cohort to be struck by the broken homes epidemic was born in the late '50s, and the direction began to reverse with the cohort born in the early '90s, that suggests a link to the cocooning-and-crime cycle. Being a child of divorce became more common among those who were small children during the rising-crime period of circa 1960 to 1990. If you were a little kid during a falling-crime period -- most Silents and Boomers, who grew up in the Midcentury, and later the '90s babies -- you were more and more likely to grow up in an intact family.

One effect of a rising-crime climate is giving less weight to the future and living more in the now -- not surprising when rising crime rates make a safe and secure future look less and less likely. This is a "facultative" response, one that responds to current conditions. If those conditions are present long enough over time, people will evolve an "obligate" response: their discounting of the future becomes more wired-in where the environment is violently unstable.

So perhaps the parents splitting up and telling their kids good luck was part of the greater pattern of impulsiveness or discounting of the future. Abortion rates took off until circa 1990 as well -- hard to think of a more callous attitude toward your child's future than that.

Once the crime rate started falling in the '90s, parents projected a safer and more secure future, and began weighing the future more heavily. As one sign, they became less likely to opt for abortion or divorce as a solution to the "don't feel like raising kids" problem.

GSS variables: family16, cohort, educ, race
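
For anyone who wants to check the numbers themselves, here is a minimal sketch in Python of the kind of tabulation described above. It assumes you've already downloaded a GSS extract as a CSV with the variables listed (the file name is a placeholder, and the value codes should be double-checked against the GSS codebook):

import pandas as pd

# Hypothetical GSS extract containing the variables listed above.
gss = pd.read_csv("gss_extract.csv")  # columns: cohort, family16, educ, race

# In the GSS codebook, FAMILY16 code 1 means living with both own
# mother and father at age 16 (verify against your extract).
gss["intact"] = (gss["family16"] == 1).astype(int)

# Share growing up in an intact home, by five-year birth cohort.
gss["cohort5"] = (gss["cohort"] // 5) * 5
print(gss.groupby("cohort5")["intact"].mean().round(3))

# The same tabulation split by race, to see whether the reversal
# among the early-'90s births holds within each group.
print(gss.groupby(["race", "cohort5"])["intact"].mean().unstack("race").round(3))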

May 8, 2015

Prepping for cataclysms, neglecting ordinary emergencies

Our increasingly paranoid and status-striving society has passed a point of no return, where there are more young adults who carry around paracord bracelets and pocket knives to "prep" for disaster than those who know how to change a tire and carry the basic tools to do so in their trunk.

You'd think that if they're paranoid enough to be prepping for a shit-hits-the-fan scenario, they would also be planning for the smaller and more predictable disasters that a well-adjusted person would worry about -- flat tire, burnt out headlight / brake light, cuts bleeding enough to need a bandage, and so on.

Yet they don't behave like people who are going above and beyond the scenarios that normal people already have covered; they are prepping for the apocalyptic instead of the ordinary.

You might try to rationalize their neglect of mundane duties by saying that the apocalypse trumps everything -- however small the probability, the magnitude of destruction will be more or less infinite, so it deserves sole focus as "what to be ready for".

But Haidt's research on moral reasoning shows that it is typically a post-hoc rationalization of a gut-level intuition. Thus, the preppers have a gut-level aversion to stewardship of everyday affairs, and develop a conceptual excuse afterward -- they're not negligent, they're actually prepping, for, uh, lemme think... for a far more disastrous scenario than those that trouble normal folks. Yeah, that's it.

So, scrupulously carrying a pocket knife, and updating their paracord bracelet to the newest model, serves to pardon them from, say, cleaning out the lint and debris that's clogging their fan or computer, learning CPR, and getting practice as a handyman.

As an example of how frivolous their priorities are, consider what they include in their EDC -- everyday carry, or things that are on them no matter what. Googling "edc" and "first-aid" gives half a million results; likewise for averaging the results for "edc" and "band-aid" with "edc" and "band-aids". Less than half a million hits for "edc" and "multi-tool". Yet "wallet" and "knife" get over a million, and "light" and "watch" get over 50 million.

It's hard to think of something more useless in a doomsday world with no tight schedules to keep, than a wristwatch. If you need to tell time, just look up at the fucking sky like people have for millions of years. Are you really incapable of telling whether it's morning, afternoon, evening, or night by opening your eyes outdoors? And if you don't have a good intuition for whether something happened five minutes ago or five hours ago, you are braindead and won't need to worry about surviving the apocalypse anyway.

Yeah, but how are we supposed to start a fashion contest over looking up at the sky? Wristwatches FTW.

Focusing on the cataclysmic also serves their impulse toward status-striving: prepping for the apocalypse is Real Serious Shit, requiring Advanced Tactical Gear, whereas any fuddy duddy can learn how to test their gas pipes for a leak by spraying soapy water, or carry a first-aid kit in their car in case someone gets cut. Pursuing the fantastic and spectacular is more attention-getting than tending to duties that are realistic and mundane.

Of course that also means that these preppers are just LARP-ers, having little to no training, practice, or experience. But hey, they watched a YouTube series by some guru who served in the Gulf War, as though that were tantamount to downloading his brain a la The Matrix. Indeed, for all their rugged outdoors posturing, Neo is closer to their true hero -- someone who can become the ultimate urban survivalist badass by passively and instantly receiving the "content" of some cyber-guru, without having to put in any practice, go through any boot camp, or pass through any other rite of passage. Consumerism doesn't count ("purchasing my first multi-tool").

Perhaps that's another reason why they're so obsessed with watches -- they wouldn't be spending time doing anything real, and would have to engage in some pointless repetitive activity to assuage their anxiety and make them feel like they were getting shit done. Let's just keep glancing down at our watches, and hopefully that will allow us to just wait out the end of the world as we know it. Their "gear" is simply a collection of talismans and fetishes being stroked by the impotent in an attempt to feel capable and powerful.

Normal people recognize how useless these posers would be in a real disaster, but the preppers reckon rank by the upvotes they receive from one another.

Sadly this phenomenon generalizes to all sub-cultures in a striving climate -- ordinary duties are neglected in the pursuit of vanity points in some circle-jerking status contest.

Related post: doomsday prepping in the civic Midcentury vs. anarchic Millennial eras

May 6, 2015

Where did all the annoying bumper stickers go?

From the 1990s through the mid-2000s, it wasn't unusual to see a car whose bumper, or even entire back side, was encrusted with stickers of confrontational whining, smug slogans, adolescent humor ("Your mom's hot"), and/or a list of your favorite "edgy" bands. Cars with only a handful of stickers were more common still.

You don't see that anymore, and on the rare occasion that you do, it's clearly a fossil from that earlier era -- "Impeach Bush," "Fukengrüven," "COEXIST," "Mean People Suck," "Phish," etc. What happened?

Flashing to the world all of your annoying opinions and obsessions ("interests"), from behind a wall of anonymity, in a drive-by fashion, trying though often failing to smugly troll strangers -- sounds an awful lot like what the internet was made for. Or rather the web 2.0, when comments sections and social media were born. Only now the technology allowed you to annoy people all over the world -- get more bang for your broadcasting buck.

Although seemingly trivial, the case of bumper stickers illustrates an important point I keep making about technology and social life: technology doesn't make us use it any particular way, and a technology may only become widely adopted because users were already heading in a new social direction.

This is complementary to the standard view that technology colonizes our society, and we are unwillingly affected by it for better or worse. I don't deny that that happens, but it relies on the assumption that the users didn't really want it -- why not ask them first and see how enthusiastic they were to adopt it?

Thus, anonymous comments and social media did not tempt people into blabbing their confrontational, smug, gotcha! slogans to the rest of the world. That attitude and behavior was already highly visible back in the '90s, and even the 2000s -- right up until the web 2.0 opened its doors. Twitter did not set off the battle between SJWs and their counter-trolls; that existing culture war simply shifted arenas, from car bumpers to social media sites.

It also goes to show how little the difference between today and the '80s has to do with technological changes. People didn't have anonymous comments and Twitter back then, but they had bumper stickers and decals -- why didn't they plaster dozens of stickers on their bumper, using them for hostile crusading like they would come to do during the '90s? Quite simply because they didn't have that attitude.

The primary change between the get-along '80s and today is one of attitude, social stance, worldview, and so on, not technology. The '90s is the crucial decade to resolve the matter. Like the '80s, it lacked an internet with anonymous comments and social media sites. Unlike the '80s, people's attitudes had shifted toward cocooning and anxiety or hostility in social situations.

The social mood trumped technological constraints, with people of the '90s making do with bumper stickers for socially anxious confrontations: to wage SJW crusades (or to troll the SJWs in return), to blab their obsessions to the world, and to try out one-liners on an audience that can't respond by rejecting them.

May 3, 2015

Gay marriage will move on to further gay crusades, not letting other weirdo groups get married

The most common response from conservatives who raise the troubling matter of where gay marriage will lead is that it will lead to other wacko kinds of marriages being sanctioned -- polygamy, bestiality, incest, pedophilia, whatever.

But here's the real deal (from a comment I left here):

The next radical social-political experiment won’t have to do with marriage — that’s falling for the con that the gay marriage issue is about marriage first, and secondarily about whom it’s letting into the institution.

Clueless conservatives, or rather befuddled reactionaries, respond with, “Well, if you’re going to let ridiculous group X get married, why not ridiculous group Y? And ridiculous group Z? Where will the desecration of marriage end?”

But the culture war is not about marriage — it’s about giving fags all sorts of privileges that they don’t deserve, ignoring and indeed obscuring and denying the fact that they are fundamentally abnormal rather than normal, which justifies discrimination against them (i.e. treating them differently under the law).

That’s what makes the idea of them being married and committed such a joke, or the idea that two giddy Peter Pan homos are just as maternal and nurturing toward children as a mature woman. Or that what makes them *them* is no less healthy and wholesome than what makes heteros *hetero* — just don’t ask about how many diseases are devouring their beleaguered half-corpses.

Therefore the next big crusade in the culture war will be about “what other ways can we propagandize homosexuality as ‘just like us’ and give them goodies accordingly?” Not “what other risible group should we allow to get married?”

Look at blacks in the Civil Rights era — it didn’t move to “what other group should we allow to enter our wholesome white schools?” They didn't send in the National Guard to forcibly integrate the Mexicans, Orientals, American Indians, etc. Rather, it moved to “what other privileges, set-asides, and quotas can we shower on the blacks?”

The culture war is based around sacralizing a victim group (blacks, fags), not desecrating a particular institution (a side effect, not a sustained target).

May 1, 2015

Cosplay remakes and the uncanny valley (video for "Fancy" by Iggy Azalea)

The cosplay fanfic approach of the new Star Wars movie will strike normal people as weird and off-putting, though in a way that's hard to explain. A gut revulsion suggests a role for disgust, rather than a conscious list of reasons why it looks bad.

I still couldn't put my finger on what is (mildly) disgusting about it, so I looked for another example of the cosplay fanfic approach to pop culture.

Here is the music video for "Fancy" by Iggy Azalea, last year's song of the summer, with over half a billion views on YouTube. Its set design, locations, clothing, hair, and plot vignettes are ripped from the 1995 movie Clueless, probably the last coming-of-age teen movie with likeable characters. Yet everything about the words, intonation, facial expressions, body language, and general attitude of the girls in the music video is the polar opposite of the characters in the movie.

In Clueless, the protagonist Cher is a well-meaning ditz who occasionally bumbles in her nurturing attempts at playing matchmaker. (The movie is based on Emma by Jane Austen.) She tries to make over the new student Tai, a free-spirited, socially awkward naif who becomes more savvy and popular, acts too big for her breeches, but ultimately reconciles and acts humbly around her friends. They show a basic concern with doing right by others in order to fit in. They want to be liked and accepted into a group, not to be worshiped by fans and feared by haters, both groups being socially distant from the diva at the center of attention.

See the trailer here, although it focuses more on dishing out one-liners than establishing character traits.

Fast-forward to Iggy Azalea and Charli XCX aping Cher and Tai in the "Fancy" video. Both are hyper self-aware pose-strikers, unlike the ditzy and spacey characters from Clueless. Their attitudes are smug, bratty, and decadent rather than uncertain, seeking to please, and wholesome. They're self-aggrandizing and condescending rather than other-regarding. They aspire to being distant divas and icons, rather than friends accepted into a clique. And they give off an overly sexualized persona, whereas the appeal of the original characters was not simply to gawk at their ass and thighs.

The contrast for anyone who remembers the movie is so harsh (way harsh, Tai) that it creates an uncanny valley reaction, where something lies between two opposites and leaves the viewer disturbed. Most CGI human beings provoke such a response -- neither human enough, nor robotic enough, but more like a freak of nature.

It gets worse. Seeing actors play totally against what we associate with their clothing, environment, and overall zeitgeist leaves us asking, "What happened to the real people who wore those clothes? Went through those vignettes? Lived in that place?" It feels like the impostors are not just try-hard wannabes, but body-snatchers who have killed what is familiar and replaced it with something alien. It's like that scene in Silence of the Lambs where the he-she serial killer is donning a wig and twirling around in his lady-flesh-suit.

Iggy Azalea has killed Cher from Clueless and is wearing her skin.

Earlier examples of LARP-ing in popular culture at least tried to remain as consonant as possible with the original -- Grease, Back to the Future, Forrest Gump. Now the point is simply to body-snatch the sympathetic original characters and assimilate them into the loathsome present, like some kind of pop-cultural Borg. It is appropriation not out of affection and nostalgia, but simply to claim more and more territory of the good old days for idiotic, imperial trends.

I know -- BFD if it's some throwaway music video. But remember that this is what's going to unfold during the entirety of the new Star Wars movie. And it will only grow from there: the decision of the Star Wars brand sets a binding precedent.

Earlier remakes and reboots tried to distinguish themselves from the original by using a different visual style, exploring other parts of the narrative and character development, and so on. Always boringly, but they were different. Now the rehash movies are going to move into cosplay mode -- "It looks just like the real thing!" (don't ask how it tastes, though). Expect pop culture to get even more off-putting in the near future.

April 29, 2015

Cocooning still continuing through new General Social Survey data

Now that the 2014 data for the General Social Survey have been released, we can see if recent social trends are continuing or reversing. I'll be focusing on those that relate to the cocooning-and-crime cycle, which plays out on the local and interpersonal level, where we typically only have impressions rather than hard data.

Here is an earlier post that lays out the dynamics of crime and cocooning behavior. Briefly, when people come out of their shells and let their guard down, it makes them more vulnerable to manipulation or predation by criminals and con artists. An outgoing social mood leads to rising crime rates. As crime rates get higher and higher, people begin to worry more about who they can trust. Rising crime erodes trust.

Ultimately they figure it's not worth the risk to be socially open around strangers and begin to close themselves off. That leaves slim pickings for criminals, so that cocooning causes falling crime rates. With the environment becoming so safe, people reassess how necessary it is to cocoon themselves from seemingly non-existent danger. Ultimately, low crime rates lead people to re-emerge from their cocoons, which begins the cycle all over again.

Violent and property crime rates have been falling since a peak around 1992. They fell dramatically during the '90s, looked like they would bottom out during the 2000s, but have continued a steady descent over the past five or so years.

That should have been enabled by the continuation of the cocooning trend, and indeed the new GSS data show no reversal in any of the key signals of people closing themselves off to others.

The main psychological trait here is trust, and it continues to fall. The GSS asks if other people can generally be trusted, or if you can't be too careful. A high was reached in the late '80s, with around 40-45% of Americans trusting strangers. After a decline, it appeared to hold steady at 32% from 2006 onward. In the 2014 survey, though, it took an extra dip down to only 30%.

This withdrawal of trust cuts across every demographic group, so I'm not controlling for any of them. Race, sex, age, education, class, marital status, region, size of local population, political orientation -- everyone is noticeably less trusting of strangers than they were 25 years ago.

One of the most dramatic drops I noticed was among young people. The only age group that is about as trusting as it used to be is 60-somethings. Every other group shows the decline, but the drop is steeper the younger the group. Among people aged 18-24, trusting others dropped from 35% to 14% from the early '90s to 2014. But even among 40-somethings, trust levels fell from 48% to 28% during the same period (about the same absolute decline, but relatively smaller given how high it started).

I interpret that as younger people being more susceptible to cocooning because not trusting strangers and wanting to just play by yourself is a natural part of immaturity. Young people being as socially open as they were back in the '80s was more of a radical departure from what you'd expect based on their age, so it snapped back harder once cocooning set in (regression toward the mean).

You may be thinking, "Well, there's still at least 30% of Americans who trust others -- they're a minority, but it isn't like they're non-existent. And the maximum was only 40-45% before. How big of a change can that be?"

The difference is that trust is not part of isolated, individual behavior -- it relates to interactions among pairs of individuals, or larger groups still. Pick two people at random, throw them together, and see if both of them are trusting. If so, they can sustain a getting-to-know-you interaction. If only one is trusting, the interaction will sputter out. If neither one is trusting, it won't even be initiated.

The chance that two randomly chosen people are both trusting is the square of the trusting share of the population. (Pick one, pick another, multiply the probabilities.) Squaring a fraction makes it much smaller, so looking at just the trust level among individuals underestimates how fragmented society has become.

In a world where 45% are trusting, the chance that any two strangers who run into each other will both be trusting is 20%. In a world where only 30% are trusting, those two strangers have only a 9% chance of both being trusting.

Thus, even though a trusting disposition has "only" fallen by one-third, from 45% to 30%, trust-based interactions between a pair of strangers have fallen by more than half, from 20% to 9%.

It's even worse for those youngsters. When their trust levels fall from 35% to 14%, successful interactions between a pair of strangers who run into each other fall from 12% to just 2%. Of course, folks can make small talk without having to trust each other, but I'm talking about the ability of people who haven't met before to open up and connect with each other right off the bat. It may have been difficult before, but it's nearly impossible now. They might as well be toddlers who think everyone other than mommy and daddy are dangerous, or who are at least not worth trusting to share their toys with.
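
In case anyone wants to double-check those figures, here is the whole computation in a few lines of Python (the percentages are the ones quoted above, and the only assumption is that the two strangers are drawn independently):

# Chance that two randomly chosen strangers are both trusting: p squared.
for p in (0.45, 0.30, 0.35, 0.14):
    print(f"{p:.0%} trusting -> {p*p:.0%} of random pairs are both trusting")
# Output: 45% -> 20%, 30% -> 9%, 35% -> 12%, 14% -> 2%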

If you've wondered why you never see young people letting it all hang out and feeding off each other's energy, that's why. They simply don't trust anyone.

Going out to a bar on a somewhat frequent basis is also less common than it was back in the '80s. It's most pronounced among younger age groups, and only those who are 55 and older are more likely to go out to a bar or nightclub than their counterparts used to be. That's the Boomers refusing to age gracefully, and not really a sign of an outgoing disposition. They're there to engage in a contest of "who's still got it?" rather than to open up and have fun.

Spending an evening with a neighbor has also continued to decline since its peak in the late '80s.

Both men and women continue to have less frequent sex. Doing it only once a month, or less frequently, afflicted nearly 40% of women in the early '90s, but nearly 50% in 2014. For men, infrequent sex rose from around 30% to around 40%.

Those questions establish that people are still cocooning. What about gradually realizing that the world isn't so dangerous anymore? Fear of walking around your neighborhood at night tracks the crime rate, lagging behind it by a few years (just to be safe). In 1994, 45% of Americans were afraid; by 2014 that share had continued to drop, down to 31%.

Predicting how long this period of cocooning and falling crime will last is not an exact science. The last time, it was about 25 years, from a peak of crime in 1933 to a bottom in 1958. Keeping tabs on the social mood is more important: once we see a steady rise in trust and open behavior, we can expect crime rates to start rising shortly after. So far, though, that doesn't appear to be around the corner.

GSS variables: year, trust, socbar, socommun, sexfreq, fear
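
As with the broken homes post, here is a minimal sketch of how the trust trend could be tabulated from a GSS extract, assuming a CSV containing the variables above plus AGE for the age breakdowns (the file name, the exact survey years pulled, and the value codes are assumptions to verify against the codebook):

import pandas as pd

gss = pd.read_csv("gss_extract.csv")  # columns: year, age, trust, fear, ...

# TRUST code 1 is "most people can be trusted"; 2 is "can't be too careful";
# 3 is "depends" (verify against the codebook for your extract).
gss["trusting"] = (gss["trust"] == 1).astype(int)

# Share trusting by survey year.
print(gss.groupby("year")["trusting"].mean().round(2))

# Share trusting by age group, comparing early-'90s surveys to 2014.
gss["age_group"] = pd.cut(gss["age"], bins=[17, 24, 29, 39, 49, 59, 69, 89])
early90s_vs_2014 = gss[gss["year"].isin([1990, 1991, 1993, 2014])]
print(early90s_vs_2014.groupby(["age_group", "year"])["trusting"].mean().unstack("year").round(2))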

Neighborhood-level diversity prevents rioting among blacks and SWPL decadence among whites

Robert Putnam's research on diversity and civic participation shows that the more diverse an area is, the less likely the individuals are to coordinate their shared interests at a larger scale. That not only affects relations among individuals from different ethnic groups -- blacks and whites won't cooperate -- but even within the same group -- whites don't cooperate with each other, and blacks don't cooperate with each other.

Could there be an upside to the failure of individuals to coordinate their collective behavior? Yes -- if their purpose were anti-social or decadent. The decay on display in Baltimore provides a great case study.

It's no surprise that the rioting and looting are taking place in neighborhoods that are nearly 100% black. Blacks are more impulsive and inclined toward both violent crime and property crime. What is unusual, however, is that the neighborhoods that are closer to 50-50 black and white are not merely afflicted by rioting to a lesser degree than the 100% black areas, but are hardly affected at all. (See the maps at the end of this post.)

What gives?

Rioting and looting are collective behaviors, however fleeting and decentralized. They do not require sustained interest and permanent institutions to carry out the collective will, but they do rely on a minimal level of in-group cohesion and trust in order to keep the snowball growing rather than flaking into pieces or turning the members against one another.

In fact, with a little more regular participation and a bit more of an "honor among thieves" relationship, coordinated crime by an organized ethnic group could sustain a gang or mafia, again provided the area belongs entirely to that ethnic group.

The mafia operated in neighborhoods that were predominantly Italian, not in those that were half-Italian and half-non-Italian. Black gangs controlled South Central L.A. back when it was all black; Mexican gangs control it now that it's all Mexican. If the neighborhood was only half-Italian or half-black, the mafia and gang problem was not simply half as bad as in the fully Italian or black areas, but could not get going in the first place. (Of course, they would have still been subject to individual-level crime, just not collectively organized crime.)

White enclaves in large cities tend not to be stricken by rioting and looting, because anti-social whites express their deviance in non-violent ways. When they coordinate their deviance at the collective level, they take over the local education system and ban peanuts from school grounds, they carve out bike lanes for non-existent bike riders while clogging the narrowed streets for actually-existing drivers, and they take over any business area that once supported a variety of normal utilitarian shops and turn them all into arenas for decadent status contests (quirky bars, quirky coffee shops, quirky doggie yoga day-spas).

Yet as in the case of black rioting, these collective insanities only infect neighborhoods that are nearly 100% white. If it's only 50-50, the hipsters and yuppies don't feel emboldened enough to organize their decadent impulses. They don't have the sense that their ethnic group totally owns the place and can do whatever they want with it, for better or worse.

Overall, diversity is corrosive to society at any scale. But there is a silver lining: it also prevents anti-social collective behavior from catching fire.

Maps of diversity and rioting in Baltimore

Here is a map of racial diversity around Baltimore. Blue dots are blacks, red dots are whites. (From a series of similar maps here.)


The core of the city is where the dots are the densest, more or less in the center of the image.

There are two stark areas that are mostly black -- West Baltimore and East Baltimore, close to the core. Farther away from the core (for example, toward the northeast), the blue dots overlap red dots, showing a more mixed area than a pure-black ghetto.

There are three main areas that are mostly white -- North Baltimore (the large wedge pointing south that separates the black areas to the west and east), and two smaller but denser enclaves that lie just south (on a tiny peninsula) and just southeast of the core (in a red ring).

Yuppie and hipster decadence is concentrated in the all-white areas, such as Hampden in the North Baltimore wedge, Downtown near the center, and Fell's Point in the red ring lying southeast of the core. To the northeast, there are still plenty of whites, but they live in more diverse neighborhoods. SWPL decadence in a "nice boring" place like Belair is not at half the level of Fell's Point, but barely there at all.

As for black decadence, here is a map of the major riots in 1968, overlaid with the riots in 2015 (which are much smaller -- though wait until 2020). They come from this article at Vocativ.


The scale on this map is more zoomed-in than the map of diversity. The major and minor riots have afflicted the all-black areas of West and East Baltimore, close to the core. There are plenty of blacks living out to the west and southwest, as well as out toward the northeast, but they find themselves in more diverse neighborhoods.

In these diverse neighborhoods, would-be rioters apparently don't feel they can trust their fellow blacks enough to carry out an afternoon and evening of looting, trashing windows, and setting cars on fire. If only they owned the whole neighborhood, "shit would git real". But with nobody trusting anybody else in a mixed area, they're going to just watch the riots on Wurl Stah and vent their aggression on Twitter.

When the neighborhood might otherwise be burning down, here's one cheer for diversity-induced atomization.

April 27, 2015

Movie trailers as serial drama (STAAAAARRRRR WAAAAAARRRSSSS)

On the last episode of "Agnostic reacts to Star Wars trailers," we learned what the new trilogy will amount to -- a cosplay fanfic sequel for Millennials.

And now that they've released the next installment of "Trailers for That New Star Wars Movie," that assessment is certain. You can almost see the Millennial in stormtrooper costume walking up to Harrison Ford and nervously asking for his autograph. I wonder whether that'll be relegated to a making-of sequence during the credits, or be included in the main narrative itself.

("Gee Mr. Solo, you're some legend around these parts... It sure would do me the honors if you'd, uh, do me the honor of signing my toy lightsaber!")

I still don't know what the hell the movie is going to be about, but contemporary audiences don't want any SPOILERS whatsoever.

Trailers are no longer meant to reel you in on the first viewing. They have become a serial drama form unto themselves. The first reveals a tiny bit, and leaves the audience on a cliffhanger. The next one recaps the last one (barren desert landscape, speeder bike battle, lightsabers), but reveals a little more (Vader helmet, Han and Chewie, TIE fighter pilots).

Who knows how many more episodes there will be before the series finale -- the trailer that tells you what the hell the movie is going to be about.

Not following the hype cycle of modern movies, I was unaware of the trend of trailers as soap operas (gossip about them online when the new episode comes out!). I'm even more out of touch with video games, but their hype cycle is so huge that even someone who doesn't play them anymore may know about it. First there's a hint from the developers, then a spectacle teaser during E3, then a beta version, then a playable demo, and finally two years later, the actual game.

I remember when the movie trailer was a terse stand-alone format, and when new video games were announced once they were released, not years ahead of time.

But, that was back when people still had a life. Folks in outgoing times have too much of a dynamic social life to tolerate a serial format stringing them along and keeping them waiting. Soap operas were huge in the Midcentury, but were marginal by the '80s. Short film serials were popular at theaters in the Midcentury, but were also absent during the '80s, whose climate was similar to the Roaring Twenties. Only since the cocooning climate returned during the '90s did serial dramas return to mass entertainment, this time on TV.

They could have made a string of teaser trailers for movies back in the '80s, to be shown on TV commercials or in theaters, but they didn't. Those are a new development -- since when exactly, I don't know, although I have a hunch the Lord of the Rings movies had serial trailers.

Cocooners are bored out of their minds, so they crave a steady and regular fix of anything meant to wake them up. Previously, on "dissecting popular culture," we looked at entertainment as a mood stabilizer vs. experimentation, making the link to stabilizing vs. destabilizing types of drugs.

The stabilizing kind were popular in the Midcentury and have become popular again since the dawn of Prozac circa 1990. Ward Cleaver had Kellogg's Pep and Geritol, while his grandson has Monster energy drinks and Viagra. The destabilizing kinds like LSD are meant to be taken in stand-alone sessions, as though each trip were to somewhere different.

Movie trailers have clearly joined the mood stabilizer family of entertainment. Life is boring, but don't worry, another teaser trailer for Whatever Part Four comes out next week. And don't worry, it won't contain any spoilers -- which would ruin the fix you ought to get from the next trailer after that one.

Spoilers may not answer every question about who, what, when, where, why, and how, but they do close off certain paths through which the trailer-makers could have strung you along. And now that the function of trailers is to provide a regular dose of stimulation to bored nerds, they no longer tell you what the hell the movie is going to be about.

April 25, 2015

Exotic cuisine, status-striving, and achieved vs. ascribed status

What role does the increasing popularity of foreign food play in the larger trend of status-striving over the past 30 or so years?

The usual view, which I had bought into without giving much thought to it, is that it has to do with signaling how esoteric your tastes are, and by extension how erudite you are in the foodie world. Everybody knows about "Mexican food," but do you know what the "cuisine of Oaxaca" is like?

In this view, the players in the status contest are trying to one-up each other by discovering, obsessing over, and then abandoning one exotic cuisine after another. Each cuisine goes through a fashion cycle, and the larger contest is jumping from one to another, each cuisine less obvious than the last.

And yet, after three decades of fashionable foodie-ism, Asian restaurants are still basically Chinese, Japanese, and Thai. Japanese has not fragmented into increasingly esoteric sub-cuisines -- Okinawa, Hokkaido, Tokyo vs. Osaka, etc. Thai-mania has not led to obsessions over Vietnamese, Cambodian, Burmese, Laotian, Malaysian, or other Southeast Asian food.

African food is still Moroccan and Ethiopian, leaving out giants like Egypt as well as tiny places like Eritrea.

Caribbean food is still Cuban and Jamaican, leaving out dozens of smaller and more obscure islands.

South American food is still Brazilian. Central American is still Mexican, and still catch-all Mexican rather than dozens of sub-cuisines finding their own success.

Middle Eastern is still Lebanese and Persian.

"Indian" is still northern and western Indian, not Tamil, Bengali, or Nepali.

Eastern Europe is still totally avoided and unexplored, outside of the Mediterranean food of Greece, which has not led to a trend-hopping chain to Serbian, Bulgarian, Albanian, etc., after the initial novelty of Greek wore off.

This is not to overlook the occasional exception that finds a niche audience, like Mongolian barbeque. The point is that if the goal of the contest were to burn through ever more exotic and esoteric cuisines, Thai food should have been done by the end of the '80s, and Tibetan restaurants should have enjoyed a burst of success at some point along the way. Tibetan food's no-show status is even more puzzling when you look at how much the elite likes to show its sympathy for the culture of Tibet.

If it's not a case of trend-hopping, how does the foodie phenomenon tie into the status-striving climate after all?

It looks more like it ties into the switch from cultural identities being ascribed status to achieved status, to use some sociological concepts. When some aspect of cultural identity is ascribed, it's beyond the individual's choice and is usually inherited from parents or community upbringing. Your parents were Baptists, so you're brought up Baptist, and you remain Baptist in adulthood. If that piece of cultural identity is achieved, it comes through a more effortful choice by the individual. For example, your parents were Catholic and raised you that way, but you convert and become a Baptist as an adult.

An earlier post explored the link between the status-striving climate and identity as achieved status, as opposed to identities as ascribed status in an anti-striving or accommodating climate.

In short, if the impulse is to climb up the status ladder, to reside wherever you need to do so, to behave however you need to, then the norms must favor identity as something that you can choose and craft to suit your needs and preferences. If the impulse is to rein in the competitive war of all against all, then the norms must make identity something that is beyond the individual's ability to mess around with, and keep people more or less where they already are.

Thus, dynamism is supported by norms of laissez-faire, with collectively destructive competition as the side effect, while stasis is supported by norms of reining-it-in, with collective harmony as the side effect (or rather the intended goal).

Food has been part of ethnic identity forever, seen most clearly in food taboos that distinguish Us from Them. Incorporating foreign food into your regular diet tells others that your cultural identity is constructed rather than handed down. That signal lets them know that you're a serious contestant in the status-striving competition. Once you've identified one another, you get to feel a status boost over those who are not eating foreign food on a regular basis. It also lets you identify who your micro-competitors are -- everyone who is into Indian food can now begin the contest over who knows the best Indian places.

The broader importance of signaling your diet of exotic food, though, seems to be telling or reminding others that they shouldn't try to regulate anything you do. In a climate of greater regulation, a white person seen eating Indian food every night would be looked at funny until they started to eat what is normal for someone of their cultural descent. In an anything-goes climate, there are few ways to more convincingly flout the norms about regulating the self on behalf of group cohesion.

Even better, it's not a very flagrant, aggressive, or offensive way to let others know not to bother trying to regulate your behavior, unlike punk-y clothing and hairstyles that are unabashedly giving society the middle finger. Indian food isn't inherently anti-social, unlike shredded clothing, tongue piercings, green mohawks, etc. It doesn't offend us at the most basic gut level, as though we saw someone eating bugs (notice that the inherently gross stuff in exotic cuisines is strongly avoided). Bug-eating is offensive no matter whether that's native to their culture or a foreign adoption.

But what's so gross about palak paneer, an Indian dish of spinach and cheese? Nothing, and it wouldn't seem so out of place in European cuisine, except for its distinctly Indian flavor. Making it a regular part of your diet is not designed to offend the norms that regulate us away from eating inherently gross things, but those that steer us toward what our culture does and away from what a foreign culture does.

Broadcasting your taste for exotic cuisine makes your message of "don't try to regulate my behavior" a bit more palatable, as it were, since it requires conscious thought to construe your behavior as rule-breaking, rather than a gut reflex to see so. It's one of the most pro-social ways you could go about signaling your lack of social constraints.

That probably explains why the phenomenon is biased toward the elites, who want to appear superficially polite and civilized, whereas the bird-flippers at the lower-middle-class level will just buy some obviously offensive t-shirts, chains, and piercings from Hot Topic.

April 23, 2015

"Problematic faves"

I remember when having a problematic fave meant you were into Culture Club despite the singer being a cross-dressing faggot.

April 22, 2015

"I Really Like You" by Carly Rae Jepsen

Contrary to what everyone is saying, this song doesn't sound like the '80s, but it has a refreshing emotional tone nonetheless. It isn't bratty, emo, or self-absorbed. It's basically sincere, uncomplicated, and other-pleasing.

For 20 years, female pop singers have been broadcasting how little they depend emotionally on men. Either they're scum and don't deserve attention ("No Scrubs," "We Are Never Ever Getting Back Together"), or they're fleeting conquests of empowerrrd womynnn ("Shoop," "Blank Space"). Two sides of the same slutball coin (both types ironically sung by a virgin who only "dates" fags, Taylor Swift).

The songs that are supposedly about being in a loving stable relationship don't ring true and sound forced ("I Wanna Love You Forever," "Umbrella"). Perhaps that's because there aren't any songs about the initial infatuation that establishes the couple's chemistry as a prelude to love. (Again, not talking about the shallow "I'm hot, you're hot, let's do it" songs about lust at first sight.) If we're not convinced of the organic nature of their first encounters, then hearing about pop singers' steady relationships will sound staged and going-through-the-motions.

Wholesome, bouncy songs about the initial stages of courtship used to be a dime a dozen back in the '80s and early '90s -- "I Think We're Alone Now," "Shake Your Love," "I Love Your Smile" -- but there are notable differences from today's "I Really Like You".

The singers from the good old days were teenagers, who sound more believable than the nearly 30-year-old Jepsen when it comes to feeling butterflies in the stomach. They also sounded more mature back then, as though they'd been infatuated and in a relationship several times already, whereas Jepsen sounds more like a sixth-grader getting her first crush. Another case of Millennial stunting caused by helicopter parents socially sheltering them.

And of course they don't sound anything alike. The older songs are melodic, the verses are sung rather than mumbled-and-shouted, the drumbeat is more elaborate than a metronomic thud, and the instrumentation is rich rather than sparse.

It goes to show how superficial music critics are, that they lump songs together that use the same family of instruments, rather than, y'know, how they actually sound. "Synths + drum machine = SO '80S!!!" It's more like a contempo pop song wearing an '80s costume. The video likewise dresses up as an '80s video, with Tom Hanks replacing Chevy Chase as the comedic actor who lip-syncs the lyrics while acting goofy.

Even the tone, while unlike the typical self-absorbed or self-conscious tone of today's music, isn't at an '80s level of letting your guard down. It's more like the atmosphere of the mid-to-late '50s, although I can't think of a good comparison song off the top of my head. Something in between: not as forced as the Chordettes, but not as sincere as the girl groups of the early '60s.

In general, the people looking to make the Next Big Thing should stop trying to copy the '80s and look more to the late '50s and early '60s. That was the beginning of the outgoing and rising-crime climate that would reach its culmination in the '80s. It's hard to imitate an apex, but less daunting to recreate the simple inchoate beginnings.

Once we finally do shift from a cocooning to outgoing social mood, it'll only be at the level of the last shift circa 1960. We're not going to skip straight to the end. Our mindsets, both the musicians' and the audience's, will be more aligned with those of 1960 than 1980 or 1990.

April 19, 2015

Millennial moms and dads reversing helicopter parent trend?

We covered this topic last year (here and here), but I'm starting to see hints of it in real life now.

Yesterday afternoon I stopped at a park to eat, and the picnic tables were near a playground, where about ten children were playing. If it had been just three years ago, every kid would have had a parent shadowing their tiniest moves, serving as their playmate rather than one of the other kids, and any time a child came near a stranger (whether a child or grown-up), the parent would swoop in to block the potential contamination / abduction / whatever they thought was going to happen.

I glanced over a few times out of curiosity about how ridiculous helicopter parenting has become this year. But I was surprised to only see one obvious helicopter parent among the ten kids -- one of those overly involved goofball dads who thinks his kid would rather play with a grown-up goofball than one of the other kids. Just let them play by themselves -- except this time they were!

There was a group of much older adults, probably the grandparents, and being early-mid Boomers they were hands-off just as they were when they were new parents. But where were the other hoverers and smothering mothers? One group of children looked to be semi-supervised by a teenager, but not by an adult. These were all white kids, by the way, not the Mexican kids who are allowed to go out and play by themselves. That really stood out as unusual.

Then as one mother was leading her son back to the car, he jumped up on a picnic table, walked to the other end, and leapt off. A helicopter parent wouldn't have allowed any of those actions to take place (jumping off a table = skinned knee alert), and would've flown into containment / safety landing mode right away. Not out of respect for public picnic tables in a public park, but because she'd be paranoid about her son's safety, and embarrassed from her son making her look like a negligent parent in front of the other parents, simply by letting kids be kids.

She didn't encourage his behavior; she just went along with it, apparently thinking "boys will be boys." No parent would've thought that in this situation just a few years ago.

Aside from the teenager, these children were all about 3 to 7 years old. Their parents must be in their late 20s and early 30s, i.e. Millennials. The nonchalant mom with the up-up-and-away son didn't look old enough to be a late Gen X-er.

The small sample size here is not a problem, since there has been almost no variation in the basic parenting style for years now. Any break from uniformly 100% helicopter parenting is highly out of the ordinary.

I've heard Millennials on the internet and on TV say they're going to be less hovering when they're parents, but had yet to observe it in real life. Now that their kids are old enough to be seen on the playground, you might start to notice a change back toward the good old days of hands-off parenting from now on.

Don't expect it to jump right to the '80s kind of environment, when children went to the playground with no adults at all. It'll be more like the late '50s and early '60s, when the Dr. Spock and drive-in cocooning trends were just beginning to loosen up.

I have no delusions about how hilarious it's going to be watching the Millennials attempt to raise children. But I am still glad that the community-fragmenting trend of helicopter parenting is finally going to come to an end, and that kids around the neighborhood will once more be part of an organic connected peer group, without having to route all interaction through their parental delegates.

April 16, 2015

The no-show of Jews in dance music: A survey of disco, new wave, synthpop, and dance-pop

While Jews may dominate the business side of the music industry, their accomplishment on the creative side has been more uneven.

They have always had an outsized influence among rock groups, although typically in the angsty misfit genres, such as heavy metal, punk, and alternative (Lou Reed, Kiss, Quiet Riot, Twisted Sister, the Ramones, NOFX, Bad Religion, etc.). Despite their participation in adversarial African-derived genres like rap (the Beastie Boys) and ska (the Selecter, the Specials), as well as aloof / too-cool black genres like Midcentury jazz (Stan Getz), they scarcely took part in the more agreeable genres within black music like R&B and disco. And where mainstream pop ranges in tone from cheerful to longing, the range of Jewish crooners is less sympathetic to the listener, ranging instead from schmaltzy to complaining (Barry Manilow, Barbra Streisand, Bette Midler, Neil Sedaka, Carly Simon, etc.).

They are well represented in genres where the relationship between the performer and the audience takes the form of spectator and spectacle, but are no-shows in genres where the performer is more of a background instigator trying to work the audience members up into a participatory activity among themselves, such as dancing.

Aside from reflecting the Tribe's well known tendencies toward neurosis, these differences also show their inclination toward the verbal and psychological (whether cerebral or emotional) and away from the corporeal and kinesthetic. Dancing takes as much basic body coordination as other salt-of-the-earth pastimes like playing sports, hunting, fishing, and camping -- all activities that the mentally oriented Jews find awkward and off-putting.

You really notice the absence of Jews in cheerful, danceable pop music when you listen to an '80s compilation. I usually listen to albums by a single group, where broad patterns in the genre are not so evident. But with the much larger sample size on the compilation I was listening to the other day, I was struck by how few of the groups I'd seen on lists of Jewish cultural figures.

Pursuing that hunch, I perused several lists (such as this one and this one), and did my own search of musicians whose Wikipedia articles mention them being Jewish and being a singer or musician in the new wave, synthpop, disco, or dance-pop genres. This restricts the focus from roughly the '70s through part of the '90s, when dancing was a popular activity.
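For anyone who wants to repeat the exercise, here is a rough sketch of that kind of search using Wikipedia's public search API. The genre categories and the "Jewish" search term below are placeholders I picked for illustration, not the exact queries I ran:

import requests

# Rough sketch: find articles in a given musician category whose text
# mentions "Jewish". The category names are illustrative guesses, not a
# verified or exhaustive list.
API = "https://en.wikipedia.org/w/api.php"
CATEGORIES = [
    "American new wave musicians",
    "English synth-pop musicians",
    "American disco musicians",
    "Dance-pop musicians",
]

def search_category(category, term="Jewish", limit=50):
    params = {
        "action": "query",
        "list": "search",
        "srsearch": 'insource:"{0}" incategory:"{1}"'.format(term, category),
        "srlimit": limit,
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    return [hit["title"] for hit in resp.json()["query"]["search"]]

for cat in CATEGORIES:
    titles = search_category(cat)
    print(cat, "->", len(titles), "candidate articles")

Each hit still has to be checked by hand, of course, since an article can mention "Jewish" for reasons other than the performer's own background.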

The hunch panned out, with hardly any Jews in the more dance-oriented genres, unlike their heavy influence in rock and crooner pop.

In all of disco, there was only a single Jew -- Steven Greenberg, who founded the multiracial act Lipps Inc., the one-hit wonder known for "Funkytown".

Likewise in new wave, I could only find one confirmed Jew -- Nick Feldman, the bass player and half of the core duo of Wang Chung, who had a string of hits but are best known for "Everybody Have Fun Tonight". Jon Moss, the drummer for Culture Club, was adopted by a Jewish family from a Jewish-run orphanage, but I couldn't find a source that said his birth parents were themselves Jewish. Indeed, when asked in a recent interview if the orphanage accepted goys, he replied only with, "Probably, yeah," as though he himself is unsure of his genetic background.

Nor did the gods of synthpop treat the Jews as their chosen people. There's only one, and a halfie at that -- Pete Burns from Dead or Alive, a band that had a few hits but is best known for the dance classic "You Spin Me Round (Like a Record)". One of the members of Army of Lovers, whose biggest hit was "Crucified" in 1991, comes from an Algerian Jewish family, but I'm talking about the Ashkenazim here.

Paula Abdul was a dance-pop star throughout the late '80s and early '90s, though she too is Sephardic on her Syrian father's side (and Ashkenazi on her mother's side). I suspect her success owes more to the part of her blood that comes from the belly-dancing world than from the tax-farming world. Taylor Dayne, however, is fully European Jewish; her song "Tell It to My Heart" from 1987 is the beginning and end of the story of Ashkenazi dance-pop.

My search also turned up a handful of Jewish musicians listed under "new wave," but they're from the bands that were mostly playing rock, punk, and ska, with only a hint of disco, dance, and synth-rock -- the Knack, Rob Hyman and Eric Bazilian from the Hooters, Susanna Hoffs from the Bangles, and Danny Elfman from Oingo Boingo.

A tougher case to call is Blondie, whose guitarist (Chris Stein) is Jewish. They started off as a stripped-down punk and power pop band, and gradually evolved into a more eclectic style that mixed in synth-rock, reggae, disco, and rap. They were more of a bridge between the punk and new wave scenes, maybe proto-new-wave. Whatever you want to classify them as, they deserve an honorable mention in this survey.

Throughout human history, dance and music were two sides of the same coin, and only relatively recently has music become primarily passive on the audience's part, whether it's elite classical music or generic radio-friendly crap. Dancing is a group activity that bonds members together, giving music a key role in creating and maintaining a sense of community. Contemporary pop music that sets the stage for carefree dancing is an attempt to preserve those traditional roles of music.

Thus, the relative absence of Jews in dance music is part of their broader hesitation as culture-makers to create a more cohesive group-iness among their host population. (Please no retarded comments about the debt that Gentiles owe to all those schmaltzy Jewish winter-time tunes that don't have anything to do with Christmas.) They don't mind making a buck off of it as managers and record label executives, but actually creating it themselves -- too awkward, yucky, and shameful. Moving your body around in dance is fit only for the half-animal goyim, beneath what appeals to the mind of the mensch.

April 10, 2015

Are cops more likely to harm innocent whites in places of high diversity?

Like the Rodney King video of the early 1990s, the recent over-reaction of a white cop who shot an unarmed fleeing black suspect in the back in South Carolina will provoke much discussion about white cops and black victims.

Too many whites settle into the view of "Well, whatever the police have to do to keep the violent blacks at bay." But it is not realistic that a cop who is that callous toward blacks will somehow transform into a respectful servant when he's dealing with whites. The cop sees himself as pest control, and it makes no difference to him whether he has to unload his bug spray on hornets, termites, or your pet dog that didn't get out of the way like it was ordered to. All those different species of pests had it coming.

One of the key findings on ethnic diversity, from Robert Putnam's research, is that it erodes trust. The "no duh" outcome is that diversity makes people of one race lower their trust in people of a different race. But the surprising and disturbing outcome is that diversity even makes people of one race lower their trust in fellow members of their own race.

In Los Angeles, not only do whites not trust the Mexicans, they don't even trust the other whites, and they remain fragmented and impotent to organize for their own collective good. It's the polar opposite of white civic participation in a homogeneous part of the country like North Dakota or Iowa.

In short, when an individual is confronted with a Tower of Babel environment, which offers no possibility of coordinating a group's interests at the collective level, he withdraws from communal life and focuses only on his nuclear family, or perhaps just himself.

I suspect there's a strong influence of this dynamic at work in the growing and unregulated police state around the country. You tend to only hear about it in places with high levels of diversity.

The apologetic white response is that white cops in such areas would prefer to target only blacks and Mexicans and leave the nice whites alone, but are compelled by The Powers That Be to appear less racist, and therefore go after innocent whites to "narrow the gap" and avoid harassment, firing, and shakedowns.

When you look into what white cops are up to, though, you don't see people who love their own group and hate different groups. You see people who are in a hunkering-down, under-siege mentality just like Putnam's research would predict for folks living in areas of high ethnic diversity. Only these paranoids are armed to the teeth and don't even have to let you know you're about to be raided.

Thus, the more likely reason behind white cops over-targeting white folks in highly diverse areas is not to appear to be closing the gap, avoid harassment by the federal Department of Diversity, etc. Those white cops simply don't trust their fellow white citizens.

Contrary to liberal propaganda, these types do not put "white pride" bumper stickers on their car, but ones that say, "I'm not racist -- I hate everyone equally". Again, they are not trying to avoid harassment by the anti-racism squads: they honestly perceive members of their own group as potential bugs that may need to get squashed if they act too uppity, like sleeping below the window that you lobbed a flashbang grenade through.

This is impressionistic, but I think it's on the right track. Unfortunately the data that could resolve these questions are not collected, let alone published -- over-reactions by police, broken down by race of cop and race of victim, and broken down by geography.

Here are a few suggestive maps, though. The first comes from the Cato Institute's effort to map out botched SWAT-style raids (see full details by using their interactive map here). The second is USA Today's index of diversity, showing the chance that two randomly chosen people will belong to different ethnic groups.
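(For reference, a diversity index like that is usually calculated as one minus the sum of the squared group shares -- the probability that two random residents belong to different groups. A toy calculation, with made-up shares just to show the arithmetic:

# Diversity index: chance that two randomly chosen residents belong to
# different groups = 1 - sum of squared group shares.
# The shares below are invented for illustration, not real state data.
def diversity_index(shares):
    return 1.0 - sum(p * p for p in shares.values())

homogeneous_state = {"white": 0.85, "black": 0.10, "hispanic": 0.03, "other": 0.02}
diverse_state = {"white": 0.45, "black": 0.25, "hispanic": 0.20, "other": 0.10}

print("%.3f" % diversity_index(homogeneous_state))  # 0.266
print("%.3f" % diversity_index(diverse_state))      # 0.685

USA Today's version slices race and Hispanic origin a bit more finely, but the idea is the same.)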



The raid map would need to be converted into one showing per capita rates, but I don't think that'll make such a big difference. Also bear in mind that the pin marks look crowded and exaggerate how far north the signal extends, since only the point at the bottom of each marker shows the actual location being measured.

Highly homogeneous states like Ohio and Michigan are in the top 10 US states by population size, yet there are few pin marks on the raid map, and most of them are near the few hotspots of diversity in the region, like Detroit and Cleveland. Smaller but more diverse states like Colorado have more pin marks. So do similar-sized but highly diverse states like Georgia.

Leaving aside the marks that represent the killing of a cop, and focusing only on harm from cops to citizens, Ohio has 7 pin marks and Michigan just 4. Their population sizes are 11.6 million and 9.9 million, respectively. Colorado has 10 pin marks, about as many as both states combined, yet it has only about half as many people as either state alone (5.4 million). Georgia also has 10 pin marks, while being comparable in population to Ohio and Michigan (10.1 million). What Colorado and Georgia share, and what distinguishes them from Michigan and Ohio, is a much higher level of ethnic diversity.
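Put on a per capita basis, using just the counts and populations cited above, the gap is even starker -- a quick back-of-the-envelope sketch:

# Pin marks per million residents, using the raw counts and state
# populations quoted above (marks involving harm to citizens only).
states = {
    "Ohio": (7, 11.6),
    "Michigan": (4, 9.9),
    "Colorado": (10, 5.4),
    "Georgia": (10, 10.1),
}

for name, (marks, pop_millions) in states.items():
    print("%-8s %.2f marks per million" % (name, marks / pop_millions))

# Roughly: Ohio 0.60, Michigan 0.40, Colorado 1.85, Georgia 0.99

Colorado ends up with three to four times the rate of Ohio or Michigan, and Georgia with roughly double.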

Zooming in to the city level, the Columbus metro area has 0 marks involving harm to citizens, whereas similarly sized but highly diverse metro areas like Las Vegas and Orlando have 2 and 7 marks, respectively. The 90% white Pittsburgh metro area has only 1 mark involving citizens, despite being similar in size to highly diverse metro areas like Baltimore and Charlotte, both of which have 4 marks against them.

A more exhaustive list of incidents would have to be made, and a more fine-grained analysis performed, to settle the matter. But at first glance, it does appear that a higher level of ethnic diversity is linked to a greater tendency of callous over-reaction by cops.

Still, are the victims of these over-reactions white or black? Again we need better data. Sticking with the topical location of Charleston, SC, there is a pin mark on the raid map showing a lockdown-style raid of Stratford High School in 2003, with the aim of busting up drug deals. Today that school is 60% white, and so back then it would probably have been more like 65-70% white. Yet video from the school's surveillance cameras shows whites as well as blacks being treated like bugs by the pest control.

Diversity not only corrodes civic participation from citizens, it also leads to callous aggressive harassment of those citizens by the police. This problem is compounded by the difficulty of citizens organizing in highly diverse areas -- they can't coordinate an effort to de-escalate the increasingly paramilitary tactics of their own police forces.

Whites and blacks both got harassed by The Man in Stratford High School, but blacks and whites can't team up on anything, so The Man is free to continue his SWAT-style raids into the future. See also the poor labor history of the South, where whites and blacks couldn't coordinate to collectively bargain with owners and managers. Worse: whites can't even coordinate with their fellow whites and fight a one-team battle against the elites.

This ought to be the focus of the anti-diversity movement for the 21st century -- not the obvious conflicts that will erupt between different ethnic groups, but the corrosive and authoritarian effects it will have within the white group itself. Putnam's research and these various real-world phenomena show that there is no silver lining at all to diversity, not even an emboldened "Us vs. Them" mentality. Instead it results in "every man for himself," the worst possible scenario.

April 7, 2015

Big glasses babe du jour

Jan Smithers as Bailey Quarters, from WKRP in Cincinnati (circa 1980).

In more outgoing times, even the shy types wanted to connect with others, leading them to wear glasses with large inviting frames. Something like the awkward but well-meaning girl in the freshman dorm leaving her door open in the hopes that someone will drop by and interact with her.

Contrast with the narrow, beady-eyed glasses that are preferred by the mousier introverts of today (and during the last cocooning era as well, epitomized by the cat eye glasses of the '50s).

Some pictures from back when "geek chic" aimed to look welcoming rather than repellent:





April 3, 2015

Why do butch dykes copy the hair-do's of twink fags rather than men?

A popular but misguided view of homosexuality is the "opposite sex role" theory -- that gays are feminized and lesbians are masculinized.

I've shown in earlier posts that this theory fails to explain the full behavioral syndrome of gays, who are infantilized rather than feminized, and who only appear feminine in some ways because females are more neotenous (childlike).

The defining female traits of nurturing babies, keeping house, settling down, being a wet blanket, being a worry-wart, giving time to small local charities, etc., are alien to the male homosexual, who in fact behaves like a bratty girl-hating 5 year-old with a turbo-charged sex drive.

Normal men respond to gays not as though they were feminine, but as though they were an annoying and creepily over-eager toddler trying to join the big kids, one who can only be bullied away because he's too socially retarded to take a hint.

What about lesbians being masculinized? I find them harder to study because they don't stand out quite as much. But the butch dyke types sure do. Over Christmas I was standing behind a pair of lesbian parents and their utterly undisciplined children at the airport. The more feminine one had normal-looking medium length hair, and I expected the masculine one to have a man's haircut. I could tell from behind that it was short and parted, so that much checked out.

When she turned around, though, she had one of those severe sideways-pointing hair-do's with the sides and back shaved. The technical name is "undercut," although I find "gay whoosh" more descriptive.

Here are a few examples of this distinctly gay haircut on real-life gays:




And here are only a handful of many, many examples of twink haircuts worn by butch dykes:








If butch lesbians were simply masculinized, why wouldn't they look more like normal men? Why do they copy so specifically the grooming and even clothing habits of gay men, who look and act kiddie? Nothing kiddie can be masculine or macho, including that "I'm such a little stinker" smirk on the dyke at the end.

Maybe they want to look recognizably male, but only a degenerate and abnormal kind of male, to give the middle finger to straight society. And what more familiar model of abnormal male do they have ready to imitate than the faggot?

How ironic that in emphasizing the rejection of hetero patriarchy, the butch dyke winds up looking like a goofy little kid rather than the strong warrior she imagines herself to be.

April 2, 2015

Blame Jewish residents for awful foodie scene in Manhattan's Upper West Side?

A top-featured article from the NY Post reviews how bland, generic, and flavorless the restaurant scene is in the Upper West Side of Manhattan, and places blame on the demand side with the residents themselves. Restaurants that ought to do great business flounder in the UWS, while run-of-the-mill Chinese take-out will never die. Any place that tries to do something bold is immediately watered down to appeal to dull taste buds.

You don't have to read between the lines very carefully to see who the problem is among the residents -- it's primarily the Jewish palate that the Italian-American critic is blasting.

Savvy readers may have suspected this already, given that about 1/3 of the neighborhood's residents are Jewish. But the critic can't come right out and say that in the mainstream media (for reasons he likewise can't discuss openly). He did manage to drop a rather big hint toward the end, though, while quoting some other source (my emphasis):

And a new place that sticks to its guns must put up with what [Jewish restaurant manager Ed] Schoenfeld calls the “kvetch factor.”

On Christmas at RedFarm [Chinese food on Christmas], “A lady at the bar was counting people and seats to see who should get a table next.” She made a loud stink and “made my manager cry,” Schoenfeld recalls ruefully.

He asked her to leave — “I basically fired my customer,” he laughs. “You’d never see that downtown.”

Perhaps locals share lingering nostalgia for the days of Mexican beaneries and dairy cafeterias. Call it Karl Marx’s revenge on a neighborhood that prefers Gray’s Papaya to the eats that make this city the most famous dining destination in the world.

And this related hint:

Restaurants that bravely open with creative menus quickly dumb them down for proletarian tastes left over from the age when bearded “intellectuals” debated Sino-Soviet relations over refried beans, and “fine dining” struck West End Avenue sages as capitalist decadence.

Propagating and magnifying capitalist decadence is a Jewish specialty. Hence their sneering at "fine dining" is a sour-grapes defense mechanism to keep the world from noticing how sub-functional the taste centers in their brains are.

You saw something similar in their sneering at representational art, which had to be dumbed down into color field painting and the like. Or decorative motifs in buildings, which must be eliminated and exploded in the deconstructionist approach to, or rather retreat from, architecture.

This suggests that the lack of Jewish accomplishment in a domain of taste stems from a more fundamental weakness in basic perception, akin to a blind man who cannot paint. (Their low scores on tests of visual-spatial cognition have been documented and accepted for a while now.)

Why, though, do they insist on ugly art, brain-hurting buildings, and food meant for the barfbag (Mexican)? Why not just go with the flow and not make a big display out of your rejection of fine taste? It all traces back to their characteristically antagonistic stance in interpersonal relations, reflecting their genetic and cultural adaptation over the centuries to an economic niche as tax farmers, financiers, and other middleman roles.

Being upstaged by a bunch of dumb goyim is too threatening to the Jewish ego, so they turn it around, calling abominable what is delightful and seeking delight in what is abominable.

April 1, 2015

Homelessness and rootlessness out West

From a post at Movoto, here's a map of how the states rank on the size of their homeless population per capita:


The West Coast, Nevada, Hawaii, and Alaska are all in the top 10. All the Mountain states are in the top half of the nation, except for Mormon Utah. The northern Plains states are doing poorly too.

There are pockets of heavily homeless states back East, but not entire regions. Massachusetts, Vermont, and Maine are up there, but not so much New Hampshire, Rhode Island, or Connecticut. New York is plagued by the homeless, but the other Mid-Atlantic states aren't even in the top 20. Aside from New York, the only other centers of homelessness back East are in Florida and Georgia, and perhaps Tennessee.

Overall, though, the Deep South, Appalachia, the Midwest, and the southern Plains regions have comparatively small homeless populations.

These patterns reflect how deeply rooted the people are, with residents of most places west of the Mississippi River having shallower roots than folks who still live where the original American settlers did.

My hunch is that it's not just due to the shiftless, transient, and footloose tendencies of Frontier people, which would only apply to the professionally homeless. There are also those who are only temporarily homeless, and they ought to at least have family and friends to rely on for temporary relief, if the alternative is to live out of a car or on the street.

But where roots are shallow, people are less likely to have those connections. With less slack in the social system, a small accident is more likely to bring the whole thing down. Where roots are deeper, the norm is that "We take care of our own".

The data these maps were drawn from come from HUD, and reflect total homeless numbers. About 15% of the total homeless population falls into the "chronically homeless" group that we associate with drifters, bums, and the like. The rest are down on their luck, poor, lazy, addicted, or something else that makes them prone to occasional homeless living, without it being a full way of life.

I've downloaded the HUD data for myself, so if there's time, I'll re-do the rankings looking only at the chronically homeless, to see where the bum problem is the worst. Glancing over the numbers, it looks like that will tilt the rankings even more toward the West.
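If anyone wants to beat me to it, the re-ranking itself is simple once the point-in-time spreadsheet is loaded. Something along these lines -- the file names and column labels below are placeholders, not the real HUD layout, so rename them to match the actual files:

import pandas as pd

# Sketch of re-ranking states by chronically homeless per 100,000 residents.
# "hud_pit.csv", "state_population.csv", and the column names are placeholders.
hud = pd.read_csv("hud_pit.csv")           # columns: State, ChronicallyHomeless
pop = pd.read_csv("state_population.csv")  # columns: State, Population

merged = hud.merge(pop, on="State")
merged["chronic_per_100k"] = merged["ChronicallyHomeless"] / merged["Population"] * 100000

ranking = merged.sort_values("chronic_per_100k", ascending=False)
print(ranking[["State", "chronic_per_100k"]].head(10).to_string(index=False))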

The data are also broken down into smaller geographic units below the state level, so we can see which cities and metro areas are more over-run.