July 25, 2015

Being local vs. being rooted

The rise of the "buy local" phenomenon is understandable given the omnipresence of chain stores across the nation and corporate control over the economy. Normal people distrust mega-sized entities having so much influence over their everyday lives. Supporting a local business, even if it sells mass market products, or purchasing local products, even if it's at a chain store, allows ordinary people to push back against wealthy and powerful outsiders who are trying to gobble up the local ecosystem.

At the same time, we shouldn't mistake being local to a community for being rooted in that community. Depending on where you are in the country, a majority of the "buy local" businesses may not have been around for even five years, and may not survive another five years.

This is part of the broader pattern where businesses catering to trendy upper-middle-class tastes go through a much more rapid and volatile churning, as yesterday's stairmaster farm gives way to today's yoga studio, or as one shabby chic design store becomes too predictable and gets replaced by another, more refreshing shabby chic design store.

These locally owned and operated businesses do keep outside corporate influence at bay, but they do not provide local residents with a sense of belonging to a community. Feeling anchored within a place requires that place to be fairly stable itself. No one can feel secure within a house that becomes unglued from its foundation and keeps shifting all over the landscape.

Chain stores, too, may be here today, gone tomorrow — especially if they're chasing after those with large disposable incomes. In one of the main shopping centers in the area where I spent my adolescence, there used to be a Whole Foods that arrived sometime during the 2000s, but it has since moved 20 to 30 minutes away to a wealthier suburb. I don't think it lasted even 10 years.

As for places that are rooted, some may be local and quirky — such as the few mom & pop grocery stores still left — but others may be nondescript chain stores that have been there forever.

If it's been in place long enough, even a bland generic chain store may serve as an anchor for local residents' experiences and memories, in effect making it a one-of-a-kind and distinctly local place. Particular experiences of particular groups of people have unfolded there across several generations. Only to an outsider would the place look and feel "just like any other chain of its kind," blind to the social history within and around the built environment.

The inability to recognize one's outsider ignorance of local history and conditions stems from autism, which runs high among people who like to gab about architecture, geography, space, place, etc. And of course the handful of longtime residents who whine about nearby rooted businesses most likely never fit into the community in the first place.

"Why can't the city council just knock down those ugly two-story brick apartment buildings next to that lame barber shop and that useless hardware store? It could literally be a mid-rise tower with luxury condos above a Starbucks and a Panera on the ground floor. Oh wait, I forgot — the tacky locals here can't conceptualize how epic a mixed-use space would be. I swear, I'm getting out of this dump and moving to Minneapolis!"

It's disturbing to think of how much influence over commercial and civic life is wielded by this coalition of tone-deaf autistic outsiders and bratty local misfits.

Cohesive communities where folks have deep roots are not so easily destabilized, as the intentional lack of dynamism encourages the local brats to leave and makes it unattractive — BORING — as a target for colonization by trendoid transplants. Likewise, you can easily spot which areas to avoid living in, where none of the shops have been in place for more than five to ten years, mirroring the flux-driven anonymity among the residents.

On the practical side, how can these patterns be used to try to cement an otherwise rootless area? Rootedness would have to be an intentional plan at first. Start with something as "simple" as passing municipal legislation about turnover rates for commercial real estate. "Simple," assuming the will is there to begin with.

That would also go a long way toward solving the problem of high turnover in residential real estate, as only those residents who prefer stability will stay or move in.

Attacking residential turnover is hard to accomplish directly. People will rightfully chafe at having their residency regulated so closely by the government, even if it is in the best interests of communal stability. Regulating businesses for the greater community's interests is far more palatable politically.

Moreover, businesses, churches, libraries, and the like can remain in place indefinitely into the future and serve as anchors across the generations. A particular person living in a particular home cannot do that. Those who were close to them may remember their house as "the house where Mrs. Baumgartner used to live," but that doesn't spread out too far as second-hand or third-hand knowledge, and will not be understood by future generations who don't know who Mrs. Baumgartner was to begin with. But they all may have worshiped in the same church, spent summer days in the same pool, and eaten at the same mom & pop cafeteria across the street.

July 22, 2015

The Trump phenomenon

Finally, a populist candidate speaking truth to power. His campaign has only started to take off, so there will be plenty of interesting developments ahead. Here are a few observations so far.

There's always a worry with populist candidates that they'll back down or water things down once they're in office. Not with Trump, who is not a career politician. Being a winner to a politician means getting elected to the highest possible office for the longest possible time. Whether you do a terrible job or not doesn't matter.

Trump has little room left to climb on the status pyramid — why not shoot for leader of the free world? Well, what kind of status boost would it be to rule over an increasingly third-world shithole of a country? It would be an embarrassment. Trump's ego works in our favor, since he couldn't stand being known as the leader of a loser country, which is what we're going to wind up as if current trends continue. His megalomania wouldn't be satisfied until he made the nation something worth bragging about again.

Even if he ends up not winning the Presidency, his assault on PC bullshit of all types will have tremendous immediate results. It's like those classic experiments on conformity that Solomon Asch ran. You're asked which of two lines is longer, and one is very obviously longer than the other. However, before you get to answer, about a dozen other people give the wrong answer. Faced with the pressure of sticking out, a good portion of people (by no means all) will quietly go along with the wrong answer.

But when there's just one other person who speaks the truth before it's your turn, it completely shatters the conformity effect. You are guaranteed to give the right answer, ignoring all the other wrong answers and latching onto the single right answer. "Phew, so I'm not the only one who thinks everyone else here is nuts!"

Sure, there are a handful of people who will straight-talk like Trump in real life, but they don't have much of a megaphone and are not very numerous. They can shatter the conformity effect for a good deal of folks who they come into contact with, but not the entire society. And in case you haven't been paying attention, it's been a good 20 years (the culture wars of the '90s) since any major public figure made a point of declaring that the Emperor wasn't wearing any clothes.

The media and political establishment have no clue how to react, since Trump is pointing out a very clear and very public error that they've all made — all said that the shorter line was longer, and based policies off of that. They're used to everyone having drunk the kool-aid, and only arguing about, say, how much longer the actually-short line is.

Trump is not nitpicking within the range of the received error, he's pointing out that at the basic level, they've all got it backwards — that's the shorter line, dummy, not the longer one! What kinda moron can't see that? The establishment is either stupid, incompetent, blind, deceitful, lying, or something — how else can they explain failing such an easy question like illegal immigration or free trade deals?

What the whining classes take to be mudslinging and name-calling is just the opposite: it's revealing that the Emperor is wearing no clothes. They're used to a charge of "incompetent" meaning that the person bungled the job of flooding more immigrants into the country, or bungled the job of securing more power for corporations than for nations. The elites all have the same goals, and only name-call one another for not living up to their shared ideal.

Trump is interrupting the conversation to ask, point-blank, what the hell kinda ideal is that? — Mexican criminals pouring over the border, and manufacturing sent to China so we can buy a bunch of cheap junk? He isn't targeting a particular individual for failing to live up to an ideal that he shares with them. He's targeting the whole lot of them for sharing such a warped and twisted ideal in the first place!

Shifting the direction that society is headed in — what ideals it will pursue — is far more worrisome to the establishment than mere character assassination. Of course, they don't want to open up a whole debate about whether their ideals are worth holding or not, since hardly any ordinary citizens would take their side. They are left only with mud-slinging (something they do out of habit anyway), which only helps Trump out.

"Do you see? They know I'm right on the issues about where the country is headed, so they ignore that and try to attack my character instead. Cowards!"

Lastly, one common view is that the WASP-y founding stock here has grown too timid, bored, or brainwashed into liberalism that it will take someone on the periphery of white America to deliver us from the plague, a la Giuliani in New York during the '90s. No self-respecting Italian is gonna let some buncha darkie bums get in his way of cleaning up The City.

And yet ethnic / peripheral whites vote liberal. The Celto-Germanic mass of the population may be in a slumber, but they have a keen sense of how much has been lost from recent demographic changes. Italians, Poles, etc., may (or may not) be outraged at the influx of corrosive Mexicans, but that's more of a turf war over who makes a better group of immigrants. They aren't from the founding stock.

Trump is Scottish on his mother's side and Bavarian on his father's. His personality is certainly more on the Celtic side, and he's Presbyterian rather than Catholic. Hard to find a more mainline American candidate than that.

He's also born and raised in New York, where he has stayed into adulthood. That's way more than you can say for the rootedness of the likes of George W. Bush, a New England preppy who LARPs as a Texan, or John McCain, a DC-area native who took up his Wild West cosplay act in middle age. Yosemite Sam also chose his running-mate for cosplay points ("Alaska: The Final Frontier"), and all it did was prove to voters how flakey and shallow people are out West. Romney was born and raised in Michigan, but his heritage is Mormon — more weird Westerners.

I haven't voted since Nader in 2000 (volunteered during the campaign on campus, too), but now there's a far more likely populist candidate to get behind. I honestly never thought I'd be voting again until old age. Voting Trump in 2016 could be a once-in-a-lifetime opportunity to kick the Establishment right in the balls, and end their chances of reproducing.

June 28, 2015

Whites who are not transplants: Their prevalence across regions

To help us understand where different parts of the country are demographically, as well as where they're going in the future, we need a better picture than simply race.

Not that race doesn't tell us a lot -- we know the heavily white regions won't be plagued by as much violent and property crime as the heavily black and Hispanic regions.

But for a true sense of communal stability, the white people can't have all just shown up yesterday, from all over the country. Boomtowns that act as transplant magnets have shallow roots in the present, and may never develop roots into the future. Roots will only grow if everyone there today stays put, and not too many newcomers flood in.

If the boomtown keeps drawing in wave after wave of latter-day gold-miners, it hits a great big reset button and never allows existing roots to grow. And if the boom goes bust, the place turns into a ghost town. Either way, merely waiting a long time is not good enough for roots to be put down.

I've combined these two factors into a single measure: of the region's population, what percent of them are non-Hispanic whites who are native to their region?

The data come from the General Social Survey, which asks where you were living at age 16. The question about Hispanic background was only asked starting in 2000, so the picture here is valid for the 21st century. I restricted the population to those aged 25 and above, since transplants tend to take a while to finally move out to wherever they're staking a claim.

"Transplant" here means someone who's living in an entirely different region than where they grew up, not simply moving from L.A. to San Diego, or from New Jersey to New York. Small-scale migration won't be that destabilizing, but having a large share of your population coming from a different region does not bode well for local stability.
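The measure described above is simple to operationalize. Here is a minimal pandas sketch using a toy stand-in for the GSS microdata — the column names, region codes, and variable codings below are illustrative assumptions, not the survey's actual coding:

```python
import pandas as pd

# Toy stand-in for GSS-style microdata (hypothetical codings).
df = pd.DataFrame({
    "region":   [2, 2, 2, 2, 9, 9],   # Census region at interview
    "reg16":    [2, 2, 5, 2, 9, 3],   # Census region lived in at age 16
    "age":      [30, 40, 55, 22, 35, 60],
    "race":     ["white", "white", "white", "white", "black", "white"],
    "hispanic": [False, False, False, False, False, True],
})

# Restrict to adults 25+, since transplants take a while to settle.
adults = df[df["age"] >= 25]

# A "rooted white" is a non-Hispanic white still living in the
# region where they grew up (reg16 == region).
rooted_white = (
    (adults["race"] == "white")
    & (~adults["hispanic"])
    & (adults["reg16"] == adults["region"])
)

# Share of each region's adult population that is rooted white.
share = rooted_white.groupby(adults["region"]).mean()
print(share)
```

On this toy data, region 2 comes out to two-thirds rooted whites and region 9 to zero; with the real survey file, the same grouping over all nine Census regions produces the ranking charted below.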

This chart ranks the Census regions by their share of non-Hispanic, white, regional natives:


Here is a map of which states are included in which region:


Far and away, the eastern part of the Midwest has the highest concentration of rooted whites. Quite simply, a large share of the population is white, and whites are most likely to be natives in this region. Plenty of folks may move out of Michigan, but nobody moves into Michigan.

New England and the western part of the Midwest also have heavily white populations -- in fact, slightly more white than E.N. Central -- but their whites are twice as likely to be transplants (24% vs. 12%).

The Mid-Atlantic suffers from only 70% being white, although they make up for that somewhat by having the second-highest rate of staying put.

Southern Appalachia (E.S. Central) has a decent-sized white population at 80%, but over one-quarter of them are transplants.

The other Southern and Western states are all about tied for last place, for varying reasons.

The South Atlantic has a decent share of blacks and Hispanics, and about one-third of the whites are transplants. The greater Texas region has even more blacks and Hispanics, but their whites are somewhat less transplanted (over one-quarter).

The Mountain states suffer from the highest rate of transplant-ism among whites (barely 50% of whom are native to the region), although non-Hispanic whites do form three-fourths of the population. And of course the West Coast is bound to be dead last: just under 60% are non-Hispanic whites, and of these, only 67% are regional natives.

For now, I'm not going to use this geographic pattern to explain other patterns, or try to account for the present one. Before doing that, it's worth wrapping your mind around a simple description of which parts of the country have higher and lower concentrations of rooted white people.

GSS variables: regtrans (from reg16 and region), region, race, hispanic, age, year

June 25, 2015

New horizons on the dissection of deviance: latent transgenderism and female bisexuality

A couple years ago I solved the puzzle of what male homosexuality boils down to — arrested social-emotional development during childhood, while still undergoing bodily hormonal changes during adolescence. The end result is someone who's sex-crazed but still thinks, feels, and acts like a little boy, who of course finds girls yucky and directs his newly emerging libido toward the non-yucky sex (other boys). All other facets of the male homosexual syndrome stem from this psychological stunting mixed with hormonal maturation.

To review the evidence, search the older posts for terms like gay Peter Pan-ism, gay pedomorphy, gay neoteny, etc.

But gays are so 2013. This year it's time to dissect other sacred victim groups in order to see what they're really like, too.

Several upcoming posts will look into female bisexuality, where the upshot seems to be that it is the same kind of syndrome as male homosexuality, only lesser in severity and duration, perhaps owing to female biology being more robust against environmental insults. In other words, the same agent that causes male homosexuality causes female bisexuality, although females show a better prognosis just as they do when males and females both get infected with the flu virus.

This inverts the common but whacko conclusion that female sexuality is more easily disturbed or warped — more "fluid" — because it supports a continuum of hetero, bi, and homosexuality, whereas the supposedly more stable male sexuality is either gay, straight, or lying ("bi").

In fact, bisexuality in girls shows how much more relatively stable their nature is to perturbations from pathogens, toxins, or whatever else. They don't become full-blown gay, don't show the same level of deviance as male homosexuals, and largely recover back to heterosexuality during their 30s.

The dismissive way of describing this is that their bisexuality is just a personality phase they go through during their hormone-crazed youth. But why don't the majority of girls go through that personality phase? Clearly it's not just any old phase but a stage of a profound disturbance. It is more accurate and more humanizing to view it as an illness that they thankfully recover from fairly early.

The next area of deviance to be dissected will be what I term "latent" transgenderism, for lack of a better term. J Michael Bailey has already written extensively on the homo and non-homo forms of transgenderism that typically accompany transvestism (cross-dressing), where the person feels as though they have a distinct "gender identity" (which they may feel compelled to "come out" about to others). I'd probably have to dig down further in all of his writings to see if there's anything for me to add.

However, there seems to be a rising level of latent transgenderism, where someone has erotic fantasies about being a member of the opposite sex, with a partner of the opposite sex. That is, a guy's fantasy involving two or more girls — and not a single man — or a girl's fantasy involving two or more guys — and not a single woman.

These are the types of guys who masturbate to girl-on-girl pornography, and the types of girls who get off on reading slash fiction — two related genres that were virtually absent even within pornography and erotica just a few decades ago, when it was all male-female (or male-male for a gay audience, though not for a female audience).

Unlike overt transgender cases, the latent cases don't feel as though they have a distinct gender identity or sexual orientation, and therefore have nothing to hide from others, feel shame about, or have to confess / reveal / come out about. They aren't going to call together a family meeting and say:

"Mom, Dad, there's something I think you need to know about who I am inside... I've been struggling about how to formulate this, both to myself and to the rest of you. But there's no point in denying it any longer — I fantasize about being a hot nubile babe who gets to make out with other hot nubile babes. No, you bigots, I have no plans to dress up as a girl or try to pass myself off as one. That's a cross-dresser, or transvestite, you clueless old farts. I simply get off on the thought of being the girl in a girl-on-girl affair."

Whether it is the male or female case, the latent transgender seems to be primarily driven by a paralyzing fear of approaching, interacting with, and consummating a relationship with someone of the opposite sex, as a member of their own sex. That is to say, the prospect of a male-female courtship, mating dance, etc., scares them to death. However, they are still attracted to the opposite sex and have urges to be physically intimate with the opposite sex. The only solution, to their temperament, is to approach a girl as a girl, or a guy as a guy.

It is as though these guys think that girls will let their guard down around other girls, and (at least some of them) will be open to making out with their fellow girls, particularly if both are attractive, hormone-crazed, and novelty-seeking youngsters.

The emotional release comes not from the thought of penetrating and climaxing inside of the other girl — impossible when you imagine yourself as or identify with the first girl. Rather, it comes from the relief of not having to go through all the normal pas-de-deux moves, not having to overcome even the slightest set of obstacles. Granted, you only get to imagine yourself fondling, making out with, going down on, or circle-jercking the other girl, but this lower reward is more than made up for by the thought of not having to put any effort into the seduction.

It is thus a symptom of profound risk-aversion, crippling social awkwardness, and self-centeredness (not having to adapt, improve, or contribute anything of one's own in order to couple with another).

While normal male fantasies involve being a more attractive, charismatic, and aggressive version of yourself, latent transgenderism is a change of kind and not only an exaggeration of degree. If they were only driven by a desire for minimal obstacles in the way of getting laid, then they would fantasize about being the pretty boy lead singer of a famous rock band, who throngs of eager groupies would be throwing themselves to every night.

Instead, they are so sensitive to rejection that they fixate on not even being perceived as male in the first place — that way, the girl's behavioral shield will not go up, and her warning system will not alert her to yet another random guy trying to get into her pants.

For the latent transgender male, it is not enough for the girl to register that there's a guy wanting to get into her pants, yet wanting him to do so, as in the fantasy about groupies, or something more normal like one where that cute girl from math class reveals that she's always had a crush on you and wants you to make your move as the guy. His paranoia toward girls means that he must fly completely under the radar, requiring him to assume female form in his fantasy.

The same analysis can be applied to the female reader of slash fiction. She is too awkward, paranoid, and unattractive to fantasize about a normal boy-girl affair, perhaps one in which she's more pretty and confident than in real life. She fixates on the hot guy not even perceiving her as a girl, which might set off his warning system about some random girl trying to make a pass at him. If she were a hot guy herself, then she could slip right past his detection grid.

Naturally, not all guys will be open to experimenting, but that goes for the male fantasy about girl-girl hook-ups as well. They just need to fantasize about that one unusual girl who would make out with a girl, or that one guy who would make out with a guy.

This latent form of transgenderism is important not only because it is far more common than the overt form, but because latent fantasies may influence overt identity. Jerking off to girl-on-girl porno scenes is fundamentally a dream about achieving perfect sexual mimicry in order to deceive the target girl and reach climax without her suspecting that you had any sexual intentions on her all along.

Yet the more you fantasize about becoming a perfect sexual mimic, the more you are likely to incorporate distinctly opposite-sex traits into your own gender identity. Maybe that will lead only to a vaguely androgynous personality and set of behaviors, or maybe it will lead to a deeper dysphoria and anxiety that will disrupt their own life and the stability of those around them.

Such fantasies are seemingly banal because they don't involve transvestism, exhibitionism, or narcissism in the way that overt cases do. It would not even make sense to call these latent cases "trannies" since they don't make a point out of looking or acting like the opposite sex.

But the easily credible potential for latent transgender fantasies to gradually erode a normal and healthy gender identity and sexual orientation means that we ought to give them a more serious clinical look, again even more so when you consider how common they have become among young developing minds in the past generation.

June 7, 2015

Gay take on erotic thriller genre reveals mental illness at root of homosexuality

Here are some comments I left at this review of a movie, Stranger by the Lake, that tries to homo-fy the erotic thriller genre. It touches on many aspects of the gay deviance syndrome -- Peter Pan-ism, power and humiliation, extreme negligence of personal health, etc.

The basic plot of the movie is that the main character is so cock-hungry toward a man whom he has secretly witnessed killing a previous lover, that he keeps coming back for more, despite the obvious danger to his own life. The tension arises from him, and us, not knowing whether the next time will be his last.

But the similarities with normal erotic thrillers are only superficial. The gay version is marked by its fundamentally abnormal psychology.

- - - - -

“The film also won the Queer Palm award.[5]”

I see they’ve gone and re-branded the Palm on the Underage Ballsack award. Is nothing sacred anymore?

About it being an erotic thriller — not at all, and its failings reveal the profoundly warped nature of male homosexuality.

In an erotic thriller like Basic Instinct, the tension arises from the male protagonist’s curiosity about a woman who seems as capable of violence as a man, and wanting to square off against her toe-to-toe. Her violent tendencies intrigue him rather than frighten him — it’s not like she’d be able to take on me.

He likes to scratch, she claws his back. He half-rapes his girlfriend, the killer babe ties him up in bed and aggresses against him. Worthy fucking adversary.

That’s the erotic thriller: alpha or usually wannabe alpha male seeks his thrills by competing against the femme fatale, uncertain of which combatant will ultimately one-up the other for good. It’s the guys who get a rush from taunting a girl to “hit me with your best shot, honey”.

The fraidy-cat twink in this queer-directed movie doesn’t play that role at all. He doesn’t see the killer as his equal, nor does he want to get his kicks from jockeying for position, as it were. He’s frightened by him, realizes he could be the next victim, but is so empty and desperate that he’ll pursue a quick fix at any cost, having to block out those rational fears for the couple of minutes it takes for the killer to blow his diseased wad up the sissy’s butt.

So, completely opposite of the contest between equals in the erotic thriller, the gay killer fantasy is based on one of them having total power and the other showing total submission, perhaps to the extreme of being killed by the other.

Dudes fantasizing about sexually wrastlin’ with women is shameful, but it’s not a sign of being severely fucked in the head. Gay fantasies, on the other hand, always reveal profound mental illness. The winner of the Queer Palm award is trying to romanticize what is soulless, and to aestheticize what is disgusting and ugly.

Thus it’s possible for the protag in an erotic thriller to be tragic, his downfall stemming from arrogantly tempting fate by daring the femme fatale to take off the kid gloves and hit him for real. I don’t know of an example that actually tries to make him tragic, let alone succeed at it, but at least it’s possible, and the basic idea comes across in any good erotic thriller, like Basic Instinct.

The victim in the gay killer fantasy flick is not brought down by any kind of hubris, but by an extreme form of negligence. He knows full well how violent the other guy is, how likely he is to wind up as his next victim, but he’s just gotta have his cock fix.

That’s no more tragic than some junkie continuing to shoot up knowing damn well what the substance will ultimately do to him. It’s pathetic, disturbing, and makes a normal person want to lock him up in a supervised facility where he can no longer harm himself.

We don’t respond that way to the arrogant tempter of the femme fatale — arrogance implies a certain degree of maturity, so it’s his own fault that he got killed by the psycho (“I tole you dat bitch crazy”).

But pretending that a real and imminent danger will somehow magically go away, is just infantile. Our reflex is that this person isn’t totally responsible for what’s happened to them, because their mental development has been arrested or retarded.

We don’t get satisfaction from seeing them met their demise — satisfaction in the sense of righteous vindication. Maybe we’re generically sad, maybe we’re just glad the junkie has kicked the bucket and won’t be around to bother us with his self-destruction any longer. Either way, there’s no happy ending to the gay killer movie.

June 1, 2015

Tranny fakeness revealed by Bruce Jenner's inauthentic nom de poon "Caitlyn"

I dropped by Blind Gossip to learn that Kim Kardashian is upset at someone named Caitlyn Jenner for upstaging her news about being pregnant again, on account of landing the cover of Vanity Fair.

"Caitlyn" Jenner? Never heard of that one before, let's just have a look and see why her cover shoot would be such an — OHHH MY GAAAHD.

Everybody's cracking jokes about the name because it's so ridiculously young for a 65 year-old. There was a Caitlin in my grade, but she was ahead of the curve; it's a distinctly Millennial name (so is anything with -lynn or -line).

If Bruce wanted to pass as authentic, why not choose some Boomer name like Brenda or Barbara? I'll bet he was thinking of Briana, another distinctly Millennial name, but thought better of making his new name sound too similar to the old one.

Choosing a currently trendy name is common among performers who are assuming an alternate persona.

Take '80s dance-pop singer Taylor Dayne — she has yet another distinctly Millennial name, but she was really born Leslie Wunderman in 1962. Taylor was a popular baby girl's name in 1987 when she hit it big. Leslie Wunderman wanted to sound young and trendy, so she jumped on the Taylor bandwagon.

Young women trying to make a name for themselves in porno also go with what's trendy, even if it means a 20 year-old having the name of a small child — Aubrey, Chloe, Isabella, etc. But all the action in naming fashions takes place among mothers of newborns, so striving for trendiness will make your name sound juvenile, even if you're already fairly young to begin with.

Back to Bruce Jenner, these other examples show that he's just acting out as a performer who's whoring for attention from an audience. It's all a great put-on, and he knows it.

If he had always truly felt as though he were female, he would've chosen a secret female name way back in the day, and it would necessarily sound old all these years later. But he picked "Caitlyn" yesterday, not when he was a kid.

For all we know, he's been cycling through secret female names for decades, swapping out the old ones according to fashion — Chrissy Jenner in the '80s, Lauren Jenner in the '90s, Ashley Jenner in the 2000s, and now Caitlyn Jenner. What a pathetically unconvincing fraud and weirdo.

May 27, 2015

Transplants more disconnected from family-by-marriage

An earlier post showed that inter-regional transplants are less connected to their extended blood relations. What about their extended family through marriage?

The General Social Survey asks "how often you have been in contact with" various people you're related to, by blood or by marriage, during the past four weeks. I graded responses as simply having any contact or having no contact at all, to keep the findings unambiguous -- no contact at all within the past four weeks is pretty socially disconnected. (The differences are even starker when looking at those with more frequent vs. less frequent contact.)

I've restricted the sample to whites in order to keep kinship norms similar across respondents, and I've compared natives to transplants within three separate class levels (shown by years of education: 0-12, 13-16, and 17-20). I only looked at respondents who actually had a living relative of the type asked about.
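
The recoding described above can be sketched in a few lines of pandas. This is a toy illustration, not the actual analysis: the column names mirror the GSS variables, the data are made up, and the assumption that one particular code means "no contact at all" is mine (the real codebook would need checking).

```python
import pandas as pd

# Toy stand-in for a GSS extract; a real analysis would load the survey file.
# Assumption: in the contact items, code 7 = "no contact at all" in the
# past four weeks (check the codebook before trusting this).
df = pd.DataFrame({
    "race":       ["white"] * 6,
    "educ":       [10, 12, 14, 16, 18, 20],
    "transplant": [0, 1, 0, 1, 0, 1],
    "sibinlaw":   [2, 7, 1, 7, 3, 2],   # 7 = no contact at all
})

white = df[df["race"] == "white"].copy()
# Binary recode: any contact vs. none at all
white["any_contact"] = (white["sibinlaw"] != 7).astype(int)
# Three class levels by years of education: 0-12, 13-16, 17-20
white["class3"] = pd.cut(white["educ"], bins=[-1, 12, 16, 20],
                         labels=["lower", "middle", "upper"])
# Percent in contact, natives vs. transplants within each class
table = (white.groupby(["class3", "transplant"], observed=True)["any_contact"]
              .mean() * 100)
print(table)
```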

First, people who have moved across Census regions between adolescence and adulthood are far less likely to be in contact with their brothers- and sisters-in-law:

Percent in contact with sibling-in-law

Class: % natives ___ % transplants

Lower: 72 ___ 54

Middle: 71 ___ 50

Upper: 73 ___ 69

Transplants are also much less likely to be in contact with their parents-in-law, and the differences are similar in magnitude to those for siblings-in-law:

Percent in contact with parent-in-law

Class: % natives ___ % transplants

Lower: 67 ___ 56

Middle: 72 ___ 49

Upper: 78 ___ 76

You'd think that transplants would be better able to keep in contact with their family-in-law than their blood family. Their blood relations were left behind by the very act of transplanting, but their spouse's family may be from the same region that the transplant moved to.

Instead, there seems to be a greater general aversion that transplants have toward extended family, whether they are by blood or by marriage.

Not wanting to be rooted by geography goes along with not wanting to be rooted by kinship either. Let me do whatever I want, with whomever I want, wherever I want. It's no wonder the rootless West is so libertarian and on the brink of collapse.

GSS variables: sibinlaw, parslaw, regtrans (reg16, region), race, educ

May 26, 2015

Broken homes more likely for children of transplants

An earlier post showed that transplants are less connected with their extended blood relations than natives. Does that hold even for their closer relations in the nuclear family? Let’s look now at whether transplants are more likely to be raising their children in a broken home, i.e. without both birth parents.

The General Social Survey allows us to study transplants in a purer form than simply “raised in the ‘burbs, moved to the nearest city”. The GSS only asks for the respondent’s Census region -- New England, Pacific, West North Central, etc. -- so “transplant” here means moving across regions, implying a greater degree of deracination.

I’ve limited the focus to whites, since family structures vary wildly among whites, blacks, Asians, and Hispanics. I’ve also made the comparisons between natives and transplants within three separate class levels, based on number of years of education (0-12, 13-16, and 17-20).

Looking at respondents with children, the marital status of regional transplants is identical to that of natives.

1. Percent married among those with kids

Class: % natives ___ % transplants

Lower: 75 ___ 75

Middle: 78 ___ 79

Upper: 85 ___ 85

At first glance, then, children appear equally likely to grow up with a set of married parents, regardless of whether the parents are transplants.

However, the GSS also asks respondents who say they’re married whether they’ve ever been divorced. Now the differences show up.

2. Percent divorced among those married with kids

Class: % natives ___ % transplants

Lower: 20 ___ 26

Middle: 19 ___ 25

Upper: 16 ___ 17

Combining these two tables into one, we see that transplants are more likely to be married with children yet previously divorced (although not for the upper class).

3. Percent married, never divorced among those with kids

Class: % natives ___ % transplants

Lower: 60 ___ 56

Middle: 63 ___ 59

Upper: 71 ___ 71
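
The third table follows arithmetically from the first two: the share married-and-never-divorced is the married share times one minus the divorced-given-married share. A quick check with the middle-class figures from the tables above:

```python
# Combine tables 1 and 2: % married-with-kids-and-never-divorced
# = % married * (1 - % divorced among the married).
# Figures are the middle-class rows from the two tables above.
married  = {"natives": 78, "transplants": 79}   # table 1
divorced = {"natives": 19, "transplants": 25}   # table 2

for group in married:
    intact = married[group] * (1 - divorced[group] / 100)
    print(group, round(intact))  # natives -> 63, transplants -> 59
```

Both rounded results match the middle-class row of table 3.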

So despite the initial impression of children of transplants growing up with married parents, it turns out that their household is more likely to include a step-parent than the households where parents are natives.

Were the transplants themselves more likely to have grown up in a broken home, and perhaps they’re just passing along a genetic predisposition? Somewhat, but not entirely. The next table shows the likelihood of having grown up in a broken home for natives vs. transplants, where the classes are now based on the education level of the respondent's father.

4. Percent growing up in broken home

Class: % natives ___ % transplants

Lower: 12 ___ 14

Middle: 11 ___ 13

Upper:  9  ___ 13

Transplants are more likely to have grown up in a broken home, but the differences are only half as big as in the previous table. So the transplants are partly passing on a tendency toward raising children in broken homes that would have shown up whether or not they stayed put in their region -- but just as much of the effect on their kids comes from the parents being transplants themselves.

Transplants with advanced degrees (17-20 years of education) are an exception here, as they were more likely to grow up in a broken home yet are just as likely to be raising their own kids in an intact home.

Overall, though, having moved to a different region from the one you grew up in increases the risk of your children growing up in a broken home. Thus the destabilizing effects of migration on the bonds of kinship are not limited to the more distant, extended family ties, but extend even to those between parents and children, albeit to a lesser degree than the damage done to extended family ties.

GSS variables: family16, marital, divorce, childs, regtrans (region, reg16), race, paeduc, educ

May 24, 2015

Housing bubble fueled by transplants

Analyses of the housing bubble have looked at homeowners' race and ethnicity, age, income, and other standard demographic variables as factors that contributed to the boom and bust in home prices. Giving a dirt-poor Mexican strawberry picker a loan for a million-dollar home? Probably not going to ever get that back.

And yet all these studies ignore the major trend in living patterns over the same period — being a transplant. It's a phenomenon that everyone knows about, and which is confirmed by the data on migration between one's birth state and current state. But since most of the folks who think about these big problems are transplants themselves, they are prevented by cognitive dissonance from exploring the destabilizing effects of migration.

The General Social Survey asks respondents if they own or rent the place where they live. It also asks what Census region they were living in at age 16, and where they are living currently. I made a transplant variable that simply spots any difference between the region where they grew up and where they're living now. Note that this definition of "transplant" is not merely someone from the 'burbs moving to the nearest city, but someone who hails from a completely different region of the country — say, raised in New England but living along the Pacific.
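
The transplant variable described above is a one-line recode. A minimal sketch, using hypothetical pandas column names that mirror the GSS variables reg16 and region, with made-up rows:

```python
import pandas as pd

# Hypothetical mini-extract; column names mirror the GSS variables
# reg16 (Census region at age 16) and region (current Census region).
df = pd.DataFrame({
    "reg16":  ["new england", "pacific", "mountain", "pacific"],
    "region": ["pacific",     "pacific", "mountain", "new england"],
})

# A transplant is simply anyone whose current region differs from
# the region they lived in at age 16.
df["transplant"] = (df["region"] != df["reg16"]).astype(int)
print(df["transplant"].tolist())  # -> [1, 0, 0, 1]
```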

I've restricted the respondents to whites only, since we already know about the outsized role that Hispanics and especially immigrant Hispanics played. If there were only a racial / ethnic angle, studying natives vs. transplants among whites shouldn't show much of a difference. If on the other hand Hispanic immigrants were just a special case of a more general pattern about transplants, then we'll see something after all looking just at whites.

I re-ran the comparisons looking only at native-born whites, and native-born non-Hispanic whites, and the conclusion did not change. So I'm only narrowing it down to just "whites" to keep sample sizes as large as possible.

First, the homeownership rates for regional natives vs. transplants, surveying the nation as a whole (natives in blue, transplants in orange):

[Chart: homeownership rates by survey year, nationwide -- natives in blue, transplants in orange]

The long-term baseline for regional natives seems to be about 70%, and about 63% for transplants, a rate that is 10% lower. This shows that transplants are not just switching regions and then planning to stay put in their adoptive region. Mostly they are, but they're more likely than natives to only be renting — just in case they have to bail and switch locations again.

At any rate, the data for the natives shows only one survey year of dramatic rise in homeownership — 2006 — and a pretty quick return to the baseline by 2010. The burst went from 72% to 80%, or an 11% jump. The boom-and-bust cycle is evident, but not very extreme.

The case of transplants, however, could not be more dramatic. Their burst shows up already by the 2004 survey, and does not return to baseline until 2014. Their boom lasted far longer than for natives, indeed the better part of a decade. Moreover, it was a wilder departure from the historical norm — soaring from about 64% to 80%, or a 25% jump. So the sudden burst for transplants was more than twice as great as it was for natives.
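
The relative sizes of the two bursts quoted above come from simple percent-change arithmetic on the figures in the text:

```python
# Relative size of each group's homeownership burst, using the
# baseline and peak rates quoted in the text.
natives_jump     = (80 - 72) / 72 * 100   # -> ~11%
transplants_jump = (80 - 64) / 64 * 100   # -> 25%
print(round(natives_jump), round(transplants_jump))
```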

Now let's zoom in on the most heavily inflated and then heavily devastated region, the Pacific (the vertical scale is now twice as big as before):

[Chart: homeownership rates by survey year, Pacific region -- natives vs. transplants]

Overall natives and transplants out West have closer rates of homeownership than back East. And it's not because of higher rates out West among transplants, but much lower rates out West among natives. It's one of the few places where natives and transplants are equally, and minimally, invested in continuing to live in their region. "San Francisco, Portland, and Seattle: Even our natives are fickle."

In any case, the boom and early bust were not so different between natives and transplants out West, although the decline has been much steeper among transplants. The major differences are the beginning and end points of the entire data series: transplants in 1985 had homeownership rates just above 40%, and despite a long detour toward 75%, they are now right back to where their counterparts started 30 years ago.

Of course, out West the white transplants played less of a role because there were hordes of Mexican transplants eager to take the unpayable mortgages that Americans just wouldn't take.

The upshot is that the housing bubble was primarily a transplant phenomenon. Natives to a region experienced a much shorter and smaller increase in homeownership rates. The ludicrous boom and bust activity was driven by transplants to a region — Mexicans out West where there were plenty nearby, or white transplants in the rest of the country where there were few Mexicans.

I don't read much economic literature, but judging from what I've been exposed to over the past seven years on the internet, this is the first look into homeownership rates over time for natives vs. transplants to a region, and the first to lay the blame for the bubble on migration in general and the rootless boom-towns it creates.

This doesn't explain why transplants were more susceptible to the boom-and-bust cycle, but it's easy to see some of the reasons. They're more likely to be strivers, for one thing, hence more willing to jump on a bandwagon.

Their main weakness in my view, though, is their ignorance of local conditions and their history. I remember right at the peak of the housing bubble, my mother said she couldn't believe how high the home prices were getting in her neighborhood, and that she didn't believe the homes were really worth that much. By that point, she'd been living there for nearly 15 years and had a long-enough history of impressions to judge from, even if she lacked a rigorous time series of real estate data.

A couple from outside the region who bought a home just across the street from her in 2007, on the other hand, didn't arrive with any lasting memories for an intuition to emerge from. As far as they knew, it was just the price of moving into that neighborhood — maybe a little high historically, but nothing really weird, and hey, maybe the neighborhood had been under-valued before, and the transplants like them were simply revealing what it was truly worth.

Sadly, they will never get out of the house what they paid for it. They would probably lose around $70-80,000 if they sold and moved, and that's with home prices having recovered and gone up somewhat since the nadir of the early 2010s.

Let that be a cautionary tale about the value of knowing a place like the back of your hand. The footloose gold-rush lifestyle will pay well for a tiny handful of lucky ones, but it will ruin most of the strivers, who simply do not know what they're getting into.

GSS variables: dwelown, year, regtrans, region, reg16, race, hispanic

May 23, 2015

Another cosplay fanfic approach to music videos ("Bad Blood" by Taylor Swift)

Earlier we saw the empty and jarring results of the cosplay fanfic approach to making music videos in "Fancy" by Iggy Azalea, where a character from the original movie Clueless is aped by a singer whose persona is the exact opposite.

Now there's the video for "Bad Blood" by Taylor Swift. It mimics Mission Impossible, Charlie's Angels, The Matrix, The Fifth Element, Tron (the Daft Punk one), Sin City, and pretty much anything else that the director got a boner to over the past 20 years.

Does Taylor Swift's persona lend itself to a femme fatale / film noir role? Of course not: she's an awkward virgin who only "dates" high-profile fags. Nothing seductive or man-eating about that. Ditto for the other girls and women, who are self-consciously striking poses like cosplay attendees at a nerd convention.

Does anything in the song's lyrics lend itself to an over-the-top apocalyptic spectacle? Nope: it's about some gay little tiff that she and some other frenemy have gotten into. Only to middle schoolers is that the end of the world as we know it.

Since no one wants to do anything cool and new anymore, we can expect to see more of this approach -- throwing together a bunch of references and allusions to pop culture that the audience has already masturbated to. Only now they get to masturbate to it in an unexpected setting -- a Taylor Swift video, a Family Guy episode, a new Star Wars movie, etc.

Rather than add to the variety of things you enjoy, the point here is to multiply and maximize the masturbatory value of the things you already like -- to obsess over them, over and over again.

Pop culture is quickly becoming one great big breath of stale air.

May 20, 2015

Generational splits in being assertive, passive, or just plain awkward

Those who have spent much time interacting with Millennials have noticed how withdrawn they are. The average member won't initiate anything, whether social (getting to know new people) or mechanical (that bamboo is starting to look gnarly in the back yard, better clear it out).

This has led casual observers to describe the generation as passive, but that term really means that the person will pitch in and perform various tasks once someone else — the initiator or instigator — has gotten the ball rolling first. They are willing, perhaps even eager, to join in an activity — they just can't start it.

Yet Millennials are not only incapable of kicking something off, they fumble the ball once it has been perfectly thrown to them. Beyond being anxious about introducing themselves to new people, they don't know how to respond to someone else introducing themselves first, let alone how to keep the back-and-forth going so that the result is a relationship rather than a mere encounter.

They don't know how to act, but they don't know how to react either. They're just plain awkward, and it keeps them from developing a normal system of relationships.

"Passive" would actually be a better description of the average Gen X-er. As long as there's an instigator around, X-ers are perfectly comfortable joining in the mischief. Or accepting a subordinate role in a hierarchy, under a leader, mentor, or guide. Do you want to go bowling? "If you guys are going, sure." Where do you want to go tonight? "I dunno, I'm cool with whatever." Yeah, me too.

It normally doesn't devolve into the blind leading the blind because despite the majority tendency, there's always at least one leader or instigator in their social circle.

That leaves the Boomers as the assertive ones. There's a lot more playful, half-serious ribbing and joshing among them because they're all trying to assert themselves and have the others in the group be subordinate. They're more willing to be creators, while Gen X prefers to be fans.

You see this clearly in stereotypes about husbands. The stereotypical Boomer husband was cheating on his wife with a secretary or waitress, endangering his marriage to assert his libido. Gen X husbands are more likely than Boomers to see their role as the dopey dad and the henpecked husband, whether they resent that role or are cool with it, y'know, as long as the wife is cool with it.

The stereotypical Millennial husband is neither an assertive nor a passive partner in the marriage. Millennial husbands and wives are more like gender-non-specific housemates who occasionally have genderless sex. None of the household tasks get taken care of because neither is capable of being the leader or the follower in getting them done. Maybe if we both ignore the bamboo jungle in the back yard, it will be nice and just go away to infest some other home.

What underlies these differences seems to be how much of their social development, say ages 5 to 25, took place in an outgoing (1955-1990) vs. a cocooning period (since 1990). Boomers, particularly the later ones, developed entirely within an outgoing climate, which allowed them to reach an adult level of assertiveness.

Gen X developed partly during an outgoing climate, but also during a cocooning climate in early adulthood or even adolescence. That allowed them to mature beyond childish awkwardness, although still retaining more of an adolescent approach of "I'm up for it if you are". That effect is more pronounced among the later births in the generation.

The poor Millennials who grew up entirely in cocooning times, under helicopter parents no less, never even made it to the adolescent stage. Now that they're nearing 30, they realize that they're supposed to be able to take part in the back-and-forth, following either an assertive or passive role, and some of them are making a conscious effort to practice. But at a gut level, their instinct is still to just stand there and go, "OK, so now I guess we, uh.... well, this is awkward..."

May 15, 2015

Drawing generational boundaries from slang and other meaningless traits: An evolutionary view

The standard intellectual approach to defining generations is to lump together those individuals who all went through some key event or series of events during the same stage of their life. The point is that what shaped the members of a generation, and what binds them together, is meaningful — growing up in Postwar prosperity, as children of divorce, as digital natives, etc.

In an informal setting, though, the shared traits are not so meaningful. What's your slang word for "very good" — is it "peachy keen," "groovy," "sweet," or "amazing"? (Those are for the Silents, Boomers, Gen X, and Millennials.) What were the popular colors for clothing when you were in high school? What are your favorite pop songs of all time? Who was your first huge celebrity crush?

These are not the kinds of people-molding forces that social scientists propose.

Sometimes the two approaches will draw similar boundaries — if you were a child of the Postwar prosperity, and born during a time of rising fertility rates, you are also familiar with the phrase "groovy," at some point in high school you wore orange and blue on the same day, one of your favorite pop songs is "Happy Together," and you had a huge crush on Mary Ann or Ginger.

But other Powerful Societal Forces don't affect people for a limited time to produce a tightly bound generation who underwent the effects. They are long-term trends along which people born in one year or another simply came of age during an earlier or later phase of the single ongoing process. Suburbanization, ethnic diversity, mass media saturation, and so on.

Gen X and Millennials, for example, don't look like distinct generations when you look at those three factors — they look fairly similar to each other, and different from the Greatest Gen, Silents, and even Boomers. And yet their group membership badges are totally different — slang, "you can't listen to that, that's our song", or tastes in food (way more toward Mexican and Chinese slop among Millennials).

When the X-ers and Millennials vote against the "boo taxes" government of the Silents and Boomers, it will be a case of politics making strange generational bedfellows, not similarly shaped generations on either side warring against their antitheses.

The choice of whether to carve out groups based on a meaningful or arbitrary trait shows up in evolutionary biology and historical linguistics. Both fields prefer shared traits to be arbitrary. If lots of individuals share something seemingly arbitrary, it's probably because they come from a common origin where that just happened to be the norm. That's why geneticists look at neutral DNA to determine ancestry.

If lots of people share something meaningful, like dark skin, that could be due to similar meaningful pressures acting on two unrelated groups, like the Africans and New Guineans both evolving dark skin as an adaptation to tropical climates, despite being distantly related on the whole genetically.

Likewise in the history of language, if two groups share a word for something functional like "internet," that could be because two unrelated groups both adopted the phrase when they adopted the technology, in recent times. If they share a word for the number "four," that can't be chalked up to similar pressures making the groups speak similarly. They must descend from some common ancestor where the word for that number just happened to be pronounced "four".

Aside from the theoretical motivation for using arbitrary traits, they're also the most palpable in real life. If you overhear someone in line at the store saying they agree with gay marriage, that is most likely to be an airhead Millennial, but could very well be an X-er or even a Boomer. Ditto if they make an offhand joke about their parents' divorce, their student loan burden, and the like.

But if you overhear your fellow supermarket shopper say, "Listen, they're playing "Footloose" — isn't this song suh-WEET?!" then they're definitely Gen X. If it's, "Oh my gosh, I'm like obsessed with "Blank Space" — not gonna lie, that song's actually kind of amazing!" then they're literally definitely Millennials.

The sociologist's large-scale impersonal forces make individuals similar, but not necessarily in a social dynamic way that cements group membership. Children of prosperity turn out this way, children of austerity turn out that way, whether they ever interacted with other members of their group to create a shared culture.

It's the seemingly trivial stuff that serves as shibboleths, food taboos, folk tales ("urban legends"), and totem animals to distinguish Us from Them. Those things became popular only because individuals accepted them, rather than their alternatives, as signals of what group they belonged to. They get closer to what creates a community, beyond what creates a group-of-similar-individuals.

May 10, 2015

Broken homes epidemic reversed since the '90s babies?

Now that the data from the 2014 General Social Survey are online, we can look into some other trends that were too hazy to study before. The sample size of Millennials used to be too small, but now with another wave of the survey including them, they can be better investigated.

One thing I've been wondering about for a while is whether the children of divorce are sticking together when they themselves become parents, or whether they're just going to perpetuate a climate of broken homes.

In an earlier post, I looked at the trend by birth cohort. Growing up without both parents became more common starting with people born in the late 1950s, and only grew more and more prevalent with each cohort afterward, right up through those born in the late 1980s. The sample size was too small for '90s births to see whether it continued or reversed.

Now that there is a large enough sample size, though, it looks like the trend did reverse. I grouped respondents into five-year age cohorts, and no matter how you move that five-year window around, those born in the early-mid-'90s were more likely to be living with both parents at age 16. The difference is only a few percentage points, but that's still remarkable considering that every previous cohort since the late Boomers showed a steady and notable decline in growing up in an intact family.
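
The moving five-year window can be sketched as below. The cohort and intact-family values here are illustrative stand-ins, not the actual GSS figures, and the column names are hypothetical mirrors of the GSS variables (cohort, family16 recoded to a 0/1 "intact" flag):

```python
import pandas as pd

# Toy data: birth cohort and whether the respondent lived with both
# parents at age 16 (family16 recoded to 1 = intact, 0 = broken).
# These rates are made up for illustration.
df = pd.DataFrame({
    "cohort": [1958, 1964, 1971, 1977, 1983, 1989, 1992, 1995],
    "intact": [1,    1,    0,    1,    0,    0,    1,    1],
})

# Slide a five-year window across birth years and compute the
# percent growing up in an intact family within each window.
for start in range(1955, 1995, 5):
    window = df[(df["cohort"] >= start) & (df["cohort"] < start + 5)]
    if len(window):
        print(f"{start}-{start + 4}: {window['intact'].mean():.0%} intact")
```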

These results showed through for whites, blacks, and "other" races. Race could not have anything to do with the overall results anyway, since more recent cohorts are blacker and Mexican-er, and those groups have higher rates of broken homes than whites do. Simple demographic projections would have predicted a steady decline, but it looks like the return toward the bi-parental household succeeded in spite of racial demographic trends against it. It is therefore like the falling rates of violent and property crimes over the past 20-25 years, despite a blacker and browner population.

The reversal also held for upper, middle, and lower classes (I used number of years of education as a proxy), although it was stronger and came somewhat earlier for those higher up on the social pyramid. This is unlike the pattern that Charles Murray details in Coming Apart, where for example the lower class continues to get divorced at higher rates over time, while the upper class has returned to marital normalcy after the initial perturbation back in the '70s.

This makes me doubt the earlier explanation I gave that linked broken homes to the status-striving and inequality cycle -- things that have steadily gotten worse since sometime in the '70s or early '80s.

If the first cohort to be struck by the broken homes epidemic was born in the late '50s, and the direction began to reverse with the cohort born in the early '90s, that suggests a link to the cocooning-and-crime cycle. Being a child of divorce became more common among those who were small children during the rising-crime period of circa 1960 to 1990. If you were a little kid during a falling-crime period -- most Silents and Boomers, who grew up in the Midcentury, and later the '90s babies -- you were more and more likely to grow up in an intact family.

One effect of a rising-crime climate is giving less weight to the future and living more in the now -- not surprising when rising crime rates make a safe and secure future look less and less likely. This is a "facultative" response, one that responds to current conditions. If those conditions are present long enough over time, people will evolve an "obligate" response: their discounting of the future becomes more wired-in where the environment is violently unstable.

So perhaps the parents splitting up and telling their kids good luck was part of the greater pattern of impulsiveness or discounting of the future. Abortion rates took off until circa 1990 as well -- hard to think of a more callous attitude toward your child's future than that.

Once the crime rate started falling in the '90s, parents projected a safer and more secure future, and began weighing the future more heavily. As one sign, they became less likely to opt for abortion or divorce as a solution to the "don't feel like raising kids" problem.

GSS variables: family16, cohort, educ, race

May 8, 2015

Prepping for cataclysms, neglecting ordinary emergencies

Our increasingly paranoid and status-striving society has passed a point of no return, where there are more young adults who carry around paracord bracelets and pocket knives to "prep" for disaster, than those who know how to change a tire and carry the basic tools to do so in their trunk.

You'd think that if they're paranoid enough to be prepping for a shit-hits-the-fan scenario, they would also be planning for the smaller and more predictable disasters that a well-adjusted person would worry about -- flat tire, burnt out headlight / brake light, cuts bleeding enough to need a bandage, and so on.

Yet they don't behave like people who are going above and beyond the scenarios that normal people already have covered; they are prepping for the apocalyptic instead of the ordinary.

You might try to rationalize their neglect of mundane duties by saying that the apocalypse trumps everything -- however small the probability, the magnitude of destruction will be more or less infinite, so it deserves sole focus as "what to be ready for".

But Haidt's research on moral reasoning shows that it is typically a post-hoc rationalization of a gut-level intuition. Thus, the preppers have a gut-level aversion to stewardship of everyday affairs, and develop a conceptual excuse afterward -- they're not negligent, they're actually prepping, for, uh, lemme think... for a far more disastrous scenario than those that trouble normal folks. Yeah, that's it.

So, scrupulously carrying a pocket knife, and updating their paracord bracelet to the newest model, serves to pardon them from, say, cleaning out the lint and debris that's clogging their fan or computer, learning CPR, and getting practice as a handyman.

As an example of how frivolous their priorities are, consider what they include in their EDC -- everyday carry, or things that are on them no matter what. Googling "edc" along with "first-aid" gives half a million results; so does averaging the hit counts for "edc" with "band-aid" and "edc" with "band-aids". "Edc" with "multi-tool" gets fewer than half a million. Yet "edc" with "wallet" or "knife" gets over a million, and with "light" or "watch" over 50 million.

It's hard to think of something more useless in a doomsday world with no tight schedules to keep, than a wristwatch. If you need to tell time, just look up at the fucking sky like people have for millions of years. Are you really incapable of telling whether it's morning, afternoon, evening, or night by opening your eyes outdoors? And if you don't have a good intuition for whether something happened five minutes ago or five hours ago, you are braindead and won't need to worry about surviving the apocalypse anyway.

Yeah, but how are we supposed to start a fashion contest over looking up at the sky? Wristwatches FTW.

Focusing on the cataclysmic also serves their impulse toward status-striving: prepping for the apocalypse is Real Serious Shit, requiring Advanced Tactical Gear, whereas any fuddy duddy can learn how to test their gas pipes for a leak by spraying soapy water, or carry a first-aid kit in their car in case someone gets cut. Pursuing the fantastic and spectacular is more attention-getting than tending to duties that are realistic and mundane.

Of course that also means that these preppers are just LARP-ers, having little to no training, practice, or experience. But hey, they watched a YouTube series by some guru who served in the Gulf War, as though that were tantamount to downloading his brain a la The Matrix. Indeed, for all their rugged outdoors posturing, Neo is closer to their true hero -- someone who can become the ultimate urban survivalist badass by passively and instantly receiving the "content" of some cyber-guru, without having to put in any practice, go through any boot camp, or pass through any other rite of passage. Consumerism doesn't count ("purchasing my first multi-tool").

Perhaps that's another reason why they're so obsessed with watches -- they wouldn't be spending time doing anything real, and would have to engage in some pointless repetitive activity to assuage their anxiety and make them feel like they were getting shit done. Let's just keep glancing down at our watches, and hopefully that will allow us to just wait out the end of the world as we know it. Their "gear" is simply a collection of talismans and fetishes being stroked by the impotent in an attempt to feel capable and powerful.

Normal people recognize how useless these posers would be in a real disaster, but the preppers reckon rank by the upvotes they receive from one another.

Sadly this phenomenon generalizes to all sub-cultures in a striving climate -- ordinary duties are neglected in the pursuit of vanity points in some circle-jerking status contest.

Related post: doomsday prepping in the civic Midcentury vs. anarchic Millennial eras

May 6, 2015

Where did all the annoying bumper stickers go?

From the 1990s through the mid-2000s, it wasn't unusual to see a car whose bumper, or even entire back side, was encrusted with stickers of confrontational whining, smug slogans, adolescent humor ("Your mom's hot"), and/or a list of your favorite "edgy" bands. Cars with only a handful of stickers were more common still.

You don't see that anymore, and on the rare occasion that you do, it's clearly a fossil from that earlier era -- "Impeach Bush," "Fukengrüven," "COEXIST," "Mean People Suck," "Phish," etc. What happened?

Flashing to the world all of your annoying opinions and obsessions ("interests"), from behind a wall of anonymity, in a drive-by fashion, trying though often failing to smugly troll strangers -- sounds an awful lot like what the internet was made for. Or rather the web 2.0, when comments sections and social media were born. Only now the technology allowed you to annoy people all over the world -- get more bang for your broadcasting buck.

Although seemingly trivial, the case of bumper stickers illustrates an important point I keep making about technology and social life: technology doesn't make us use it any particular way, and a technology may only become widely adopted because users were already heading in a new social direction.

This is complementary to the standard view that technology colonizes our society, and we are unwillingly affected by it for better or worse. I don't deny that that happens, but it relies on the assumption that the users didn't really want it -- why not ask them first and see how enthusiastic they were to adopt it?

Thus, anonymous comments and social media did not tempt people into blabbing their confrontational, smug, gotcha! slogans to the rest of the world. That attitude and behavior were already highly visible back in the '90s, and even the 2000s -- right up until the web 2.0 opened its doors. Twitter did not set off the battle between SJWs and their counter-trolls; that existing culture war simply shifted arenas, from car bumpers to social media sites.

It also goes to show how little the difference between today and the '80s has to do with technological changes. People didn't have anonymous comments and Twitter back then, but they had bumper stickers and decals -- why didn't they plaster dozens of stickers on their bumper, using them for hostile crusading like they would come to do during the '90s? Quite simply because they didn't have that attitude.

The primary change between the get-along '80s and today is one of attitude, social stance, worldview, and so on, not technology. The '90s is the crucial decade to resolve the matter. Like the '80s, it lacked an internet with anonymous comments and social media sites. Unlike the '80s, people's attitudes had shifted toward cocooning and anxiety or hostility in social situations.

The social mood trumped technological constraints, with people of the '90s making do with bumper stickers for socially anxious confrontations: to wage SJW crusades (or to troll the SJWs in return), to blab their obsessions to the world, and to try out one-liners on an audience that can't respond by rejecting them.

May 3, 2015

Gay marriage will move on to further gay crusades, not letting other weirdo groups get married

The most common response from conservatives who raise the troubling matter of where gay marriage will lead is that it will lead to other wacko kinds of marriage being sanctioned -- polygamy, bestiality, incest, pedophilia, whatever.

But here's the real deal (from a comment I left here):

The next radical social-political experiment won’t have to do with marriage — that’s falling for the con that the gay marriage issue is about marriage first, and secondarily about whom it’s letting into the institution.

Clueless conservatives, or rather befuddled reactionaries, respond with, “Well, if you’re going to let ridiculous group X get married, why not ridiculous group Y? And ridiculous group Z? Where will the desecration of marriage end?”

But the culture war is not about marriage — it’s about giving fags all sorts of privileges that they don’t deserve, ignoring and indeed obscuring and denying the fact that they are fundamentally abnormal rather than normal, which justifies discrimination against them (i.e. treating them differently under the law).

That’s what makes the idea of them being married and committed such a joke, or the idea that two giddy Peter Pan homos are just as maternal and nurturing toward children as a mature woman. Or that what makes them *them* is no less healthy and wholesome than what makes heteros *hetero* — just don’t ask about how many diseases are devouring their beleaguered half-corpses.

Therefore the next big crusade in the culture war will be about “what other ways can we propagandize homosexuality as ‘just like us’ and give them goodies accordingly?” Not “what other risible group should we allow to get married?”

Look at blacks in the Civil Rights era — it didn’t move to “what other group should we allow to enter our wholesome white schools?” They didn't send in the National Guard to forcibly integrate the Mexicans, Orientals, American Indians, etc. Rather, it moved to “what other privileges, set-asides, and quotas can we shower on the blacks?”

The culture war is based around sacralizing a victim group (blacks, fags), not desecrating a particular institution (a side effect, not a sustained target).

May 1, 2015

Cosplay remakes and the uncanny valley (video for "Fancy" by Iggy Azalea)

The cosplay fanfic approach of the new Star Wars movie will strike normal people as weird and off-putting, though in a way that's hard to explain. A gut revulsion suggests a role for disgust, rather than a conscious list of reasons why it looks bad.

I still couldn't put my finger on what is (mildly) disgusting about it, so I looked for another example of the cosplay fanfic approach to pop culture.

Here is the music video for "Fancy" by Iggy Azalea, the song of the summer for last year, with over half a billion views on YouTube. Its set design, locations, clothing, hair, and plot vignettes are ripped from the 1995 movie Clueless, probably the last coming-of-age teen movie with likeable characters. Yet everything about the words, intonation, facial expressions, body language, and general attitude of the girls in the music video is the polar opposite of the characters in the movie.

In Clueless, the protagonist Cher is a well-meaning ditz who occasionally bumbles in her nurturing attempts at playing matchmaker. (The movie is based on Emma by Jane Austen.) She tries to make over the new student Tai, a free-spirited, socially awkward naif who becomes more savvy and popular, acts too big for her breeches, but ultimately reconciles and acts humbly around her friends. They show a basic concern with doing right by others in order to fit in. They want to be liked and accepted into a group, not to be worshiped by fans and feared by haters, both groups being socially distant from the diva at the center of attention.

See the trailer here, although it focuses more on dishing out one-liners than establishing character traits.

Fast-forward to Iggy Azalea and Charli XCX aping Cher and Tai in the "Fancy" video. Both are hyper self-aware pose-strikers, unlike the ditzy and spacey characters from Clueless. Their attitudes are smug, bratty, and decadent rather than uncertain, seeking to please, and wholesome. They're self-aggrandizing and condescending rather than other-regarding. They aspire to being distant divas and icons, rather than friends accepted into a clique. And they give off an overly sexualized persona, whereas the appeal of the original characters was not simply to gawk at their ass and thighs.

The contrast for anyone who remembers the movie is so harsh (way harsh, Tai) that it creates an uncanny valley reaction, where something lies between two opposites and leaves the viewer disturbed. Most CGI human beings provoke such a response -- neither human enough, nor robotic enough, but more like a freak of nature.

It gets worse. Seeing actors play totally against what we associate with their clothing, environment, and overall zeitgeist leaves us asking, "What happened to the real people who wore those clothes? Went through those vignettes? Lived in that place?" It feels like the impostors are not just try-hard wannabes, but body-snatchers who have killed what is familiar and replaced it with something alien. It's like that scene in The Silence of the Lambs where the he-she serial killer is donning a wig and twirling around in his lady-flesh-suit.

Iggy Azalea has killed Cher from Clueless and is wearing her skin.

Earlier examples of LARP-ing in popular culture at least tried to remain as consonant as possible with the original -- Grease, Back to the Future, Forrest Gump. Now the point is simply to body-snatch the sympathetic original characters and assimilate them into the loathsome present, like some kind of pop-cultural Borg. It is appropriation not out of affection and nostalgia, but simply to claim more and more territory of the good old days for idiotic, imperial trends.

I know -- BFD if it's some throwaway music video. But remember that this is what's going to unfold during the entirety of the new Star Wars movie. And it will only grow from there: the decision of the Star Wars brand sets a binding precedent.

Earlier remakes and reboots tried to distinguish themselves from the original by using a different visual style, exploring other parts of the narrative and character development, and so on. Always boringly, but they were different. Now the rehash movies are going to move into cosplay mode -- "It looks just like the real thing!" (don't ask how it tastes, though). Expect pop culture to get even more off-putting in the near future.

April 29, 2015

Cocooning still continuing through new General Social Survey data

Now that the 2014 data for the General Social Survey have been released, we can see if recent social trends are continuing or reversing. I'll be focusing on those that relate to the cocooning-and-crime cycle, which plays out on the local and interpersonal level, where we typically only have impressions rather than hard data.

Here is an earlier post that lays out the dynamics of crime and cocooning behavior. Briefly, when people come out of their shells and let their guard down, it makes them more vulnerable to manipulation or predation by criminals and con artists. An outgoing social mood leads to rising crime rates. As crime rates get higher and higher, people begin to worry more about whom they can trust. Rising crime erodes trust.

Ultimately they figure it's not worth the risk to be socially open around strangers and begin to close themselves off. That leaves slim pickings for criminals, so that cocooning causes falling crime rates. With the environment becoming so safe, people reassess how necessary it is to cocoon themselves from seemingly non-existent danger. Ultimately, low crime rates lead people to re-emerge from their cocoons, which begins the cycle all over again.

Violent and property crime rates have been falling since a peak around 1992. They fell dramatically during the '90s, looked like they would bottom out during the 2000s, but have continued a steady descent over the past five or so years.

That should have been enabled by the continuing of the cocooning trend, and indeed the new GSS data show no reversal in any of the key signals of people closing themselves off to others.

The main psychological trait here is trust, and it continues to fall. The GSS asks if other people can generally be trusted, or if you can't be too careful. A high was reached in the late '80s, with around 40-45% of Americans trusting strangers. After a decline, it appeared to hold steady at 32% from 2006 onward. In the 2014 survey, though, it took an extra dip down to only 30%.

This withdrawal of trust cuts across every demographic group, so I'm not controlling for any of them. Race, sex, age, education, class, marital status, region, size of local population, political orientation -- everyone is noticeably less trusting of strangers than they were 25 years ago.

One of the most dramatic drops I noticed was among young people. The only age group that is about as trusting as it used to be is 60-somethings. Every other group shows the decline, but the drop is steeper the younger the group. Among people aged 18-24, trusting others dropped from 35% to 14% from the early '90s to 2014. But even among 40-somethings, trust levels fell from 48% to 28% during the same period (about the same decline in absolute terms, but a smaller one relative to how high it began).

I interpret that as younger people being more susceptible to cocooning because not trusting strangers and wanting to just play by yourself is a natural part of immaturity. Young people being as socially open as they were back in the '80s was more of a radical departure from what you'd expect based on their age, so it snapped back harder once cocooning set in (regression toward the mean).

You may be thinking, "Well, there's still at least 30% of Americans who trust others -- they're a minority, but it isn't like they're non-existent. And the maximum was only 40-45% before. How big of a change can that be?"

The difference is that trust is not part of isolated, individual behavior -- it relates to interactions among pairs of individuals, or larger groups still. Pick two people at random, throw them together, and see if both of them are trusting. If so, they can sustain a getting-to-know-you interaction. If only one is trusting, the interaction will sputter out. If neither one is trusting, it won't even be initiated.

The chance that two randomly chosen people are both trusting is the square of their share of the population. (Pick one, pick another, multiply the probabilities.) Squaring a fraction makes it much smaller, so looking at just the trust level among individuals underestimates how fragmented society has become.

In a world where 45% are trusting, the chance that any two strangers who run into each other will both be trusting is 20%. In a world where only 30% are trusting, those two strangers have only a 9% chance of both being trusting.

Thus, even though a trusting disposition has "only" fallen by one-third, from 45% to 30%, trust-based interactions between a pair of strangers have fallen by half, from 20% to 9%.

It's even worse for those youngsters. When their trust levels fall from 35% to 14%, successful interactions between a pair of strangers who run into each other fall from 12% to just 2%. Of course, folks can make small talk without having to trust each other, but I'm talking about the ability of people who haven't met before to open up and connect with each other right off the bat. It may have been difficult before, but it's nearly impossible now. They might as well be toddlers who think everyone other than mommy and daddy is dangerous, or who are at least not worth trusting to share their toys with.
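The squaring argument above can be sketched in a few lines of Python (a minimal illustration, assuming trust is independent between the two strangers; the percentages are the GSS figures quoted in this post):

```python
def pair_trust(share: float) -> float:
    """Probability that two randomly chosen strangers are both trusting.

    Under independence, the pair-level rate is the individual share
    squared -- which is why a modest drop in individual trust produces
    a much larger drop in trust-based interactions.
    """
    return share ** 2

# GSS trust shares quoted above: peak vs. 2014, for all adults
# and for the 18-24 group.
for label, share in [("all adults, late-'80s peak", 0.45),
                     ("all adults, 2014", 0.30),
                     ("18-24, early '90s", 0.35),
                     ("18-24, 2014", 0.14)]:
    print(f"{label}: {share:.0%} trusting -> {pair_trust(share):.1%} of pairs")
```

Rounding to whole percentages reproduces the figures in the text: 45% of individuals gives 20% of pairs, 30% gives 9%, 35% gives 12%, and 14% gives just 2%.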

If you've wondered why you never see young people letting it all hang out and feeding off each other's energy, that's why. They simply don't trust anyone.

Going out to a bar on a somewhat frequent basis is also less common than it was back in the '80s. It's most pronounced among younger age groups, and only those who are 55 and older are more likely to go out to a bar or nightclub than their counterparts used to be. That's the Boomers refusing to age gracefully, and not really a sign of an outgoing disposition. They're there to engage in a contest of "who's still got it?" rather than to open up and have fun.

Spending an evening with a neighbor is still declining since a peak in the late '80s.

Both men and women continue to have less frequent sex. Doing it only once a month, or less frequently, afflicted nearly 40% of women in the early '90s, but nearly 50% in 2014. For men, infrequent sex rose from around 30% to around 40%.

Those questions establish that people are still cocooning. What about gradually realizing that the world isn't so dangerous anymore? Fear of walking around your neighborhood at night tracks the crime rate, lagging behind it by a few years (just to be safe). In 1994, 45% of Americans were afraid, and in 2014 it continued to drop, down to 31%.

Predicting how long this period of cocooning and falling crime will last is not an exact science. The last time, it was about 25 years, from a peak of crime in 1933 to a bottom in 1958. Keeping tabs on the social mood is more important: once we see a steady rise in trust and open behavior, we can expect crime rates to start rising shortly after. So far, though, that doesn't appear to be around the corner.

GSS variables: year, trust, socbar, socommun, sexfreq, fear

Neighborhood-level diversity prevents rioting among blacks and SWPL decadence among whites

Robert Putnam's research on diversity and civic participation shows that the more diverse an area is, the less likely the individuals are to coordinate their shared interests at a larger scale. That not only affects relations among individuals from different ethnic groups -- blacks and whites won't cooperate -- but also relations within the same group -- whites don't cooperate with each other, and blacks don't cooperate with each other.

Could there be an upside to the failure of individuals to coordinate their collective behavior? Yes -- if their purpose were anti-social or decadent. The decay on display in Baltimore provides a great case study.

It's no surprise that the rioting and looting are taking place in neighborhoods that are nearly 100% black. Blacks are more impulsive and inclined toward both violent crime and property crime. What is unusual, however, is that the neighborhoods that are closer to 50-50 black and white are not merely afflicted by rioting to a lesser degree than the 100% black areas, but are hardly affected at all. (See the maps at the end of this post.)

What gives?

Rioting and looting are collective behaviors, however fleeting and decentralized. They do not require sustained interest and permanent institutions to carry out the collective will, but they do rely on a minimal level of in-group cohesion and trust in order to keep the snowball growing rather than flaking into pieces or turning the members against one another.

In fact, with a little more regular participation and a bit more of an "honor among thieves" relationship, coordinated crime by an organized ethnic group could sustain a gang or mafia, again provided the area belongs entirely to that ethnic group.

The mafia operated in neighborhoods that were predominantly Italian, not in those that were half-Italian and half-non-Italian. Black gangs controlled South Central L.A. back when it was all black; Mexican gangs control it now that it's all Mexican. If the neighborhood was only half-Italian or half-black, the mafia and gang problem was not simply half as bad as in the fully Italian or black areas, but could not get going in the first place. (Of course, they would have still been subject to individual-level crime, just not collectively organized crime.)

White enclaves in large cities tend not to be stricken by rioting and looting, because anti-social whites express their deviance in non-violent ways. When they coordinate their deviance at the collective level, they take over the local education system and ban peanuts from school grounds, they carve out bike lanes for non-existent bike riders while clogging the narrowed streets for actually-existing drivers, and they take over any business area that once supported a variety of normal utilitarian shops and turn them all into arenas for decadent status contests (quirky bars, quirky coffee shops, quirky doggie yoga day-spas).

Yet as in the case of black rioting, these collective insanities only infect neighborhoods that are nearly 100% white. If it's only 50-50, the hipsters and yuppies don't feel emboldened enough to organize their decadent impulses. They don't have the sense that their ethnic group totally owns the place and can do whatever they want with it, for better or worse.

Overall, diversity is corrosive to society at any scale. But there is a silver lining: it also prevents anti-social collective behavior from catching fire.

Maps of diversity and rioting in Baltimore

Here is a map of racial diversity around Baltimore. Blue dots are blacks, red dots are whites. (From a series of similar maps here.)


The core of the city is where the dots are the densest, more or less in the center of the image.

There are two stark areas that are mostly black -- West Baltimore and East Baltimore, close to the core. Farther away from the core (for example, toward the northeast), the blue dots overlap red dots, showing a more mixed area than a pure-black ghetto.

There are three main areas that are mostly white -- North Baltimore (the large wedge pointing south that separates the black areas to the west and east), and two smaller but denser enclaves that lie just south (on a tiny peninsula) and just southeast of the core (in a red ring).

Yuppie and hipster decadence is concentrated in the all-white areas, such as Hampden in the North Baltimore wedge, Downtown near the center, and Fell's Point in the red ring lying southeast of the core. To the northeast, there are still plenty of whites, but they live in more diverse neighborhoods. SWPL decadence in a "nice boring" place like Belair is not at half the level of Fell's Point, but barely there at all.

As for black decadence, here is a map of the major riots in 1968, overlaid with the riots in 2015 (which are much smaller -- though wait until 2020). They come from this article at Vocativ.


The scale on this map is more zoomed-in than the map of diversity. The major and minor riots have afflicted the all-black areas of West and East Baltimore, close to the core. There are plenty of blacks living out to the west and southwest, as well as out toward the northeast, but they find themselves in more diverse neighborhoods.

In these diverse neighborhoods, would-be rioters apparently don't feel they can trust their fellow blacks enough to carry out an afternoon and evening of looting, trashing windows, and setting cars on fire. If only they owned the whole neighborhood, "shit would git real". But with nobody trusting anybody else in a mixed area, they're going to just watch the riots on Wurl Stah and vent their aggression on Twitter.

When the neighborhood might otherwise be burning down, here's one cheer for diversity-induced atomization.

April 27, 2015

Movie trailers as serial drama (STAAAAARRRRR WAAAAAARRRSSSS)

On the last episode of "Agnostic reacts to Star Wars trailers," we learned what the new trilogy will amount to -- a cosplay fanfic sequel for Millennials.

And now that they've released the next installment of "Trailers for That New Star Wars Movie," that assessment is certain. You can almost see the Millennial in stormtrooper costume walking up to Harrison Ford and nervously asking for his autograph. I wonder whether that'll be relegated to a making-of sequence during the credits, or be included in the main narrative itself.

("Gee Mr. Solo, you're some legend around these parts... It sure would do me the honors if you'd, uh, do me the honor of signing my toy lightsaber!")

I still don't know what the hell the movie is going to be about, but contemporary audiences don't want any SPOILERS whatsoever.

Trailers are no longer meant to reel you in on the first viewing. They have become a serial drama form unto themselves. The first reveals a tiny bit, and leaves the audience on a cliffhanger. The next one recaps the last one (barren desert landscape, speeder bike battle, lightsabers), but reveals a little more (Vader helmet, Han and Chewie, TIE fighter pilots).

Who knows how many more episodes there will be before the series finale -- the trailer that tells you what the hell the movie is going to be about.

Not following the hype cycle of modern movies, I was unaware of the trend of trailers as soap operas (gossip about them online when the new episode comes out!). I'm even more out of touch with video games, but their hype cycle is so huge that even someone who doesn't play them anymore may know about it. First there's a hint from the developers, then a spectacle teaser during E3, then a beta version, then a playable demo, and finally two years later, the actual game.

I remember when the movie trailer was a terse stand-alone format, and when new video games were announced once they were released, not years ahead of time.

But, that was back when people still had a life. Folks in outgoing times have too much of a dynamic social life to tolerate a serial format stringing them along and keeping them waiting. Soap operas were huge in the Midcentury, but were marginal by the '80s. Short film serials were popular at theaters in the Midcentury, but were also absent during the '80s, whose climate was similar to the Roaring Twenties. Only since the cocooning climate returned during the '90s did serial dramas return to mass entertainment, this time on TV.

They could have made a string of teaser trailers for movies back in the '80s, to be shown on TV commercials or in theaters, but they didn't. Those are a new development -- since when exactly, I don't know, although I have a hunch the Lord of the Rings movies had serial trailers.

Cocooners are bored out of their minds, so they crave a steady and regular fix of anything meant to wake them up. Previously, on "dissecting popular culture," we looked at entertainment as a mood stabilizer vs. experimentation, making the link to stabilizing vs. destabilizing types of drugs.

The stabilizing kind were popular in the Midcentury and have become popular again since the dawn of Prozac circa 1990. Ward Cleaver had Kellogg's Pep and Geritol, while his grandson has Monster energy drinks and Viagra. The destabilizing kinds like LSD are meant to be taken in stand-alone sessions, as though each trip were to somewhere different.

Movie trailers have clearly joined the mood stabilizer family of entertainment. Life is boring, but don't worry, another teaser trailer for Whatever Part Four comes out next week. And don't worry, it won't contain any spoilers -- which would ruin the fix you ought to get from the next trailer after that one.

Spoilers may not answer every question about who, what, when, where, why, and how, but they do close off certain paths through which the trailer-makers could have strung you along. And now that the function of trailers is to provide a regular dose of stimulation to bored nerds, they no longer tell you what the hell the movie is going to be about.