October 20, 2014

The geography of striver boomtowns, AKA future ghost towns

An NYT article reviews a report on how recent college grads are multiplying like cancer cells, I mean fueling the engine of growth in cities across America, particularly in the most central areas of the city. They are shying away from Establishment cities and gentrifying second and third-tier cities where the rents are cheaper -- until the word gets out and the next twenty waves of transplants bid up the rents to Establishment levels.

The article and report refer to 25-34 year-olds with a B.A. as talented, creative, etc., without any proof required other than the fact that their brains are young, that their credential being bought and paid for (rather than earned) allowed them to goof off for four years, and that they spent that time cultivating unique quirky tastes shared by 90% of their age-mates (craft breweries bla bla bla).

These cases illustrate some of the themes I've started to develop here about the geographic and generational differences in status-striving. The Gen X and Millennial subjects they're tracking are moving away from Establishment cities because the Silent and Boomer incumbents refuse to vacate the prime real estate and above-poverty-level jobs.

More broadly (see this post), materialist and career competition have become too saturated by Silents and Boomers, leaving X-ers and Millennials to pursue lifestyle competition instead. That will not only affect which cities they flock to, but the character of their lives once they arrive -- they will turn the place into one great big playground for lifestyle status contests, lots of drinking, and the occasional random drunken hook-up. Or, College: The Sequel (total run-time to be determined).

And they aren't picking just any old sub-Establishment cities but ones that are already growing at a fair clip, and are already, and have historically been, quite large in population. When they look for a city in the Northeast to take the place of New York or Boston, do they settle on Albany? No, it has to be the second-largest city in New York -- Buffalo. Feeling squeezed out of Chicago and Dallas in the Midwest and Philadelphia and DC in the east? Well, you could shoot for Wheeling, WV or Grand Rapids, MI -- but why not aim as high as you can (within your budget), and resurrect the Gilded Age empires of Pittsburgh and Cleveland?

Fortunately, nobody involved at the grassroots or the academic and journalistic levels has any knowledge of history, so what's to temper the enthusiasm for pushing Buffalo and Cleveland as up-and-coming boomtowns (at least among the Creative Class)? It's not as though they've already been through the over-hype and hollowing-out cycle before. But if you want more sustainable long-term growth, you'd have to settle for cities that are smaller, historically less important, and culturally less thrill-seeking. Off-the-radar cities.

On the whole, though, the striver boomtowns are not in the Rust Belt but in the Sun Belt, i.e. where mainstream America has already been heading for decades. There are now enough earlier transplants who can actually run a business and create jobs, that the Creative Class can play catch-up and jump on the Sun Belt bandwagon without starving and "living outside". Will the Sun Belt soon turn into the next Rust Belt? Impossible -- growth only increases, worst-case at a slowing rate, but there will never be a mass desertion of entire swaths of the country that had been over-hyped, over-built, and over-indulged.*

It comes as no surprise, then, that the cities with the greatest percentage growth in 25-34 college grads fail the test of egalitarianism outlined in this post -- not having a pro sports team. Only Austin passes (I didn't claim the test was perfect). That proves that they are not simply seeking refuge from the rising competitiveness and widening inequality that blight the Establishment cities. Otherwise they'd be heading to some place where it would never occur to the locals to allow their public coffers to be parasitized by a big-league team that could PUT THEM ON THE MAP.

Among the ranks of Millennial boomtowners, is there any awareness of how illusory all this rapid growth is? Let's ask one of them living in Denver:

“With lots of cultural things to do and getting away to the mountains, you can have the work-play balance more than any place I’ve ever lived,” said Colleen Douglass, 27, a video producer at Craftsy, a start-up with online classes for crafts. “There’s this really thriving start-up scene here, and the sense we can be in a place we love and work at a cool new company but not live in Silicon Valley.”

How can start-ups be thriving? You don't thrive until you're mature. All those dot-com start-ups sure seemed to be thriving in the late '90s -- what happened to them after that, I'll have to order a history book on inter-library loan, since I'm too retarded to remember.

Online classes for crafts, or where higher ed meets e-tailing. Two great bubbles that burst great together!

BTW, her LinkedIn profile shows that she went to the University of Dayton, which I don't recall being very close to Denver. All the talk about youngsters choosing the cities and bustling city cores over the dull suburbs papers over the reality that these kids aren't shopping around at the urban vs. suburban level, but at the entire metro area level -- which city will maximize my number of likes and followers? She didn't choose downtown Dayton over an attractive suburb of Dayton like Beavercreek -- she wanted to ditch dear, dirty Dayton altogether.

Group identities that are constructed consciously by first-generation adherents who merely affiliate with a city will be weak, shallow, and fleeting compared to those that are inherited unwillingly by multi-generational descendants who are rooted there. The western half of the country has long been more plagued by vice and social decay than the eastern half, and its history of rootlessness provides the central explanation.

Another long-established fact about urban growth and migration is that cities do not grow except by wave after wave of even greater fools pouring into them. City folk are so caught up in their status contests (whether based on career or lifestyle) that they forget to get married and have kids. By the time their career is established, or their reputation on Instagram suitably impressive, it's too late to start. Cities have been fertility sink-holes ever since they began thousands of years ago, and migration from the countryside was all that fed their growth.

What will happen to these 30 year-olds when the contests over who's sampled the most esoteric food truck fare begin to get old? They won't have any family or community life to fall back on; everything up till then has been based on lifestyle status competition. They will face the choice between staying stuck on the status treadmill forever and dropping out in isolation, where they'll indulge their vices until their bodies and brains loosen into mush. Sadly, that will begin by the time they're 40, when the final relief of death is still very far away.

Gosh, you guys, what the heck. I hate to be such a downer, but all this mindless enthusiasm for urban cancer is not only getting tiresome, but by now disturbing.

* During a brief refractory period, the cheerleading reporter lets some sobering facts slip by:

Atlanta, one of the biggest net gainers of young graduates in the 1990s, has taken a sharp turn. Its young, educated population has increased just 2.8 percent since 2000, significantly less than its overall population. It is suffering the consequences of overenthusiasm for new houses and new jobs before the crash, economists say.

Good thing that Ben Bernanke ordered an anti-hype fence to be built around Atlanta, lest the overenthusiasm ruin other Sun Belt boomtowns.

October 19, 2014

Wide-open eyeglasses for a non-ironic look

Earlier I showed how different the impression is when someone wears glasses that are narrow as opposed to wide. Narrow eyes, whether now or during the cocooning Midcentury, make people look aloof and self-conscious. Wide eyes like you saw back in the '70s and '80s look inviting and other-directed.

Of course today's narrow glasses are more off-putting than the Midcentury originals because there's now a level of irony on top of it all. Get it -- retro Fifties, geek chic! Yep, we get it.

It makes you wonder whether people could ironically wear wide-eye glasses (other than sunglasses). I've been wearing a pair from several decades ago since the summer, and haven't gotten any winking approval looks like "Oh I see what you did there, Seventies glasses FTW!" They're pleasantly inconspicuous.

I was searching Google Images to try to identify the drinking glasses that my parents used to own around the time I was born, with only pictures to go on. "Vintage glasses orange brown" turned up this result:


It's meant to be part of an ironic "hot for teacher" costume for Halloween, but it doesn't succeed in the ironic department. Somehow, wearing wide-eye glasses makes someone look inviting, kind, and sincere, even when they're aiming for ironic.

Contrast the effect with those "sexy nerd" glasses with narrow eyes and thick rims, where the girl just looks sassy and self-absorbed. Wearing glasses like the ones above makes her look refreshingly tuned in to other people instead of herself.

October 18, 2014

Extended family structure as an influence on generational membership

Why is it that even among people born in the same year, some of them identify more strongly with an older cohort, some with their own cohort, and some with a younger cohort? If generational membership were only a matter of when you were born, and what the environment was like along each step of your development, we shouldn't see this kind of variation among folks who were born in the same year.

Going solely off of a hunch from personal experience, it could be due to differences in the generational make-up of a person's extended family.

I was born in 1980, part of a late '70s / early '80s cohort that either gets lumped in as the tail-end of Gen X or is given its own tiny designation, Gen Y, between X-ers and Millennials. I've always felt and acted closer to core X-ers than to core Millennials (who to me seem like they come from another planet), although a good fraction of people in my cohort would tilt more toward the Millennial side. We all recognize that we're neither core X-ers nor core Millennials, yet when pushed off of the fence-sitting position, some of us fall closer to an earlier generation and some to a later generation.

Since we spend quite a bit of time socializing with family members, though, perhaps we should look into that source of influence as well. If they're not related to you, much older and much younger people typically are blind to you, and reject hanging out with you if you try to make yourself seen. But blood is thicker than water, and those much older or younger kids will interact with you and pass along their ways in a family setting.

I've only rarely interacted with the extended family on my dad's side, so I'll stick to the maternal side. Although my mother was born in the mid-'50s, she is unusually young among her three siblings, who were born in the early, mid, and late '40s — more typical of the parents of core X-ers. My cousins through them are also all older than me: of those I met regularly growing up, one is a late '60s birth, two are early '70s births, and one is a mid-'70s birth. Our grandparents' birth years are also more typical of core X-ers', with one born in the mid-1910s and the other in the early '20s.

I would have to ask around, but I suspect the people in my cohort who tilt more toward the Millennial side of the fence have cousins who are more centered around their own age, aunts and uncles centered around their parents' age ('50s births), and grandparents who are Silents (late '20s / early '30s births). That extended family profile is closer to a Millennial's than an X-er's.

Those are the blood relationships, but when you count the affines (those who marry in), you get the same result as long as there aren't wild age differences in dating and marriage. Growing up, I only got to know the girlfriends (and eventual wives) of two of my cousins, but they were both core X-ers (late '60s or early '70s births). And the uncle-by-marriage that I knew well growing up was a Silent.

In short, if you look at my family tree and cover up my birth year, and my parents', it would look like a typical Gen X tree. The lateral influences from my cousins (and once they were old enough, their girlfriends and wives), as well as vertical influences from aunts and uncles and grandparents, are more typical of someone born in the early '70s than the early '80s.

Granted, the less time you spend with your extended family growing up, the weaker this effect will be. And people in my cohort had parents who were part of the Me Generation who didn't mind moving away from their siblings and parents, and who expected us to do so as well once we set off for college and whatever career we wanted to pursue. Status was to be more important than extended family cohesion.

But some of us didn't grow up so isolated from our extended families. My mother's sister and her husband lived only a few blocks away from us when I was in elementary school, and that was a second home for me and my brothers. By that time, her children had moved out, but still visited frequently, and brought their girlfriends, so we weren't so distant from them either. And I spent long portions of off-time at my grandparents' home, during the summer and winter.

Nowadays, with extended family ties being moderately strong at best, generational membership is going to be primarily shaped by your own birth year, period. That determines who your peers will be in school, and there's your generation. But that still leaves secondary influence for the generational make-up of your extended family, and in cases where you belong to a cohort that is neither here nor there, this secondary influence could push you into one or the other clearly defined generation on either side of your own.

October 16, 2014

The generational divide among grunge musicians

Grunge music was a flash-in-the-pan phenomenon of the early 1990s, serving as a bridge between the longer and more stable periods of college rock throughout the '80s and alternative rock throughout the '90s. In fact, there was a generational bridging underneath the stylistic bridging.

I finally came upon a copy of the Temple of the Dog album with "Hunger Strike" on it. (Posted from my thrift store cardigan.) That song has always struck me as aging better, and agreeing with me better, since it first became a hit over 20 years ago. Not one of those perfect pop songs, but one worth buying the album for.

Like many other late Gen X adolescents, I was into grunge when it was the next big thing, but quickly moved on — or backward — to punk, ska, and college rock from the late '70s and '80s. (My friends and I hated the lame alternative, post-grunge, or whatever it's called music that defined the mid-'90s through the early 2000s, even when it was novel.) A good deal of what I used to like, I began not-liking, but there are some songs like "Hunger Strike" that still sound cool and uplifting.

As it turns out, the grunge groups that I find more agreeable were made up mostly or entirely of late Boomers, born in the first half of the '60s, while those I don't relate to as much anymore were made up mostly or entirely by early X-ers, born in the second half of the '60s. The late Boomers are the ones shown in Fast Times at Ridgemont High — abandoning themselves to whatever feels good — while the early X-ers are shown a little later in the John Hughes movies — consciously torn between wanting to be impulsive while seeking the comfort of stability.

The abandon of the late Boomers gives them a clear advantage when it comes to jamming within a group, improvising, and going wherever the moment is taking you without questioning it. This was most clearly on display when glam metal bands went mainstream in the '80s, ushering in the golden age of the virtuoso guitar solo and near-operatic vocal delivery. But it showed up also in the era's cornucopia of cheerful synth-pop riffs, as well as jangly, joyful college rock.

When the early X-ers took up songwriting, rock's frontmen were suddenly from a more self-conscious and ironic generation. Stylistically, it meant that the shaman-like performance of the spellbinding guitar solo was over, and that vocal delivery would be more aware of its own emotional state, or more affected — twee rather than carefree on the upbeat side, angsty rather than tortured on the downer side.

During this transition, along came grunge. Temple of the Dog was made up of members from Soundgarden and Pearl Jam, before either group exploded in popularity. Pursuing a hunch, I found out that singers Chris Cornell and Eddie Vedder are both late Boomers. Pearl Jam was roughly half Boomers and half X-ers, while Soundgarden was all Boomers aside from the bassist.

And sure enough, Soundgarden always felt like the evolution of '80s metal, which was created by their generation-mates, albeit at an earlier stage of their lives. Pearl Jam sounded more of-the-Nineties (more self-aware, less abandoned), though more rooted in the sincerity of college rock bands from the '80s than sharing the irony of '90s alternative rock.

Which groups had a solid Gen X basis? Nirvana had no Boomers — no surprise there. Neither did Alice in Chains. Stone Temple Pilots were all X-ers aside from their guitarist. This was the angsty side of grunge (self-consciously angry), with the funky riff of "Man in the Box" pointing the way toward the aggro, rap-influenced metal of the late '90s (Korn, Limp Bizkit, etc.).

Screaming Trees were equally Boomer and X-er, and "Nearly Lost You" sounds pretty easygoing by alternative standards.

And other Boomer-heavy groups? The girl groups, as it turns out. In L7 and Babes in Toyland, only the bassists were X-ers; the rest were Boomers. On their first, grungier album, Hole consisted of Boomers (I couldn't find the birth year for the drummer, though). Recall an earlier post which showed all-female bands peaking in popularity during the '80s — the girl grunge bands were a fading generational echo.

The more self-conscious mindset of women in Gen X made it difficult or impossible to get into a state of abandon needed for grunge music, which was only partly introspective — and partly keeping the free-wheeling spirit of the '80s alive. When I think of the prototypical wild child, she's a late Boomer like the girls in Fast Times, the women of carefree '80s porn, and the real-life basis for the protagonist of Story of My Life by Jay McInerney.

Generations keep their ways well beyond their formative years, almost like a language that they were surrounded by and continue to speak, regardless of what new languages may have shown up in the meantime. If cultural change were only a matter of a changing zeitgeist, then Pearl Jam and Nirvana should have sounded much more similar than they did. And if those differences were a matter of being at different life stages at the time, why were the older guys more free-wheeling and the younger guys more reserved? It came down to a changing of the generational guard.

Today's immigrants are revolutionizing the way we experience disease — again

With ebola in the news, it's worth placing it in the broader context of exotic epidemic diseases cursing status-striving societies, where laissez-faire norms open the floodgates to all manner of vile pollution from outside.

The last peak of status-striving, inequality, and immigration was circa 1920, right when the Spanish flu pandemic struck. During the Great Compression, with immigration nearly ground to a halt, epidemic diseases looked to become a thing of the past. That sunny man-on-the-moon optimism of the 1960s would be undone by the Me Generation of the '70s. Not coincidentally, old diseases began rearing their ugly heads once more.

See this earlier post that examined the rise and fall and rise of epidemic diseases and pests, in tandem with the trends of inequality and immigration. Vaccines, hygiene, public health initiatives, etc., seem to have little or nothing to do with the fall, which in several cases was well underway before the vaccine was even discovered, let alone administered across the population.

It could have boiled down to something simple like the body not being burdened by as much stress as in hyper-competitive times, and keeping the ecosystem of diseases manageable by not allowing in boatload after boatload of new strains. Since ancient times, global migration has spread contagious diseases, but the world became less intertwined during the Great Compression.

Ebola will not become the pandemic of our neo-Gilded Age because it doesn't spread so easily. But ebola is just the tip of the iceberg of what is pouring into our country, and Western countries generally. It is not an isolated curiosity, but part of a larger population of pathogens being trucked and flown into this country every day.

One of those bugs, as of now an unseen up-and-comer, will soon enjoy its glory days just as the Spanish flu did 100 years ago. You can't say that America doesn't encourage the underdogs to give it their all.

October 2, 2014

When a girl gets taken advantage of, liberals cry rape, pseudo-cons shrug shoulders

There's really nobody to cheer for in the ongoing battle about "rape culture." The hysterical liberals / feminists are cheapening the charge of rape when they apply it to situations where a girl, usually intoxicated, gets taken advantage of. That involves a betrayal of trust, not the threat or use of violence.

Liberals used to concern themselves with matters of unfair treatment, injustice, one party taking advantage of another, and so on. You'd think the common-enough case of a drunk girl getting taken advantage of would be right up their alley. Hard to think of a better textbook example of someone using a higher bargaining position (clear-minded vs. intoxicated) to take advantage of another person.

Yet liberals these days don't talk much about the little person being taken advantage of (umm, vote for the Tea Party much? Cuh-reeeepy...) Most of them became numb to populism a couple decades ago. Hence every objection must be about preventing harm and providing care, no matter how unfitting this approach may be in any given case.

On the other hand, much of the so-called conservative reaction is to say, "Meh, what was she expecting? Anyway, no crime, no punishment. Next topic." While at least seeing past the blubbering about violence and harm, this response still shows a callousness toward a growing problem of young women getting taken advantage of while intoxicated, and while away from anyone who would look out for their interests, i.e. their families.

Sure doesn't sound like a world any of us want to live in, but pseudo-cons are so concerned with kneejerk reactions against the side of political correctness that they won't admit how unwholesome the situation is.

More and more, in fact, the pseudo-con position on any aspect of our fucked up world can be simplified as: "Unwholesomeness -- if you can't get used to it, you're a pussy." Real I-like-Ike kind of values.

This is the gist of some comments, copypasted below, that I left at this post at Uncouth Reflections about how hysterical accusations of rape are getting, with ever less harm-based forms of not-so-consensual sex being claimed as rape.

That post was in turn motivated by this item at Time about Lena Dunham's new book, in which she relates a story about being "raped" in college. She was drunk and/or on Xanax, was insistent about going back with the guy who took advantage of her -- over an explicit attempt by a hapless white knight to steer her away from it -- and talked dirty to the guy during the act.

It never occurs to her that she was raped (she's thinking of actual rape, including force / violence). Her friend is the one who tells her she was raped, and convinces her. They are now thinking of metaphorical rape, and literal getting-taken-advantage-of. In the comments below I speculate about how our society has gotten here, making use of Haidt's framework of the moral foundations that liberals vs. conservatives draw on in their moral intuitions.

* * *

Incidents like Dunham’s are obviously not rape, they are getting taken advantage of. Why can’t today’s leftoids speak out against vulnerable people getting taken advantage of? Partly because the women make themselves vulnerable in the first place by blasting their brain with so many hard substances in rapid succession, in a public place where they know strangers only have one goal on their mind.

But there must be something more to it. We would object to a sleazy pawn shop owner offering a couple bucks for a pristine 1970s receiver if it came in from a clearly stoned-out customer who says he just wants some cash to feed the munchies, man, you gimme anything for this thing?

These cases involve not harm or force but violation of trust. She trusted the guys at the party not to be that type of sleazeball; the stoner trusted the shopkeeper to give him honest treatment. When they wake up the next morning worrying, "Oh God, what did I do?" they feel betrayed or lied to.

Why insist on framing it as harm when it is not? Liberals are becoming so infantilized that it’s the only moral foundation they can appeal to anymore. “Mommy, that person hurt me!” No, they took advantage of you — it’s still wrong, but different from harming you. Kids don’t get it because they’re naive and don’t know about the possibility of having their trust betrayed.

Grown-ups should, though, and the fact that liberals cannot even appeal to their second-favorite moral foundation — fair treatment — shows how stunted this society has become.

Betrayal also taps into the moral foundation of community or in-group cohesion, but liberals are numb to that. If you’re both members of the same community (and typically they are even closer, being at least acquaintances), how could you think of taking advantage of her? Shame on you. Get out, and don’t come back until you’ve made up for it.

But liberals value hedonism, laissez-faire, individual advancement, and other quasi-libertarian leanings, just as most Republicans do. Hence they cannot object on the basis of wanting to prevent people from taking advantage of others. Under hedonism, that is guaranteed. And if laissez-faire and non-judgementalism are sacrosanct, who’s really to say that we’re in a place to judge a mere sleazeball who takes advantage of others? I mean, it’s not like he’s using force or violence.

Therefore, yes, he must have used force or threatened violence — that’s the farthest that the liberal is willing to draw the boundary. If they have an instinctive revulsion, it must be framed as some kind of harm, because anything less severe than that but still repugnant (like taking advantage of a drunk chick at a party) lies on the “fair game” side of the moral boundary line. Cognitive dissonance kicks in, so they rationalize what happened as harm / rape.

I alluded to it by calling Republicans quasi-libertarians, but let me say that too many conservatives don’t know how to react to these scenarios either. They know that the liberals are hysterically over-exaggerating, and like to invent new classes of victimhood, so their instinct is to dismiss these cases altogether.

But a drunk girl getting taken advantage of at a party isn’t a brand-new victim class. I’m sure that was a concern in Biblical times when the wine began flowing. It’s not as though she were a black who was denied admission to law school due to low LSAT scores, or a mentally ill tranny who feels robbed because ObamaCare won’t fund his castration / mangina surgery.

Brushing the Dunham type cases aside, like “Bitch deserved what she got for getting drunk,” or “Bitches need to learn to fend for themselves when drunk at college parties,” is too far in the anti-PC direction, however understandable the revulsion toward PC is.

I don’t sense any of that here — that’s more of a shrill Men’s Rights thing — but it’s worth emphasizing that it isn’t only liberals who are dumbfounded when they try to articulate their reaction to “drunk chick gets taken advantage of.” They both rely too heavily on laissez-faire and hedonistic norms to be able to say there’s something wrong with someone betraying another’s trust.

September 30, 2014

The crappy digital look: Demystifying the lie about how sensors vs. film handle light sensitivity

In this installment of an ongoing series (search "digital film" for earlier entries), we'll explore another case of digital photography offering supposedly greater convenience at the cost of compromised image quality. The end result is pictures that are too harshly contrasting, more pixelated, and color-distorted where it should be white, black, or shades of gray.

This time we'll look into the properties of the light-sensitive medium that records the visual information being gathered into the camera by the lens. I have only read one off-hand comment about the true nature of the differences between digital and film at this stage of capture, and mountains of misinformation. The only good article I've read is this one from Digital Photo Pro, "The Truth About Digital ISO," although it is aimed at readers who are already fairly familiar with photographic technology.

Given how inherent the difference is between the two media, and how much it influences the final look, this topic is in sore need of demystifying. So hold on, this post will go into great detail, although all of it is easy to understand. By the end we will see that, contrary to the claims about digital's versatility in setting the light-sensitivity parameter, it can do no such thing, and that its attempts to mimic this simple film process amount to what used to be last-resort surgery at the post-processing stage. One of the sweetest and most alluring selling points of digital photography turns out to be a lie that has corrupted our visual culture, both high and low.

To capture an image that is neither too dark nor too bright, three inter-related elements play a role in both film and digital photography.

The aperture of the lens determines how much light is gathered in the first place: when it is wide open, more light passes through; when it is closed down like a squint, less light passes through. But light is not continually being allowed in.

The shutter speed regulates how long the light strikes the light-sensitive medium during capture: a faster shutter speed closes faster after opening, letting in less light than a shutter speed that is slower to close, which lets in more light.

Last but not least in importance, the light-sensitive medium may vary in sensitivity: higher sensitivity reacts faster and makes brighter images; lower sensitivity reacts slower and makes dimmer images, other things being equal. This variable is sometimes labeled "ISO," referring to the name of a set of standards governing its measurement. But think of it as the sensitivity of the light-reactive material that captures an image. This scale increases multiplicatively, so that going from 100 to 200 to 400 to 800 is 3 steps up from the original. Confusingly, the jargon for "steps" is "stops."
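Since the multiplicative scale trips people up, here is a minimal sketch of the arithmetic (the helper name and the sample values are mine, purely for illustration): each doubling of the ISO number is one stop.

    from math import log2

    def iso_change_in_stops(old_iso, new_iso):
        """Each doubling of ISO is one 'stop' up; each halving is one stop down."""
        return log2(new_iso / old_iso)

    print(iso_change_in_stops(100, 800))  # 3.0 -- three stops up from ISO 100
    print(iso_change_in_stops(400, 100))  # -2.0 -- two stops down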

A proper exposure requires all three of these to be in balance — not letting in too much or too little light through the lens aperture, not keeping the shutter open too long or too briefly, and not using a medium that is over-sensitive or under-sensitive. If you want to change one setting, you must change one or both of the other settings to keep it all in balance. For example, opening up the lens aperture lets in more light, and must be compensated for by a change that limits exposure — a faster closing of the shutter, and/or using a medium that is less light-sensitive.
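As a rough illustration of that balancing act, the toy model below (invented numbers and a deliberately simplified formula, not anything out of a metering manual) treats relative exposure as the product of light admitted, shutter time, and sensitivity, so a one-stop wider aperture cancels out against a shutter that closes twice as fast.

    def relative_exposure(aperture_stops_open, shutter_seconds, iso):
        """Toy model: each +1 stop of aperture doubles the admitted light;
        exposure ~ light admitted * time the shutter stays open * sensitivity."""
        return (2 ** aperture_stops_open) * shutter_seconds * iso

    base = relative_exposure(0, 1/125, 100)
    compensated = relative_exposure(1, 1/250, 100)  # one stop more light, one stop less time
    print(base, compensated)  # the two exposures come out the same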

Between digital and film, there are no major differences in two of those factors. The lenses can be opened up and closed down to the same degree, whether they are attached to camera bodies meant for one format or the other. And the shutter technology follows the same principles, whether it is opening up in front of a digital or film recording medium. (Digital cameras may offer slightly faster maximum shutter speeds because they are more recent and incorporate improvements in shutter technology, not because of digital properties per se.)

However, the two formats could not be more different regarding light-sensitivity of the recording medium.

Film cameras use rolls of film, which are loaded into and out of the camera on a regular basis. Load a roll, take however-many pictures, then unload it and send it off to the lab for development. The next set of pictures will require a new roll to be loaded. Digital cameras have a light-sensitive digital sensor which sends its readings to a memory card for later development and archiving. The sensor is hardwired into the camera body, while the memory card is removable.

Thus, no matter how many pictures you take with a digital camera, it is always the exact same light-sensitive piece of material that captures the visual information. With a film camera, every image is made on a new frame of film.

A digital sensor is like an Etch-a-Sketch that is wiped clean after each image is made, and used over and over again, while frames and rolls of film are like sheets of sketching paper that are never erased to be re-used for future drawings. The digital Etch-a-Sketch is just hooked up to a separate medium for storing its images, i.e. memory cards. Frames of film are both an image-capturing and an image-storage medium wrapped up into one.

Whether the light-sensitive material is always fresh or fixed once and for all has dramatic consequences for how it can be made more or less reactive to light — the third crucial element of proper exposure.

Film manufacturers can make a roll of film more reactive to light by making the light-sensitive silver halide crystals larger, and less reactive by making the crystals smaller. Hence slow films produce fine grain, and fast films large grain. What's so great is that you can choose which variety of film you want to use for any given occasion. If you're worried about too much light (outdoors on a sunny summer afternoon), you can load a slowly reacting film. If you're worried about not getting enough light (indoors in the evening), you can load a fast reacting film.

It's like buying different types of sketching paper depending on how much response you want there to be to the pencil lead — smooth and frictionless or bumpy and movement-dampening. Depending on the purpose, you're able to buy sketchpads of either type.

What was so bad about the good old way? The complaints boil down to:

"Ugh, sooo inconvenient to be STUCK WITH a given light sensitivity for the ENTIRE ROLL of film, unable to change the sensitivity frame-by-frame. What if I want to shoot half a roll indoors, and the other half outdoors?"

Well, you can just buy and carry two rolls of film instead of one — not much more expensive, and not much more to be lugging around. And that's only if you couldn't compensate for changes in location through the other two variables of aperture size and shutter speed. For the most part, these were not big problems in the film days, and served only as spastic rationalizations for why we absolutely need to shift to a medium that can alter the light-sensitivity variable on a frame-by-frame basis, just as aperture size and shutter speed can be.

That was the promise of digital sensors, which turns out to be a fraud that the overly eager majority have swallowed whole, while enriching the fraudsters handsomely.

Digital cameras do offer a means for making the image look as though it had been captured by a material that was more sensitive or less sensitive to light, and this variable can be changed on a frame-by-frame basis. But unlike film rolls that may have larger or smaller light-sensitive crystals, the photodiodes on the digital sensor have only one level of sensitivity, inherent to the material it is made from.

Because this sensitivity is baked into the materials, it certainly cannot be altered by the user, let alone on a frame-by-frame basis. And because the sensor is not removable, the user also has no recourse to swap it out for another with a different level of sensitivity.

How then do digital cameras attempt to re-create the many degrees of sensitivity that film offers? They choose a "native" sensitivity level for the photodiodes, which can never be changed, but whose electronic output signal can be amplified or dampened to mimic being more or less sensitive in the first place. In practice, they set the native (i.e. sole) sensitivity to be low, and amplify the signal to reach higher degrees, because dampening a highly sensitive "native" level leads to even lower quality.

Most digital cameras have a native (sole) sensitivity of ISO 100 or 160, meant to evoke the slowly reacting less sensitive kinds of film, and allow you to amplify that signal frame-by-frame, say to ISO 800, 3200, and beyond. But remember: it is never changing the "ISO" or sensitivity of the light-reactive material in the sensor, only amplifying its output signal to the memory card.
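To make the distinction concrete, here is a toy simulation (invented noise figures and a deliberately crude model of the readout chain, not how any particular camera actually works): the photodiode always responds at its one fixed sensitivity, and the ISO dial only multiplies whatever it recorded, noise included.

    import random

    NATIVE_ISO = 100  # the sensor's single, fixed sensitivity in this toy model

    def capture_pixel(photons, iso_setting):
        """The sensor always reacts the same way to light; the ISO setting
        only scales the readout afterwards, amplifying noise along with signal."""
        shot_noise = random.gauss(0, photons ** 0.5)  # noise inherent to the light itself
        read_noise = random.gauss(0, 2.0)             # noise added by the electronics
        recorded = photons + shot_noise + read_noise  # what the photodiode actually registered
        gain = iso_setting / NATIVE_ISO               # "ISO 3200" = amplify by 32
        return recorded * gain

    random.seed(1)
    dim_scene = 20  # few photons reach the sensor, e.g. indoors at night
    print(capture_pixel(dim_scene, 100))   # faithful but dark
    print(capture_pixel(dim_scene, 3200))  # brighter, but the noise got multiplied by 32 too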

It is like always recording sound at a low volume, and then using a dial on an amplifier to make it louder for the final listening, rather than record at different volume levels in the initial stage. And we all know how high-quality our music sounds when it's cranked up to 11. It does not sound "the same only louder" — it is now corrupted by distortions.

We should expect nothing less from digital images whose "ISO" was dialed up far beyond the native (sole) sensitivity of 100 or 160.

Below are some online digital test shots taken with the lens cap fully in place, blocking out most light, with higher and higher settings for the faux-sensitivity ISO setting. Now, these images should have remained black or gray the whole way through. The only change that would have occurred if they were shot on more and more highly sensitive film material is a grainier texture, owing to the larger film crystals that make film more sensitive, and an increase in brightness, since what little light was sneaking in past the lens cap would have produced a stronger reaction.

And yet look at the outcome of a digital sensor trying to see in darkness:


Not only does the texture get grainier and the light level brighter when the output of the native (sole) sensitivity is amplified; there are now also obvious color distortions, with a harsh blue cast emerging at higher levels of sensor amplification.

What's worse is that different cameras may produce different kinds of color distortions, requiring photographers to run "noise tests" on each camera they use, rather than know beforehand what effects will be produced by changing some variable, independent of what particular camera they're using.

The test shots above were from a Canon camera. Here's another set from a Pentax, showing a different pattern of color distortions.


Now it's red instead of blue that emerges at higher levels of amplification. Red and blue are at opposite ends of the color spectrum, so that shooting a digital camera without test shots is like ordering a pizza, and maybe it'll show up vegetarian and maybe it'll show up meat lover's. Unpredictable obstacles — just what a craft needs more of.

These distortions can be manipulated in Photoshop back toward normal-ish, but now you've added an obligatory extra layer of corrections in "post" just because you want to be able to fiddle with light-sensitivity frame-by-frame, which you're not really doing anyway. Convenience proves elusive yet again.

So, if amplification of the native (sole) light sensitivity is not like using film rolls of different sensitivities, what is it like? As it turns out, it is almost exactly like a treatment from the film era called push-processing, which was a last-ditch rescue effort in the developing stage after shooting within severe limitations in the capturing stage.

Suppose you were shooting on film, and your only available rolls were of sensitivity ISO 100, which is a slowly reacting film best suited for outdoors in sunlight. Suppose you wanted to shoot an indoor or night-time scene, which might call for faster reacting film, say ISO 400. Could it still be done with such low-sensitivity film? You decide to shoot in the evening with a slow film, effectively under-exposing your film by 2 stops, worried the whole time that the images are going to come back way too dark.

Lab technicians to the rescue! ... kind of. If you let them know you under-exposed your whole roll of film by 2 stops, they can compensate by letting your film soak in the chemical developing bath for longer than normal, which allows more of those darkened details to turn brighter. (The film starts out rather dark, and the developing bath reveals areas of brightness over time.) Taking 100 film and trying to make it look as sensitive as 400 film is "pushing" its development by 2 stops.
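For the arithmetic of that rescue job, a minimal sketch (the function name and the warning are mine, added for illustration; the two-stop ceiling is the guidebook advice cited further down):

    from math import log2

    def stops_of_push(film_iso, exposed_as_iso):
        """How far the lab must over-develop film that was exposed as if it were
        faster than it really is (ISO 100 film shot as if it were ISO 400 = 2 stops)."""
        return log2(exposed_as_iso / film_iso)

    push = stops_of_push(100, 400)
    print(push)  # 2.0 -- right at the limit guidebooks advise not to exceed
    if push > 2:
        print("expect serious contrast, grain, and color problems")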

But if that were all there were to it, nobody would've bothered using films of different sensitivities in the capturing stage — they would've let the lab techs worry about that in the developing stage. The costs of push-processing are various reductions in image quality, which Kodak's webpage on the topic summarizes in this way (click the link for fuller detail):

Push processing is not recommended as a means to increase photographic speed. Push processing produces contrast mismatches notably in the red and green sensitive layers (red most) compared to the blue. This produces reddish-yellow highlights, and cyan-blue shadows. Push processing also produces significant increases in film granularity. Push processing combined with under exposure produces a net loss in photographic speed, higher contrast, smoky shadows, yellow highlights and grainy images, with possible slight losses in sharpness.

Not a bad description of the signature elements of the digital look, is it? Blue shadows are exactly what the Canon test shots showed earlier.

Interestingly, they note that although push-processing produces less sharp images, they may subjectively appear to be normally sharp, given the increase in contrast. Sure, if a subject is wearing a normal red shirt and normal blue jeans, and you crank up the contrast parameter, the picture looks more defined — ultra-red juxtaposed against ultra-blue. But we're only fooling ourselves. Sharpness means how clear and crisp the details are, and push-processing and its obligatory counterpart in the digital world are actually losing details, while distracting us with more strongly contrasting colors.

Remember, this is what a digital camera is doing each time it takes a picture outside of its native (sole) sensitivity level of 100 or 160, i.e. when you shoot indoors, at night, or on cloudy days. In the digital world, every image is immediately rushed into emergency surgery.

Is there a way to compare side-by-side a film image that was processed both normally and with push-processing? Unfortunately, no, since developing the negative image from the latent image on the film cannot be undone, and then done a different way. I suppose you could take a shot of the same scene, with two identical cameras and two identical rolls of film, but with one camera set to the true sensitivity and the other set inaccurately, then develop the normal one normally and the under-exposed one with push-processing. That sounds like a bit too much just to make a perfect textbook comparison of normal vs. push-processed images, and I couldn't find any examples online.

But there are examples of film that has been push-processed. Although we can't compare them side-by-side with normally developed versions of the same film frame, at least we can pick up on some of the typical traits that push-processing introduces. Below is an example from this series at a photographer's website. The film is ISO 400, but was push-processed to look like ISO 3200. That is 3 stops of pushing, whereas Kodak and other photography guidebooks advise never pushing past 2 stops of over-development.


It's disturbing how digital this film photograph looks. It looks like someone opened a digital image in Photoshop and cranked up the contrast and saturation settings. Look for details on the man's shirt and pants, like folds and creases. They're hard to make out because push-processing renders the image less sharp. But we're distracted by how striking the contrast is between these overly rich yellows and reds and the cooler blues. It looks more defined, but is poorer in detail.

It's almost like a child drew an outline of pants and hit "fill" with yellow on MS Paint. Very little detail. The yellow pole also looks like a crude "fill" job. Even worse, these pictures were shot on medium-format film, which has a far higher resolution than the 35mm film we're all used to. It ought to have detail so fine that you could blow it up into a poster or banner without blurring of the details.

We also see the familiar blown-out sky from digital Earth, rather than the blue one we know and love. Other white areas look like intense spotlights, too. I can't tell if they have the red-yellow tint to them as Kodak warned, although they do look kind of bright pale yellow. There aren't many dark shadows to tell if they have the bluish tint warned about, although the asphalt on the road looks blue-gray. The color distortions might be more obvious if we had the same scene captured and developed normally, for comparison.

The ultra-contrasty, overly saturated, harshly blown-out bright areas are hard to miss, though. And they look like something straight from a digital camera plus Photoshop settings dialed up to 11.

You might object that, hey, this guy knows what he's doing, and he's using push-processing to give the pictures a flamingly dramatic style (he's gay). That misses the point: these kinds of distortions and reductions in image quality are built in with digital photography's light-sensitivity technology. They aren't going to be chosen purposefully for some intended artistic effect. They're just going to make ordinary people's pictures look cartoony and crappy because they don't know about them before buying a digital camera, and won't mind anyway because digital is all about convenience over quality.

Even Hollywood movies shot by pros will be subject to these digital distortions, although they'll have much better help cleaning them up in post — for a price. Good luck scrubbing your digital images that clean on your own with Photoshop.

In the end, is digital really more convenient, all things considered? All of these distortions require laborious and expensive corrections, which may well offset the efficiency gains that were hoped for at the beginning. Or those corrections simply won't be done, and greater convenience will have been traded off against poorer quality. Either way, one of the fundamental promises of digital photography turns out to be a big fat lie.

September 28, 2014

Status contests and the shift from involuntary to voluntary identities

Transplants claiming to be New Yorkers. Whites trying to pass themselves off as blacks. Men who insist that they're really women. And denizens of the 21st century who dress up as though they belonged to the fedora-sporting Forties.

These and many other related phenomena have been noticed and detailed on their own, but as far as I'm aware, there has been no unified treatment of them, for either description or explanation.

What the phenomena have in common is a shift toward all forms of group membership being determined by deliberate choices to "identify" or affiliate with the group, rather than having belonged to that group for reasons beyond your control, say by being born into it.

Sociologists refer to "ascribed" status, which you are born into, raised in, or otherwise given involuntarily, vs. "achieved" status that you gain through your own doing. Membership in a race is ascribed, while membership in a fraternity is achieved. Being a child of divorce is ascribed, being a divorced adult is achieved.

Some forms of status could hypothetically go either way. Does membership in a regional culture stem from your birth, upbringing, and extended family roots? Or can you choose to identify with a region that you did not spend your formative years in, but have moved into as an adult?

When regional membership is ascribed, all that matters is birth, upbringing, and family roots — even if you have spent most of your adult life in a region that you were not raised in, you are still a guest within a host or adoptive culture. When membership is achieved, you're perfectly allowed to claim the regional identity of your adoptive place, after a suitable series of rites of passage, which may be tacit or explicit.

For example, when you first move to New York City, how long of a residence does it take until you're "really" a New Yorker? How numbed to the odor of piss does your nose have to become (in the old days), or how long do you have to use a monthly subway card rather than touristy tokens (in the new days), before you have gone through the trials and rituals that earn you admission into the club of "real" New Yorkers?

Notice that when status is achieved, the aspiring joiners will appeal to as many criteria as they can think up rationalizations for. Ascribed status constrains the debate. Sure, folks may still bicker about how many generations back the person's roots need to go, or how many kin they must have who are also New Yorkers, but that is still limited to just two criteria.

Thus, ascribed status largely speaks for itself, while achieved status encourages rattling off one after another qualification on the self-promoter's endless list. Status contests are limited in scope when status is ascribed — were you born here or not? — but turn into ever escalating games of one-upmanship when it is achieved.

This suggests that in status-striving times, group membership will shift toward being more and more achieved, while in accommodating and egalitarian times it will shift toward being ascribed.

The prevailing norms in status-striving times are me-first and laissez-faire — who's to stop me from claiming a New Yorker identity if I work hard enough at it? If you work hard enough for it, you've earned it. Rags-to-riches and rugged individualism are other staples of the zeitgeist in status-striving times.

In accommodating times, the norms favor regulating interactions so that conflict is minimized. If we let one guy pursue New Yorker status as though it could be an accomplishment, then we open the floodgates to thousands of other combatants in a spiraling status war. Instead, individuals will attribute their various group memberships to the circumstances of their birth and upbringing — beyond their own control, and therefore pointless to change, and change, and change, according to whatever fashion battle they're engaged in at the moment.

In fact, you might as well make do with those circumstances and take a little pride in them. Upstate New York, the Ohio River Valley, Michigan — all these places used to carry a certain level of regional pride, no matter whether the person stayed or moved somewhere else. Now they are more likely to identify with the metro area that they have chosen to move into, probably embarrassed about where they came from.

Returning to the examples at the beginning of this post, let's spell out just how extreme our status contests have become. They have moved far beyond groups whose membership could be either ascribed or achieved, to the point where ascribed status should be indisputable, but where strivers are waging wars to make it achieved. They do not have to make up a majority of the status contests of our age — the fact that they are even happening at all proves how psychotic the climate has gotten.

Sex is entirely ascribed, yet the tranny movement asserts that men can identify as women or vice versa, and that the rest of society ought to assign them the sex status that the trannies insist on, rather than it being ascribed at birth. Tranny psychos are so status-striving that they whore for attention more than the others in the feminist and women's groups, and are always ready to start rattling off the top 100 reasons why I'm just as much of a woman as you (or more). They also viciously compete against each other to see who's unlocked the most achievements in the sim game of pretending to be a woman.

Generational membership is also determined by birth, yet we see more and more people cosplaying and LARP-ing as though they belonged to another generation. And not one that's just on the other side of their own, where honest disagreements might be had, but a generation whose formative years unfolded long before the person was even born.

Gen X-ers pretending to hail from the Midcentury, Millennials pretending to belong to the Boho vintage-y Seventies, not to mention legions of geeks placing themselves in the old timey Victorian era — steampunk conventions, going to night clubs wearing black corsets or black tailcoats, and so on. These are not occasional costumes worn as a fun break from routine, but part of their ongoing identity which they take (and craft) very seriously.

Similar widespread movements involve members of one race pretending to belong to another. OK, so they don't actually have the DNA test to back it up — but are we seriously going to rely only on bloodlines? The wigger is not an "honorary black," but someone who acts as though they were black, merely by aping real blacks. In the '90s, the term was a portmanteau of "white nigger," alluding to the lily-white suburban area that this dork actually came from. Now that races other than whites pretend to be black, it means "wannabe nigger," including East Asians and Indians who act that way.

Blacks have tried to push back against this attempt to make membership in the black race (or ethnic group) achieved rather than ascribed, but that hasn't stopped the wigger phenomenon from growing. It's just like women feminists trying to push back against mentally ill trannies trying to make membership in the female sex achieved rather than ascribed. Such efforts are ultimately doomed in a laissez-faire climate because they are seen as pleas for special or unfair treatment — to carve out race, or sex, as a domain where status is ascribed. But if status is to be achieved in so many other areas, it will play out that way for race and sex too, no matter how ridiculous it feels to normal people.

What were the counterparts of these extreme forms during the previous period of rising competitiveness and inequality, the Victorian era and the turn of the 20th century?

Fin-de-siecle England was not only plagued by out-of-the-closet faggots (search Google Images for "gay Victorian photographs" — safe for work, they just show couples sitting together embracing). Trannies also had their own subculture and nightlife haunts that were raided by police.

Then there were Orientalists who LARP-ed as members of an exotic race or ethnic group, one that they were not rooted in one bit. As with today's wiggers, they did not merely dress up every once in a while for fun, or borrow certain design elements to spice up their otherwise native style. They were constantly leveling up their identity as The Other, as close to 100% max stats as they could manage. They always dressed in the exotic style, and tried to re-create a foreign architectural style on English soil.

Finally there were various strains of anti-modernists who affiliated not with somewhat earlier generations or zeitgeists, but all the way back to the Gothic and Medieval periods from their nation's history. The most well known group was the Pre-Raphaelite Brotherhood of painters. They were not merely seeking contact with the past, or Luddites who hated where all this new-fangled technology was taking society. They chose to base their very identity on affiliation with the Medieval period.

In our second Gilded Age, everything old is new again.

September 25, 2014

Convenience as neglect, disloyalty, and desecration

A recent comment about digital cameras marveled at how remarkable technology is, that it has given us such cheaper, faster, and generally more convenient ways to take pictures. But that has come at a cost to image quality and to the emotional significance or resonance of our pictures, which has devolved in the digital age. This trade-off between convenience and some kind of quality is general, not only regarding cameras, so it's worth looking into.

These days the principle of convenience is so worshiped by so many people in so many contexts that we can hardly recognize how strange it is. From Walmart to Amazon to Redbox to Facebook, convenience has proven to be the most important value to 21st-century man (or more accurately, guy).

Yet convenience resonates with only one of the "moral foundations" in the Haidtian framework, namely liberty: freeing up the individual's time, money, and effort to pursue whatever they wish.

On all other foundations, it offends rather than pleases our moral sensibilities. In matters of care and harm, it manifests as neglect; in the domain of fairness, as rule-bending and corner-cutting; in authority, as abdication at the top and shirking at the bottom; in group loyalty, as opting out; and in purity, as debasement.

Convenience is thus a libertarian rather than liberal or conservative value, and its pervasiveness reveals the callous laissez-faire norm that governs our neo-Dickensian Gilded Age v.2.0.

In politics it appeals mostly to so-called moderates or independents, who shop around for whichever candidate can offer them the most convenient quid pro quo if elected to office. Likewise in religion it appeals to the denominationally unaffiliated, who shop around for the most convenient arrangement of investment from the pew-filler and reward in self-fulfillment. Longer-term concerns about party or church stability, or indeed stewardship of anything outside of the individual's little existence, are utterly foreign to the convenience shopper.

As mundane as it sounds, there could hardly be a sharper ideological fault-line to wage a battle over than convenience, which prizes puny gains to the individual over substantial blows to group cohesion, whether it be the family, community, workplace, or nation. Or put the other way around, tolerating puny costs to the individual in order to hold these groups together is what makes us the successful social species that we are.

It is tolerance of inconveniences which compels us to care for the sick when we are healthy, to play fair, to carry out our duties to superiors and subordinates alike, to honor the wishes of the community, and to preserve purity from adulteration.

September 23, 2014

Transplant governors

Having studied the rooted vs. rootless connection that Senators have to the states they represent, let's turn now to governors.

As before, "rooted" means that they graduated from high school in the state that they're in charge of. If we had better data, we could count how many years from, say, age 5 to 20 they spent living in the state, but what you can find online isn't that fine-grained. High school graduation is the most convenient milestone for our purposes.

An appendix below contains the full list of where each governor was living across several milestones -- birth, high school, college, and any advanced degrees they took.

On to the findings. The following states have transplant governors: Hawaii, Arizona, Colorado, New Mexico, North Dakota, Ohio, Florida, Maryland, Virginia, New Hampshire, and Vermont.

Transplants account for 11 of our 50 governors. That may appear to be a lower rate than the 29 of 100 Senators, but this difference is not statistically significant. Carpetbagging behavior appears to be independent of which branch of the government the politician pursues their career in.
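
For anyone who wants to check that figure, here is a minimal sketch in Python (assuming scipy is installed); the 11-of-50 and 29-of-100 counts come straight from this post and the earlier Senate post, and the tests are ordinary contingency-table tests, nothing specific to this study.

# Transplant vs. rooted, governors vs. Senators (counts from these posts)
from scipy.stats import chi2_contingency, fisher_exact

table = [[11, 39],   # governors: 11 transplants, 39 rooted
         [29, 71]]   # Senators:  29 transplants, 71 rooted

chi2, p_chi2, dof, expected = chi2_contingency(table)
oddsratio, p_fisher = fisher_exact(table)
print(round(p_chi2, 2), round(p_fisher, 2))
# Both p-values land well above 0.05, so the 22% vs. 29% gap is the kind
# of difference you'd expect from sampling noise alone.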

Rootlessness is independent of party, a result also found among Senators. However, as we saw with Senators, the Republican transplants headed off for states where they would not disrupt the partisan status quo, such as Arizona or North Dakota, whereas the Democrat transplants are part of the ongoing disruption of "swing states" that used to be red but are turning blue, such as Colorado and Virginia. It would be a mistake to blame the politicians themselves: they only represent the will of the voters, a rising share of whom are carpetbaggers themselves, having fled expensive blue states to gentrify red states, where the competition for status is less saturated.
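
For concreteness, here is a rough tally by party, read off the appendix table below (the 29/21 party split is just the sum over that same table); a minimal sketch, not part of the original write-up.

# Transplant governors by party, taken from the appendix table below
transplants = {"AZ": "R", "CO": "D", "FL": "R", "HI": "D", "MD": "D", "ND": "R",
               "NH": "D", "NM": "R", "OH": "R", "VA": "D", "VT": "D"}
party_totals = {"R": 29, "D": 21}   # all 50 governors, summed from the same table

for party in ("R", "D"):
    n = sum(1 for p in transplants.values() if p == party)
    print(party, n, "of", party_totals[party])
# R: 5 of 29 (about 17%), D: 6 of 21 (about 29%) -- too small a gap, on too
# few cases, to treat rootlessness as a partisan trait.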

The main regions affected by transplants are rural New England, where a brain drain has left the local hold-outs willing to hire outsiders to preserve their fading regional culture; and the Mountain states, where boomtown growth has brought in truckloads of transplant citizens, and where a long history of frontier rootlessness has left the region vacant of an entrenched elite that aspiring office-holders would have to overcome (if political) or kow-tow to (if economic).

But the worst offenders did not even live in their state's general region for any of their four milestones -- Hickenlooper in Colorado, who is a total East Coaster, and Scott in Florida, who is from the Midwest / southern Plains.

Brewer in Arizona has only a weak connection -- a locally earned technical certificate, not four years of college, after moving from California. Ditto for Abercrombie in Hawaii, who left behind lifelong roots in New York to take an advanced degree locally. McAuliffe in Virginia isn't exactly from around the place either -- born and raised in upstate New York, college and after in DC.

Other transplants are not such flagrant outsiders. O'Malley in Maryland is from Northwest DC, just over the border; Martinez in New Mexico is from El Paso, just over the Texas border; Kasich in Ohio is from the Pittsburgh metro area, just over the border; and Shumlin in Vermont attended prep school just over the border in northern Massachusetts. While not from next door, some others are not from too far away either: Dalrymple in North Dakota is from Minneapolis, and Hassan in New Hampshire is from Boston.

The most resistant region is the Deep South, a pattern we saw in the legislative branch earlier. Their historical memory of the original carpetbaggers during the original Gilded Age has made their immune system more robust this time around.

The following states have governors who were born, raised, and educated entirely locally: Alabama, Arkansas, Georgia, Mississippi, Missouri, South Carolina, Iowa, Indiana, Kentucky, West Virginia, Kansas, Texas, Idaho, Utah, Michigan, Maine, and New York.

Most of these are part of "flyover country," but as with Senators, competition is so stiff in the power centers of New York and Texas that local roots are one of the few things that can decide a contest among competitors who all have impressive credentials and sociopathic ambition. Illinois (i.e. Chicago) is a tough nut to crack, too: Quinn only left the state for college. Similarly in California, Brown only left the state for his law degree. And the closer you get to the historical Establishment in New York, the more you'll need every point of local roots you can claim to win out over identically impressive credentials.

That's about all the major patterns I see; let us know if you spot any others.

Please also chime in if you have advice about how to continue this study for the judicial branch. I figure the state Supreme Court is the place to look, but that makes for 5 to 9 judges per state, which is too many for a casually interested person. Counting the chief justice alone would make the data manageable, but I'm not sure how meaningful that position is across states. Is it the top of the top, or is it like the chair of a college department that rotates through people who don't want it?

So stay tuned, though it may take awhile before I can analyze what's going on with the rootedness of judges.

Appendix: Rootedness of American Governors, 2014

This table lists where each governor was born, graduated high school, graduated college, and took an advanced degree; "--" marks a milestone with no corresponding degree. The final column marks whether or not they're a transplant -- 1 for yes, blank for no. The table is sorted first by transplant status, and then alphabetically by state.


state | governor | party | birth | hs grad | uni grad | adv grad | transplant
AZ | Jan Brewer | R | CA | CA | AZ | -- | 1
CO | John Hickenlooper | D | PA | PA | CT | -- | 1
FL | Rick Scott | R | IL | MO | MO | TX | 1
HI | Neil Abercrombie | D | NY | NY | NY | HI | 1
MD | Martin O'Malley | D | DC | DC | DC | MD | 1
ND | Jack Dalrymple | R | MN | MN | CT | -- | 1
NH | Maggie Hassan | D | MA | MA | RI | MA | 1
NM | Susana Martinez | R | TX | TX | TX | OK | 1
OH | John Kasich | R | PA | PA | OH | -- | 1
VA | Terry McAuliffe | D | NY | NY | DC | DC | 1
VT | Peter Shumlin | D | VT | MA | CT | -- | 1
AK | Sean Parnell | R | CA | AK | WA | WA |
AL | Robert Bentley | R | AL | AL | AL | AL |
AR | Mike Beebe | D | AR | AR | AR | AR |
CA | Jerry Brown | D | CA | CA | CA | CT |
CT | Dannel Malloy | D | CT | CT | MA | MA |
DE | Jack Markell | D | DE | DE | RI | IL |
GA | Nathan Deal | R | GA | GA | GA | GA |
IA | Terry Branstad | R | IA | IA | IA | IA |
ID | Butch Otter | R | ID | ID | ID | -- |
IL | Pat Quinn | D | IL | IL | DC | IL |
IN | Mike Pence | R | IN | IN | IN | IN |
KS | Sam Brownback | R | KS | KS | KS | KS |
KY | Steve Beshear | D | KY | KY | KY | -- |
LA | Bobby Jindal | R | LA | LA | RI | England |
MA | Deval Patrick | D | IL | MA | MA | MA |
ME | Paul LePage | R | ME | ME | ME | ME |
MI | Rick Snyder | R | MI | MI | MI | MI |
MN | Mark Dayton | D | MN | MN | CT | -- |
MO | Jay Nixon | D | MO | MO | MO | MO |
MS | Phil Bryant | R | MS | MS | MS | MS |
MT | Steve Bullock | D | MT | MT | CA | NY |
NC | Pat McCrory | R | OH | NC | NC | -- |
NE | Dave Heineman | R | NE | NE | NY | -- |
NJ | Chris Christie | R | NJ | NJ | DE | NJ |
NV | Brian Sandoval | R | CA | NV | NV | OH |
NY | Andrew Cuomo | D | NY | NY | NY | NY |
OK | Mary Fallin | R | MO | OK | OK | -- |
OR | John Kitzhaber | D | WA | OR | NH | OR |
PA | Tom Corbett | R | PA | PA | PA | TX |
RI | Lincoln Chafee | D | RI | MA | RI | -- |
SC | Nikki Haley | R | SC | SC | SC | -- |
SD | Dennis Daugaard | R | SD | SD | SD | IL |
TN | Bill Haslam | R | TN | TN | GA | -- |
TX | Rick Perry | R | TX | TX | TX | -- |
UT | Gary Herbert | R | UT | UT | -- | -- |
WA | Jay Inslee | D | WA | WA | WA | OR |
WI | Scott Walker | R | CO | WI | WI | -- |
WV | Earl Ray Tomblin | D | WV | WV | WV | WV |
WY | Matt Mead | R | WY | WY | TX | WY |

September 22, 2014

Obvious vs. mysterious movie posters, with a look at Interstellar

Movie posters attract the audience's attention, shape their expectations about what the movie will be like, and hopefully leave them excited to see it. They contribute to our overall movie-going experience, for better or worse. Although their role is small, it is there, so they had better look good if we want the most enjoyable experience from the movies.

Today's posters couldn't look more dull if they tried (and they appear to). Typically the main characters are shown in some pose with some expression on their faces, while divorced from any kind of action or interaction that would tell us what the plot is all about. It's like they're trying to sell these characters to children shopping for action figures, or to teenagers browsing through profile pictures to see who they might want to be friends with.

Even pure character studies need some kind of plot structure to motivate the behaviors that reveal their character. Contemporary posters that lack context just come off as a group of dull head-shots that tell us nothing and therefore fail to lure us into the story. All we know is what kind of people we're going to be tagging along with — whatever they're up to, wherever they're going, for whatever purpose.

Since we'll be looking at science-fiction movies, here's a representative example, from Armageddon, although rom-coms and Oscar bait don't look too different.


Interstellar, the upcoming Christopher Nolan movie, has some posters cast from this mold, but I was surprised to also see this one, with something beyond mere head shots:


This looks pleasantly familiar. Brilliant rays of light reaching planet Earth from far off in outer space, in the dead of night to heighten the bright-dark contrast, and with the humble world down below unprepared for its imminent encounter with the Sublime. These elements formed the framework for just about every movie poster from the '80s that involved space travel and interaction between us and the outsiders. Have a look:




The lack of a celebrity headshot orientation allowed audiences to enter the theaters with an open mind about who they were going to meet, having made the trip more out of curiosity about the story.

The chiaroscuro style adds instant mystery, since nighttime scenes tend to be, well, really dark, and we're left wondering what the source of this brilliant light is, and what function it serves — to try to communicate with us, to guide a ship, to announce their arrival? Or maybe it's the outcome of an intense natural phenomenon that we've never seen before, not having paid much attention to what goes on in outer space.

That's all we need — a little visual mystery to lure us into wanting to experience the story. Not visual obviousness — here are the main characters wearing kabuki masks just so you're clear on what they're all like (and don't bother asking what the story is about).

Though seemingly minor players in the overall movie experience, posters can leave strong impressions, so we ought to demand more from the studios when they release these expectation-shaping ads. You never know, sometimes a horrible movie can become memorable simply for its bitchin' poster:

September 21, 2014

Part 2 of Digital creep, even after shooting on film: The decline of optical intermediate stages

Having examined digital creep in the final viewing stage, and during the initial capture stage here and here, let's take a good long look into the rise of digital at the intermediate stage, and what changes it has brought about in the final image. In discussions about the adoption of digital, it's rare to hear about digital creep into the stages after the initial capture but before the final display. Usually you only hear about shooting on digital sensors vs. film, or viewing the output of digital vs. film projectors. I'll be going into considerable detail here to try to make up for the general lack of attention given to the matter.

I'll stipulate at the outset that digital intermediate stages make it more convenient to make corrections (brightness, color, contrast) and to add special effects. But corrections and special effects were being made long before the digital age, in analog fashion. The question therefore is how the introduction of this digital link in the chain has affected the overall look-and-feel of the medium, which previously was fully analog and which now is at best a mixture of analog and digital stages (and at worst digital all the way down).

Let's assume that you're going to both shoot on film and view the final product on film (at the theaters) or on photographic paper (for still camera hobbyists). There's still an intermediate stage of the process where the transition to digital has been all but completed, and where remaining purely analog is nearly impossible — making a positive image from the developed negative (at which time corrections or special effects may also be made).

When you expose film to light, the light-sensitive silver halide crystals react and capture only a latent image. Then the processing lab gives it a chemical bath that develops that latent image into a negative — something you can actually see, but without color, and with dark and bright areas switched around. The last bath undoes the light-sensitivity of the film and fixes the image on the negative. That's why you can hold your negatives up to light and they won't start forming a new image.

This is the absolute minimum of non-digital technology that is used when shooting on film. Where does the process go after that, turning the negative into a positive, with full color and where darks are dark and brights are bright?

Today, virtually every lab for both still and motion pictures will digitally scan the film negatives, and continue the process from there. Manipulating light levels, color, contrast, etc., will be done in a software program on the digital scans of the negatives. After that, if prints are made, they will be of these digital scans of the negative. (These scans may have been digitally corrected if you paid the lab to do it, rather than do so with your own computer program.)

Photographic paper is light sensitive, unlike ordinary printer paper that is written on with ink, so how do they get the digital image onto a paper that reacts to light? The computer is hooked up to a LightJet-style printer, in which lasers take the brightness and color information from the digital image and reproduce it by shining onto the light-sensitive paper. Then the photographic paper is given the same baths as in the old days, to develop the latent image on the paper and fix it into a positive print for final viewing.

For motion pictures, some form of a film recorder is used to transfer digital information onto light-sensitive film. As in still photography, the film negatives are digitally scanned. Then these digital images are digitally corrected, digital effects are added, and the result is displayed on a monitor. A film camera is then aimed at the monitor and captures each of these digital images in sequence, making a film copy of a digital stream-of-images. Now a film print can be sent off to be projected by optical film projectors in theaters — the kind where a lightbulb shines behind the print to render the image visible, and an enlarging lens blows it up to the size of the big screen.

The key point is that both still and movie photography make digital scans of the film negatives, and then take it from there, whether the ultimate display format is digital (CD, hard drive) or analog (prints).

How did it work in the old days before digital scanning? For still photography, the negative was placed in an optical enlarger, which works like a film projector. A lightbulb above sends light beams down through the negative, which then travel through an enlarging lens (to blow up that dinky little negative into, say, a 4" x 6" size), and which finally strike light-sensitive paper at the bottom of the apparatus, where a latent image is formed (and then developed into a positive and fixed in place with a chemical bath).

One key difference from the method of digital scans and LightJet printers is that the very same beams of light both "pick up" the information in the negative and strike the light-sensitive paper. In the digital method there are two separate sources of light: the light beams in the scanner that "pick up" the information in the negative, and those that come from the printer's lasers that strike the light-sensitive paper. Computer software translates the findings from the team of beams in the scanning hardware, into instructions for the team of beams in the printing hardware.

We need at least one team of light beams to ultimately strike the light-sensitive paper and render the positive image. The question is, what is the source of their instructions? With optical enlargers, it is from a single unbroken path of light directly through the negative. With LightJet printers, it is indirectly from a copy of the negative — from a digitized scan of it.

Similar changes occurred in the motion picture world. Before digital scanning, the positive image was made from the film negative by contact printing, akin to the optical process used for stills. A light source sent beams directly through the negative and into a light-sensitive medium that was pressed tightly against the other side of the negative; the resulting latent image was then developed and fixed chemically to yield a final positive for viewing.

Unlike the set-up in the still photo lab, in the movie world the light beams did not pass through that much air (with distortions caused by whatever was in the air) or through an enlarging lens (enlargement took place in the projection booth). But the basic approach was the same: shine a single beam of light through the negative onto a light-sensitive material that would hold the final positive.

It's not so much a matter of how many layers of copies there are between the original and the final image, though. It's the nature of how the copies are made — purely analog, with light passing through film negatives (and perhaps air and a glass lens), or digitally from scanners and software.

What differences are there in the print when it comes from a digital scan of the negative rather than an analog projection through the negative?

Here comparisons are hard to make because we need to take the same developed film negative and run it down two separate paths to the final print — the analog way with an optical enlarger, and the digital way with scanners and LightJet printers. Optical enlargers are vanishingly rare these days, so it will be hard to carry out a fully analog process on a roll of film that was shot and developed today.

But what if someone had some old negatives and optical prints of those negatives lying around, and decided to have the negatives digitally scanned and make a new set of prints from these digital scans, following current practice? Then they could compare the prints from digital scans to the original prints from optical projection.

In this thread at photo.net, a commenter provides just such a comparison, shown below. Someone took an old set of negatives to have them scanned and printed at Walgreen's photo lab (the way all prints from film are made nowadays), and compared these to the original prints made from analog means 17 years earlier during the pre-digital age. The picture shows a person's slicked-back graying hair. Click to enlarge and see all the details.


We see the difficulty digital has in dealing with the extremes of the bright-to-dark spectrum. The limited range of light levels in digital was covered in the posts about the capture stage, linked at the top of this post.

At the dark end, notice how the left side of the hair shows fine gradation of darkness levels in the optical print, where only a small portion is deep-dark. This region looks more uniformly deep-dark in the print from digital scan. Ditto in the top-left area above the hair, where the optical print reveals a lot more detail on whatever that greenish thing is, while the print from digital scan smooshes all the various shades of dark into a single deep-dark value, and swamps out some of the green thing's details in darkness.

At the bright end, notice how uniformly ultra-bright the white hairs are in the print from digital scan, whereas the optical print shows a finer gradation of brightness levels.

So, not only at the capturing stage, but also when a digital scan is made of a developed film negative, the final print will show clipped highlights and lowlights, whereas a fully analog process would have yielded a more richly continuous range at the extremes of dark and bright. As a result, the print from scanning looks more harshly contrasting — one of the signature elements of the digital "look".
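
As a toy illustration of why a hard digital cutoff behaves this way (not a model of any particular scanner or film stock), here is a minimal numpy sketch: a film-like S-curve keeps some gradation out at the extremes, while scaling to 8 bits and clipping collapses the extremes into single values.

import numpy as np

# Scene brightness on an arbitrary linear scale, from deep shadow to bright highlight
scene = np.linspace(-2.0, 2.0, 11)

# Film-like response: a soft S-curve (tanh) still changes a little at both extremes
film = np.tanh(scene)

# Naive digital capture of the same range: scale to 8 bits and hard-clip the rest
digital = np.clip(scene * 160 + 128, 0, 255).astype(np.uint8)

print(np.round(film, 2))   # distinct values all the way out to the ends
print(digital)             # several shadow values pile up at 0, highlights at 255
# The piled-up values are the clipped highlights and lowlights: brightness levels
# that differ in the scene become one and the same value in the file.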

A separate color distortion is evident in the blown-up crop of the white hair, where the print from digital scan shows bluish blobs in what is supposed to be white or light gray hair. No such color artifacts are seen in the optical print.

Finally, notice how the print from digital scan renders the grain in the negative — the texture looks blockier and pixelated, and larger in scale. The film speed is ISO 100, which is fine-grained. The larger scale of the "grain" in the digital-derived print is a failure to preserve the fine and regular grain of the negative, a problem that the optical print did not have. Pixels on the scanner's sensor and grains in the film negative don't match up one-for-one, so we shouldn't expect a perfectly faithful rendition of fine film grain. But the result here is still pretty cruddy-looking.

While the print from digital looks more defined, it also looks more unnatural. Both aspects stem from the way that digital yields high-contrast images: even high-contrast films retain smoother gradations from one part of the spectrum to another.

You might object that the print from digital scan was probably rushed along by some random Walgreens employee whose main task is not digital scanning and correcting — or indeed anything related to visual media. But that's beside the point: we had high schoolers operating the lab at one-hour photo-mats back in the pre-digital days, yet those optical prints didn't look so crappy. The old analog process was more robust to the lab technician's lack of expertise, whereas the digital intermediate process is more fragile when the technician isn't so skilled.

This explains why digital looks less dull in Hollywood movies than in amateur photography. Hollywood hires teams of pros to work full-time at making digital look as good as it can. Don't expect that when you're operating the digital camera at the capture stage, or when you're doing the digital processing.

And even if you shoot on film and order prints, don't expect the digital intermediate stage to be handled by the local lab tech and the machines in the local lab the way they would be in the labs that serve Hollywood studios. Such elite services weren't needed in the analog / optical days (though that would've helped too), but now that there's a digital link in the chain, impressive results will require a more skilled technician for the digital scanning and correcting stage.

If you've been wondering why even movies shot on film (and displayed on film) don't look quite the way they used to, the digital intermediate stage is why. The final print is not the end result of a purely analog process. And if you've wondered why prints of film-captured images look different from prints of 20 years ago, that's why: the digital scanning of the negative introduces a non-analog step, with the effects seen in the comparison above.

Your pictures will still look better by capturing on film and making prints than going digital all the way. Just make sure to find someplace other than the drug store to have them processed and scanned before printing. There are still developing and printing labs for professionals, and they're happy to do jobs for hobbyists as well.

Is there still a place that does the fully analog optical printing process? Yes: Blue Moon Camera and Machine, located where else but in Portland. You can mail them your film, and they'll mail you back the prints. They get good ratings on Yelp, and it's not just mindless hipster enthusiasm for all things vintage.

If it were, they'd be fleecing the customer. But to develop and make prints from a 24-exposure roll of color negative film, you're only out $14.80, compared to about $20 everywhere else using the digital scanning method. There is a minimum $8 return shipping cost, so you'd have to send them several rolls at a time to spread that out into a reasonable per-roll shipping price. You can also send in already developed negatives (new or old) and have optical prints made from them.
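
To make the shipping math concrete, here is a quick sketch with a hypothetical batch of four rolls, using only the $14.80, $8, and roughly $20 figures quoted above:

# Rough per-roll cost, assuming you batch four rolls into one order
rolls = 4
blue_moon = 14.80 + 8.00 / rolls   # develop + print, plus a share of return shipping
drugstore = 20.00                  # approximate going rate with the digital-scan method
print(round(blue_moon, 2), drugstore)   # about 16.80 per roll vs. about 20.00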

I haven't used them yet, but I'm definitely going to give them a try. Who knows how long we'll have left to make purely analog pictures? I'd regret passing up the chance, especially given how simple and affordable it is.