When the current wave of white-collar crime was in its early stage during the 1980s, one of its most notorious figures — Michael Milken, the junk bond king — was scarcely 40 years old. Fast-forward 20 years, and the leader of one of the largest Ponzi schemes ever — Bernie Madoff — had passed 70.
Do these two examples suggest a graying of the white-collar criminal class? Where are all the enterprising mega-swindlers in their late 30s and early 40s nowadays?
Wikipedia has a category page that lists 100 American white-collar criminals. Strikingly, something like 80-90% of them are members of the Silent and Baby Boomer generations. Some may be minor figures in the huckster hall of shame, but most are infamous enough to have earned their spot on such a list.
The Silents and Boomers are not simply carrying on a tradition passed down from earlier generations. There are only two real examples born before 1925. Charles Keating was born in '23, but his crimes belonged to the savings and loan crisis of the late '80s — not to some earlier wave that the Silents and Boomers inherited. Only Louis Wolfson, born in 1912, disgraced himself before the current wave, back in the late '60s. (Sam Gilbert, born 1913, was not in finance.)
How about on the other side of the Silent/Boomer cohort — Gen X? I count only one who is American, worked in finance or Ponzi schemes, caused massive damage, and was born after 1964: Laura Pendergest-Holt (born '74). In Gen X's favor, even births from the first half of the '60s appear to be uncommon on the list.
Browsing other lists of the "top 10" worst white-collar criminals doesn't turn up many further examples beyond the Wikipedia list (Nicholas Cosmo, born '71, is one). And of course those other lists do include Silents and Boomers not already on Wikipedia's.
It seems that no matter whether the high-level crimes were committed during the '70s, '80s, '90s, or the 21st century, it has almost always been Silents and Boomers who were behind them — back then as upstarts, now as the Establishment. They were the original "Me Generation" of the 1970s, whose thirst for ever higher status installed dog-eat-dog morality in place of the Midcentury norms of making-do and reining-it-in, which kept individual ambitions from destroying societal cohesion.
It can come as no surprise that these generations have produced the worst offenders throughout the entirety of the white-collar crime wave that has been escalating for the past 30 to 40 years.
Fortunately, though, this bodes well for the future, since the Silents and Boomers aren't going to be around much longer. And their Gen X successors do not appear driven enough by greed to take up the mantle. If anything, their guiding purpose has been to expose hypocrisy rather than to rationalize dog-eat-dog morality. And the Millennials after them don't seem to have an ambitious bone in their bodies — not exactly the source of the Next Big Thing, but not the engineers of the next big Ponzi scheme either.
June 30, 2014
Historical analysis of horoscope advice to reveal the popular mood?
Horoscopes give the life advice that the readers want to hear. By shifting responsibility onto the stars, the reader can follow their own plans without feeling as though their motives were self-centered. Hey, it's what the star chart recommended!
But what readers want to do, and hence want to hear recommendations for, is not constant over time. The range of typical mindsets today is way different from that of earlier periods. I doubt there was much horoscope advice in 1974 nudging readers to get rich quick by joining a real estate bubble, although that wouldn't have been surprising advice in 2004. Romantic advice from 1984 would have been phrased to make something happen, while in 2014 it would be geared more toward soaking up attention.
How can we study the changes over time? I couldn't quickly find any sources online, nor any references in Google Scholar that suggest this line of thinking has been pursued in "the literature" before.
Tabloid newspapers would probably be too hard to track down, either online or in real life. Women's magazines are readily available in university library archives, and the entire run of Vogue has been digitally archived. Mass-market horoscope books have been published, and would be easy to hunt around for.
I might poke around some of the public libraries and thrift stores in my area. If anyone can point me to online sources, though, I can dig through those as well.
Categories:
Media,
Pop culture,
Psychology
June 28, 2014
Today's grandmothers can free children from helicopter parenting
While I'm home for the summer, I'm having many more opportunities to observe the parenting culture, as my 6-year-old nephew is staying here without his parents. Just his grandmother and Uncle Agnostic.
Parenting styles appear to form right around the time a child is born, and remain more or less frozen from then on. Even though helicopter parenting was in full swing during the '90s for small children (the Millennials), the parents of teenagers still let them have their own life, and did not constantly hound them ("touching base") when they left for college. That's because these parents had their kids in the '70s or early '80s, and retained the mindset of that time, which encouraged doing things on your own, without endless supervision.
I'm seeing this again, where my mother is taking care of my nephew more like the way she raised us, and less like today's helicopter parents (including my brother). A major part of that is letting him play by himself, or make friends with other kids his age in the neighborhood — no "play dates," just spontaneous interaction.
How could the other kids' parents allow their own child to participate in such dangerous activities? Well, it turns out the parents aren't home. One boy is the grandson of the woman who lives a few doors up, and another girl from across the street is being babysat by her grandmother while her parents are away on vacation.
Of course — grandmothers! How else could small children be allowed to interact with each other by simply visiting each other's houses and asking if so-and-so wants to come out and play? Today's helicopter parents are too paranoid about the other members of their community, so leave it to those whose parenting style was formed back in the '70s and first half of the '80s.*
Kids have to learn how to treat others, and how to respond to others' treatment of them, away from authority figures mediating their interactions. That's called preparing for real life. Shelter your kids, and they cannot mesh into any normal social setting outside of their nuclear household.
If you would like to do something about your OCD parenting, but think you're too committed to hovering, then just bite the bullet and send the kids away to Camp Grandma for the summer.
Lots of X-ers don't exactly have the warmest relationship with their parents, but you can get over it for the benefit of your kid. They need an environment where they can take hard falls, schedule their own social activities, and face the consequences of their actions. And if you're like most parents today, you cannot bring yourself to give that to them directly.
* This would not have had the same effect when you yourself were a child, since it was your parents who let you enjoy an unsupervised life growing up. Your own grandmother was probably a worrywart, like mine was (and still is). They were bringing up your parents during the Midcentury heyday of Dr. Spock, "smothering mothers," etc., and that stuck with them right up through their grandparenting years in the '70s, '80s, and '90s.
Categories:
Age,
Cocooning,
Generations,
Kinship,
Over-parenting,
Psychology
June 25, 2014
When music videos were shot on film
Checking out the videos on Totally '80s today on VH1 Classic, I was struck by how common it was to shoot on film back then, despite the fact that video technology was not only available but cheaper than film, and already becoming the standard for shooting news and pornography.
Shouldn't music videos have joined in with other lesser media like reporting and porno, and chosen to shoot on video? They could have, but then they wouldn't have that stylized look that film gives.
Video is shot at a higher frame rate (capturing more motion per second), gives more desaturated colors, and has a more restricted dynamic range of brightness levels. It's more photorealistic and ordinary, making it better suited to media where the viewer wants that "you are there" feeling -- such as news reporting and porno.
For music videos, this format was generally chosen when the idea was to put the viewer in the audience of an ordinary, real-world live performance by the band. It's as if a documentary crew went to shoot a small gig that the band was playing that night.
Below are screenshots from the videos for "Any Way You Want It" by Journey, "Start Me Up" by the Rolling Stones, and "I Want Candy" by Bow Wow Wow, all of which were shot on video. No real reason for these particular examples, except that they're fresh in my mind from today, and are all from the early '80s -- to show how early the format was adopted for the ordinary/documentary approach.
Film gives lusher colors, more striking dark-bright contrast, more texture of the medium itself (film grain), and stylized motion by shooting fewer frames per second.
Here are some screenshots from "Papa Don't Preach" by Madonna, "Rhythm of the Night" by DeBarge, and "Love in an Elevator" by Aerosmith. No real reason for these either, except that they're fresh in my mind, and are from the second half of the '80s -- to show how film was still going strong well after it had been abandoned for video in news and porno. It didn't even need to be a narrative video like the one by Madonna. The other two feature a lot of live performance footage, but the setting is supposed to be larger than life and out of the ordinary, requiring a more stylized look.
Now that music videos are so rarely made, let alone watched, and even then are shot on digital, you wonder what effect this will have on the visual expectations of today's young generation. Will they expect the sky to be white rather than blue, will they find black shadows too dark, and will they feel comfortable only with either washed-out or caricatured/campy colors rather than ones that are warm and lush?
After all, it's not as though they have replaced music videos with another medium that has a film-y look and feel. The major new visual medium for them is video games, which they prefer to look more pale, blandly colored, and evenly lit than a news broadcast.
Categories:
Design,
Media,
Music,
Pop culture,
Technology,
Television
June 22, 2014
How can "mercy" killings of pets be justified, when they never attempt suicide?
One of the more galling rationalizations for killing a pet whose near future looks dark is that, in some way, the animal is ready for death, and is perhaps even sending a signal to the owner that it's time to go, so please do the job for me.
Their movement gets stiff, they don't walk or jump around as much or at all, they aren't as playful, they stop eating and drinking — they're just lying around, waiting for you to put them out of their listless misery. It's only humane of you to honor their request and have a doctor come pump barbiturates into their veins to shut off their nervous system.
Wait just a second — if they're so miserable and beyond all hope, why aren't they trying to do the job themselves? Not like it would be hard for most pets. Over a lifetime, they have cultivated a good sense of what things and places are dangerous, so instead of avoiding them, they could go toward them. Y'know, walk out into busy traffic, climb someplace high and go splat on the ground below, pick a fight with a nasty predator or rival... anything, really.
In fact, animals never commit suicide (which is not to say that they don't sometimes behave in a way that results in their own death). Their survival instinct is too strong. I suspect the same is true for human beings, and that suicide is a maladaptive disease of civilization (the exact mechanism being irrelevant for now). I don't recall ever hearing of hunter-gatherers taking their own lives. In any case, we sure don't see it in animals, particularly not pets who are winding down their last days.
Lying still, breathing slowly, and refusing food and water are their way of conserving rather than wasting energy, so that they'll live as long as possible in the final stage of life. Yes, even trying to eat, digest, and excrete food would be a waste if the bodily systems required are performing poorly or winding down.
If they seem sad, it is because they sense the end is near — not sad because you're taking too long to bump them off already. Like any other living creature, they want to go out on their own terms, and not have their lives taken from them by some paternalistic authority. They want to be respected and leave with their dignity intact, not to get snuffed out in a final disrespectful humiliation.
Categories:
Morality,
Pets,
Politics,
Psychology,
Violence
June 21, 2014
Are veterinarians biased against cats?
We cloak professional healers in an aura of sanctity, as though they were guardian angels and miracle workers. But they are fallible human beings with their own set of motivations, in addition to wanting to help patients.
The most disturbing example must be the veterinarians, who turn out to be biased against an entire class of their patients — the cats.
Here are the results of a survey of vets and vet technicians. The majority express no preference for either, but that's just giving them the easy fence-sitting answer. Psychological studies show that people who own both dogs and cats are more like dog people than cat people. Push them in a real-life setting, and they'd more likely come down in favor of dog patients. Of those who do express a preference, the vets are 2 to 1 in favor of their dog patients, and the vet techs are even more biased at 3 to 1.
Here are even more extensive statistics, painting the same basic picture of preferences, but also revealing how those preferences play out in hospitals and clinics.
The two main reasons seem to be that dogs give cues that are easier for humans to read, and that they aren't as feisty. Dogs are more closely adapted to interacting with people, but I still wonder how much of the greater difficulty of "reading" cats is due to the vet not being a cat lover in the first place. To each their own in their private life, but this is like a pediatrician who doesn't care for kids.
Aside from the bias against cats in itself, dog people also tend to be more liberal, in the sense of having less respect for purity and sanctity (click on the "pets" tag below to review earlier posts on this topic). I expect they're more likely to callously favor euthanasia, whereas cat people would set a higher threshold for putting a pet down.
And given the greater influence that the vet has in this decision — not being as easily persuadable as a panic-stricken pet owner — this likely results in more pets (canine and feline) being put down than should be.
Part of the belief in "healers as angels" is that we shouldn't research and choose which hospital and which doctor we should bring our loved one to. Aren't all angels equally angelic? In the case of taking care of cats, though, you ought to check into these things beforehand, to at least make sure you'll be seeing a cat person with the proper respect for life, and not a dog-loving mercy killer.
Categories:
Economics,
Morality,
Pets,
Psychology
June 19, 2014
The generational structure of status contests: Competing over careers vs. lifestyles
Periods of relative economic and political stability are marked by a pervasive code of reining-it-in and making-do, which prevents individual ambitions from overgrazing the commons. The last such period was the Great Compression of the 1920s through the 1970s.
Peter Turchin's analysis of the dynamics of cycles in ideological climate and in material conditions suggests that popular attitudes change first, followed by their aggregate material effects. The Great Compression was preceded, for example, by the Progressive Era. Likewise, the period of rising inequality since circa 1980 was preceded by a popular push away from the ethos of reining-it-in.
In 1976, Tom Wolfe summed up this decisive shift in attitudes by labeling the trailblazers "the Me Generation." Why should I have to rein it in, when I deserve better? I see what I want, I'm going to take it, and everybody else better get out of my way. After all, I deserve better.
These people in the early and middle stages of their working years, who were going to shake up the stability of the older incumbents, were the Silent Generation and the Baby Boomers. And ever since, they have made careerism their preferred mode of status competition. That's the closest to what we mean by "status" — job prestige, income and wealth, and the necessary credentials.
Once they began aging into the middle and later stages of their careers, did they gradually work less and then retire to make way for younger generations — in the way that the older generation did for them, when they were starting out? Hell no. They've dug themselves in like ticks on the political-economic body.
This has saturated careerism as an arena for status competition. Later generations can certainly try to break into that arena and do battle with the incumbents, but their success will be far smaller than back when the incumbents in the economy and government didn't put up much of a fight, since in those days they were glad to retire and give the next generation an opportunity to control things.
What options does this leave for the status strivers among later cohorts such as Generation X and the Millennials? Compete over your leisure pursuits rather than your career.
I've eaten at more trendy food trucks than you guys have. I've heard of more obscure music groups than you guys have. I've unlocked more achievements in some video game than you guys have. And I'm more up-to-date on some epic TV show than you guys are. You jelly?
This does not boil down to an effect of age, as though young people will always have difficulty competing in the career arena and will therefore invest more of their efforts in lifestyle competition. Remember that in the '70s and '80s, the Silents and Boomers faced almost no pushback from the incumbents. The dazzling success of the Me Generation was not necessarily due to some greater talent they had, but perhaps due to the incumbents following a different, not-too-competitive code. It doesn't take much of a soldier to wipe out a bunch of pacifists, now does it?
An economic study by Erica Segall of age, period, and cohort effects on consumption patterns did find a significant cohort effect of being a member of Gen X or later when it came to what portion of your budget goes to consumer spending.
Now, the Me Generation has always indulged in contests of conspicuous consumption, but only to the extent that they honestly signal the competitors' superior job prestige and earning potential or accumulated wealth. If they compete over food, it will be based on the price of admission and dining per se — not steering the vanguard fad of vegan egg creams.
In general, then, their consumerism will be limited to longer-lasting and higher-priced items such as cars, real estate, and trophy wives. With all of their energies focused on dominating their careers, they simply don't have the time and attention left over to compete on possessions. Rather, they'll set aside a huge amount, in payments that can be made monthly and automatically instead of having to be attended to on an hourly basis, and for items that everyone recognizes as status symbols.
The later generations who compete primarily on consumerism don't have much wealth to flaunt, but it doesn't take that much money to enter the lifestyle competition. Let's say your weekly foodie excursion sets you back $50, and that you take two weeks off a year. That's only $2,500 for the whole year (50 weeks at $50 each) — a sum that you'd have to fork over every month to rent in New York City, and even then only for a shoebox far away from all the action.
Competing as a fashionista or as a gadget geek will set you back a little more, but still nowhere near the cost of luxury cars and real estate, or credit cards to keep the trophy wife.
In fact, as long as you're competing more on how trendy rather than how wealthy you are, why not just buy clothes, gadgets, and meals that aren't very expensive at all, provided they run through fashion cycles fast enough? Cheap and static doesn't allow for any kind of competition, but cheap and high-turnover opens up a whole 'nother arena for poor strivers to climb their way to the top of some pyramid, and then another pyramid, and another, and another.
Hence, purchasing the top du jour at Forever 21, the app du jour at the Apple Store, the burger du jour at Wendy's, and to wash it down, the microbrew du jour at Whole Foods.
Then there's consumption's twin, leisure. The early waves of strivers kicked off the higher ed bubble, but it was purely to obtain credentials that would let them shove aside the incumbents in the economy and government, who didn't have an MBA or whose JD was not from Harvard or Yale. Even at the middle level, as long as it served as a launching pad toward a higher-earning career, the Me Generation had no problem going to State U.
For Gen X and the Millennials, however, choosing which college to attend was influenced more by what the choice would tell the world about their lifestyle. Nobody's paying for college out of pocket, so which school they go to reveals little about their wealth. Rather, they're trying to show how much time they spent researching what the different schools are like, which one matched up most closely with their lifestyle, and which would best signal their commitment to competing in that lifestyle.
Not to mention that college is pure leisure these days. Kids do no work and get full credit. Degrees are bought and sold, so as long as your student loans don't bounce, you're in the clear as far as your studies go. That frees up more time for you to compete with the other students over lifestyle pursuits, whether that's being a shopaholic or a video game addict.
The "year abroad" during college is a big deal for the same reason — do you know of a more trendy yet less spoiled location than the other year-abroaders? Ditto for the unpaid internship: nobody is earning money, nor will anybody's gig lead anywhere afterward, so you try to score a more trendy and enviable spot for making yourself busy.
In daily life, the post-Me generations spend a lot of time in coffee shops, foodie joints, and cafeterias at Whole Foods-style supermarkets. They're the minority, though. The cocooning majority hangs out online, where status preening takes place on websites where each competitor is assigned an ID card that shows how many points they've racked up within that domain — likes on Facebook, followers on Twitter or Instagram, gamerscore on Xbox Live, elite posting status on HuffPo / Amazon / IGN / Rotten Tomatoes, and so on. Climbing these ladders doesn't cost much money, so if all you've got is time and effort, you too can achieve internet immortality.
Thus, from the Me Generation to the Stuff White People Like Generation.
There are some interesting distinctions even within the career and lifestyle groups. Silents seem to be driven more by wealth, Boomers by influence and power, though both are careerists. And while both are lifestyle competitors, Gen X wants to be cool, Millennials want to be famous. I attribute these splits to how the cocooning-vs.-outgoing cycle has affected them. Accumulating wealth or amassing followers is less social (you don't interact with fans), while influencing and controlling others is more interactive, and so is membership in a scene that's cool (not lame).
This has been a rough outline, as there's still a lot more to be said about the effects and implications of a generational split in status competition. But I'll save those for further posts, rather than try to cover everything all at once.
Categories:
Age,
Cocooning,
Design,
Economics,
Education,
Food,
Generations,
Media,
Music,
Politics,
Pop culture,
Psychology,
Technology,
Television,
Video Games
June 13, 2014
What killed New Hollywood, what took its place, and why?
Great post by Paleo Retiree, and comment thread, over at Uncouth Reflections, centering on William Friedkin's movie Sorcerer but wandering outward to touch on New Hollywood's rise and fall.
Few topics are in such dire need of revisionist history in art and entertainment as the demise of New Hollywood. I won't try to do all of that in a single post. I will, however, copy and paste the two long-winded but hopefully insightful comments I left over there.
The standard story is that the grimy, hardcore-realistic, no-tidy-endings, fate's-a-bitch movies of the '70s were dethroned by crass commercial "fun for the whole family" movies like Star Wars and E.T. In reality, the '70s-style thrillers were not dethroned, jettisoned, or blasted into outer space. The tradition was extended by taking things in a more science-fiction direction, with a more satisfying catharsis by the end, a la Alien and RoboCop.
Examples from the later evolution of the grimy thriller are not optimistic, uplifting, fun for the whole family, or disposed toward mass merchandising and product tie-ins. ("Hey kids, this week only at McDonald's -- get a free CHESTBURSTER ALIEN toy when you buy a Happy Meal!")
This means the whole "Boo Spielberg" line of the pop history of New Hollywood is missing the big picture, and is just scapegoating the sci-fi chart-toppers like E.T. and Ghostbusters. The path from The Parallax View to RoboCop is more like an evolution of a species than it is a great extinction event.
* * *
I can’t comment on Sorcerer, but you can see why the grimy realism movement of the ’70s only lasted for a moment — it wasn’t stylized enough.
Fortune plays a huge role in real life, and most real stories have no memorable, satisfying resolution. Fictional narratives ought to stylize those details of reality so that the random texture of events is still perceptible without calling attention to itself the whole time. And they ought not to get in the way of the larger structure of beginning, middle, and end, including a climax for the audience to feel catharsis, followed by some resolution to bring their elevated state back down to the mundane level.
Movies where you’re going through a variety of different positions, sometimes easing into it and other times jack-hammering away, have to push you over the top and let you come back down to enjoy that refractory period. Otherwise it feels like your partner is fucking around with you, and wasn’t it so cool how she just up and left without any ultimate climax or resolution to the night’s roll in the hay?
Hey, I like a lot of the grimy, fate's-a-bitch kind of movies, but you have to admit there's a reason why so few of them worked at the time, and why the movement didn't have much momentum to keep it going longer. The range of stories that lend themselves to the approach is severely limited, and they'd more or less run through them all by the mid-'70s. Not every narrative calls for Sisyphus and Sado-Masochism, y'know? Not even a majority of them.
Posing this as a New Hollywood vs. corporate blockbuster thing is going too easy on New Hollywood’s superstar directors. It’s more like, stochastic and fatalistic blockbuster vs. purposeful and cathartic blockbuster, where a hero’s efforts achieve something.
The waning interest in attempted follow-ups to Chinatown owes more to the constraints imposed by human psychology on the viewer’s side, than to economic motives on the creator’s side.
* * *
Pursuing the topic of New Hollywood burning out and what replaced it — it wasn’t Star Wars, Star Trek, E.T., Ghostbusters, Indiana Jones, or any of the other optimistic blockbusters that the whole family could enjoy.
Those movies are so different in tone from Chinatown, Taxi Driver, and The Parallax View that we'd have to conclude there was an abrupt U-turn in public tastes (or artists' inclinations) around that time. And in general the second half of the '70s isn't so different from the first half of the '80s — not one of those U-turn periods, artistically or pop-culturally.
What represented the next step forward from the fate’s-a-bitch blockbusters of the New Hollywood period was the sci-fi thriller, whether dystopian or at least in the tradition of “Oh shit, mankind doesn’t belong in this hostile environment.” Alien, Escape from New York, Blade Runner, Videodrome, The Terminator, Predator, RoboCop, Total Recall.
They portrayed a grimy realistic setting (not an unconvincing emo caricature a la contempo sci-fi thrillers), where the deck is stacked against the puny heroes, and where cruel fate generally has its way with ordinary background characters and featured protagonists alike. Buuuut, where we sense a progression toward an ultimate climax of redemption — and where that is paid off by the end. This allows even the schlockier examples like Predator to satisfy the viewer in a way that the more pretentious examples in the Sisyphean / S&M approach cannot.
And by telling the story within science fiction, they could get out from the “is this story REALLY plausible?” constraint of the hardcore realism of their ’70s forerunners. Sci-fi is just plausible enough.
There was a related movement toward slasher thrillers in horror. Grimy, realistic, unrelenting hostile forces wipe out just about everyone, despite their best efforts and teamwork — but not everyone. There’s at least a Pyrrhic victory for the lone survivor. The supernatural element allows the story-tellers to move outside of the strict boundaries imposed by hardcore realism.
These related inheritors of the ’70s thriller both reached their end in the early ’90s, when Total Recall styled itself as the sci-fi thriller to end all sci-fi thrillers, and Twin Peaks styled itself as the supernatural thriller to end all supernatural thrillers.
Categories:
Art,
Literature,
Movies,
Psychology
June 12, 2014
Are Millennials consciously reversing their sheltered upbringing when they raise kids of their own?
An earlier post looked at the dynamics of parenting styles, where folks who grew up mostly in rising-crime times choose to lock their kids up from the outside world, while these locked-up people themselves, who grew up in falling-crime times, don't see what the harm is in letting their kids lead a more unsupervised life.
We saw this during the last wave of outgoing behavior and communal focus. The Silent Generation, who were locked up by smothering mothers during the cocooning Midcentury, begat the late Boomers and first half of Generation X, who couldn't have enjoyed a less supervised childhood and adolescence. This continued with the second half of Gen X, whose parents were mostly early Boomers — who for their part spent a good deal of childhood in the Dr. Spock climate of the Midcentury.
I was taken aback during an episode of The Real Housewives of Orange County (which I sometimes tune into for sociological insight), in which the daughter of one of the housewives is beginning to raise a family of her own. The housewife, Vicki, is a late Boomer; the daughter, Briana, is an early Millennial ('87).
Briana has decided to move away from Orange County, way out to Oklahoma, where her money will go a lot farther than it could in southern California. There's Steve Sailer's "affordable family formation" unfolding in clear terms.
Then she added that she wanted to raise her kids where they could run around in the driveway out front, and run off to go play with the other neighborhood kids. She revealed that in the 12 years that she lived in her family's house in Orange County, they'd only known two of their neighbors. The area is white and upper-middle class, so don't bother trying to excuse the helicopter parents there on the basis of dangerous ghetto Mexicans. It's just good old paranoia.
It may be only one data-point, but you can tell when someone is speaking more or less as a representative of their group.
I noted before that Millennials feel nostalgia for not having had a life as children. Now that they are starting to have kids, this frustrated attempt at nostalgia has developed into a reflection on how deprived they were of social contact outside the home, from birth until college (by which time it's too late to cram 15 years of social maturation into the time you've got). Every generation remembers the negative side of its upbringing more than the positive, and tries to correct that when raising kids of its own.
Before long, then, we'll see a reversal of the helicopter parenting trend that began about 25 years ago. Probably not for another five years or so, since the late X-ers are still busy raising kids, and lord knows they remember how dangerous childhood used to be. Who would've guessed that Beavis and Butt-head would become such over-protective fathers?
Categories:
Age,
Cocooning,
Generations,
Over-parenting,
Psychology,
Television
June 10, 2014
Helicopter parents escalating hostilities against the community
When will the cocooning mindset open up? The best way to keep your finger on the social pulse is to look at whether parents allow their children to have a life of their own vs. hold them close and lock them up at all hours.
My brother told me that, in the safe middle-class area where he lives, the local parents are starting to cause traffic problems near school bus stops. They've moved beyond walking their kids to the stop and hovering over them, to driving over in the car and parking near the stop until the bus arrives, at which point the kids are allowed to open the door and head over to the bus.
That's right -- drive-in bus stops for school children. I forgot to ask whether the parents let the kids walk from the car to the bus on their own, or whether the parents escort them while hovering.
Was this an isolated example? I googled "parents parked at bus stop" and found several news reports on the first page alone. One article was from the Southeast, where my brother lives. How about a different region, like Noo Joizee? According to this article:
The Jefferson Township Police Department is asking its residents—and particularly parents of school-age children—to help alleviate traffic concerns caused by parents parking vehicles near school bus stops.
Police Capt. Eric F. Wilsusen said that with the rise in parents driving rather than walking their children to bus stops, there has been an increase in the number of vehicles parked near the stops.
"Some bus stops have in excess of 10 vehicles parked at some stops," Wilsusen said in a statement. "The numerous vehicles are causing concerns for traffic and pedestrian safety, particularly at busy intersections."
He added that the police department has seen "numerous complaints" from the community regarding the issue.
And just what kind of post-apocalyptic ghetto hellhole do these poor children live in, where they require such obsessive supervision? "It's not paranoia if they're really out to get you!" The 2010 Census says the town is 91% white and 5% Asian, median family income over $100,000 -- better send your kids to school wearing body armor!
These situations are making it clearer and clearer that helicopter parenting is not just a private choice with private effects, but one that disrupts the lives of others in public spaces and corrodes communal cohesion. In thoughts and in actions, the nuclear family is waging a daily battle against the community. Welcome to "amoral familism" in white middle-class America.
And the parents have nothing to fear, since a cocooning climate favors the tiniest possible social units -- family over community. All they have to do is speak the magic words -- parent, children, family, "as a mom," "as a dad" -- and the debate is over. Get out of the way, community -- there's FAMILIES coming through!
Categories:
Cocooning,
Morality,
Over-parenting
June 9, 2014
Wholesome and lurid themes in pop culture — separated or mixed together
Despite the trend toward increasingly squeaky-clean pop culture, where half of the top 10 movies at the box office for the year are kiddie crap, there's a counter-movement toward ever more lurid trash outside of the respectable mainstream — serial dramas about serial killers on TV, torture porn movies, and gory voyeuristic video games. Nothing is found in between that mixes wholesome and dark themes. There's a bunch of inoffensive kiddie stuff over here, and a pile of lurid filth way over there.
It's not so different from the climate of the Midcentury, where horror comic books, pulp novels, and the sleazier tiers of film noir stood out in stark contrast to the squeaky-clean world of Father Knows Best, Shirley Temple, and "How Much Is That Doggie in the Window?"
In between those periods, pop culture shifted toward a more even spread of wholesome and dark themes. This reached its peak in the '80s and early '90s, when every week one of the mainstream, fit-for-the-whole-family sit-coms ran "a very special episode" about death and grieving, suicide, drug addiction, divorce, teenage pregnancy, teenage runaways, and so on. On the other side of the spectrum, the slasher horror movies portrayed teenagers who were wholesome and basically sympathetic — not brats whose death you'll be cheering along, and not flat cut-outs for puppet-like use in a concern-trolling melodrama like Law & Order: SVU.
Although I haven't seen them, the plot summaries of many hit movies from the Jazz Age sound a lot like "very special episodes" from the '80s — Flaming Youth, Children of Divorce, and so on. Horror classics like Dracula, Frankenstein, Dr. Jekyll and Mr. Hyde, and King Kong show victims who are basically likable and respectable — not faceless crowds a la the "attack of the giant ants" flicks from the Midcentury, or victims who are randomly abducted without having time to establish their basic likability, a la the comic books reviewed in Seduction of the Innocent, or 21st-century torture porn.
The examples from falling-crime times reveal a cocooning mindset — sure, there's this whole other world of sick perverted crap, but as long as we quarantine ourselves from it, everything will be all hunky-dory. "That kind of thing could never happen here." Or, "We'd never allow our children to..." What begins as an impulse for greater security leads to an ignorant and arrogant attitude about how vulnerable their neck of the woods is to dark forces.
In rising-crime times, pop culture reflects the more streetwise and humble attitude that it can happen here, and that parents or adults in general cannot put up a magic force-field around young people, if the dark forces want to get to them bad enough. Being more out-and-about, and the rising-crime climate that follows along with it, is a humbling experience.
Looking into the texture of pop culture thus allows us keener insight into the popular mind when it comes to a trait as important as hubris vs. humility, which we could not tell from grosser measures like, say, church attendance (butts in seats). Nothing wrong with coarse measures to begin with, but it's striking how much you can learn about people from other times and places by what kinds of culture resonate with them.
Categories:
Cocooning,
Crime,
Movies,
Pop culture,
Psychology,
Television,
Video Games
Rising inequality from passing the tax burden onto future generations
An overlooked cause of the rising inequality of the past 35 to 40 years is the unwillingness of earners to pay enough in taxes to cover the current disbursement of government goodies, so that the goodies get paid for with borrowed money instead. They kick the can on down the road for future earners to deal with — only by then, the can will have grown into an oil drum, full of napalm. Heads up, dudes!
This is nothing more than the earlier generation extracting wealth from the later generation. The early one escapes having to pay much in taxes, while the later one must pay down the debt accrued by the early one in addition to funding current goodies. Did the later one have a say when the early one made the decision to pass the buck? Of course not — they weren't even born yet. It's the most shameless kind of "externality."
If the parasitic generation plays its cards right, they can become the beneficiaries of so many goodies once they're no longer current earners. So when the Boomers either retire or stop working a normal load, they'll start collecting Social Security and Medicare. But it won't be the Boomers themselves who are paying for that mountain of prescription drugs. It'll be yet another extraction of wealth from the Gen X and Millennial earners — this time transferring funds in real time, on top of the delayed effect of passing the buck.
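To make the arithmetic concrete, here is a toy two-generation model: a minimal sketch with invented numbers, not an estimate of any real budget. The early generation funds only part of its own goodies, the shortfall compounds as debt, and the later generation has to fund its own goodies plus retire the matured bill.

    # Toy model of the intergenerational transfer described above.
    # All numbers are invented for illustration; only the asymmetry matters.

    GOODIES = 100.0   # government goodies enjoyed by each generation
    RATE = 0.05       # interest rate on the borrowed shortfall
    PERIODS = 30      # years the debt compounds before the bill comes due

    # Generation 1 pays for only 70% of its own goodies and borrows the rest.
    gen1_taxes = 70.0
    deficit = GOODIES - gen1_taxes

    # The unpaid balance compounds until Generation 2 is the one earning.
    debt_at_maturity = deficit * (1 + RATE) ** PERIODS

    # Generation 2 must fund its own goodies AND pay down Generation 1's debt.
    gen2_taxes = GOODIES + debt_at_maturity

    print(f"Gen 1 pays {gen1_taxes:.0f} in taxes for {GOODIES:.0f} in goodies")
    print(f"Gen 2 pays {gen2_taxes:.0f} in taxes for {GOODIES:.0f} in goodies")
    print(f"Net transfer from Gen 2 to Gen 1: {debt_at_maturity:.0f}")

With these made-up numbers, the later generation pays roughly 230 for the same 100 in goodies. That's the "externality" in a single number.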
For this article on sponging Boomers, The Economist included the chart below as a succinct reminder of how dramatically the generations differ in how much they'll be getting out of the tax-and-spend system, compared to what they'll have put in. It cannot be reduced to a chart based on age, since you can bet that, by retirement time, Gen X won't be enjoying the level of goodies that the Boomers have enjoyed since they were young. Millennials can expect even less.
Will this kicking-the-can behavior ever end? Will anybody ever be willing to jump on the grenade to save the future generations of our society?
I've found some data from the General Social Survey which suggest that Gen X and the Millennials are qualitatively different from the Boomers, Silents, and Greatest Gen members, who are for their part remarkably like each other in kicking the can. For the first time in perhaps 100 years, we have a group that hasn't made howling about taxes their main political concern (even more amazing when you consider what today's taxes go to, compared to the '50s or '60s).
As the Scrooge McDucks die off, the majority of the country will be made up of folks who see taxes as something like the collection plate passed around in church, or dues paid to a labor union or other civic organization. Nobody likes to part with their hard-earned cash, but maybe someone else needs some of it more than you do. And maybe there are goods and services too expensive for any one person or household to buy, which require pooling resources across a much wider base.
I realize this sounds like the first day of civics class, but in our status-striving climate, all we normally think about is "what's in it for me?" rather than how our individual choices affect the rest of society.
Soon I'll begin a series of posts with graphs showing the generational divide across a range of political and economic topics. The first will be attitudes toward one's own tax rate (complaining vs. accepting), and the second will be attitudes toward labor unions ("boo" vs. "meh"). Followed by whatever else I find.
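For anyone who wants to run ahead of those posts, here is a minimal sketch of the kind of tabulation involved, assuming a GSS extract saved as a CSV with the survey's COHORT (birth year) and TAX (own federal taxes too high / about right / too low) variables, the standard coding where 1 means "too high," and rough generational cutoffs. The file name is hypothetical.

    # Sketch: share of each generation complaining that their own taxes are
    # too high, from a hypothetical GSS extract. Cutoff years are approximate.
    import pandas as pd

    gss = pd.read_csv("gss_extract.csv")  # hypothetical downloaded extract

    def generation(birth_year):
        # Rough cutoffs matching this blog's usage (Boomers = 1946-'64).
        if birth_year < 1925:
            return "Greatest"
        elif birth_year < 1946:
            return "Silent"
        elif birth_year < 1965:
            return "Boomer"
        elif birth_year < 1985:
            return "Gen X"
        else:
            return "Millennial"

    gss = gss.dropna(subset=["COHORT", "TAX"])
    gss["generation"] = gss["COHORT"].astype(int).map(generation)

    # Assumes the GSS coding where TAX == 1 means "too high".
    complaining = (
        gss.assign(too_high=gss["TAX"] == 1)
           .groupby("generation")["too_high"]
           .mean()
    )
    print(complaining.round(2))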
For now, the key thing to bear in mind is how generationally structured the rising-inequality trend has been — and hence, how it will be when the shit hits the fan before inequality can begin to fall. I wouldn't be surprised if the upcoming civil war (or whatever it turns out to be) has an explicit fault line drawn between generations, given how aware both sides are about the role of, say, Medicare in sending the debt burden off into another galaxy.
Categories:
Economics,
Generations,
Morality,
Politics,
Psychology
June 6, 2014
Can audiences tell how crummy digital movies look? Do they care?
In an article about the replacement of film projectors by digital projectors in movie theaters (about 93% converted as of 2014), the writer quotes David Fincher's cinematographer Jeff Cronenweth about how convincing digital looks these days:
"The proudest comment I get is, 'What did you shoot "Dragon Tattoo" on?' To me, if you can't tell, we're getting much closer. There are certain scenes that it's indistinguishable."
Here is the still that the article included to showcase how film-like today's digitally captured images can look, from The Girl with the Dragon Tattoo:
You'll notice right away that this looks way harsh on the eyes, given how mundane the setting is. Most folks will not be able to put their finger on it, but their senses will pick it up nonetheless. Look at how blindingly white the light is coming through the windows, and reflecting off the table top and the papers or documents there as well. There are actually fabric curtains hanging in front of the center-right window, but they appear whitewashed.
This is a textbook example of digital having less dynamic range in brightness levels than good ol' film. Digital must make a trade-off that film does not -- pick up details in darker areas and wash them out in brighter areas, or vice versa, since the range is not wide enough to pick up both.*
If they made the other decision, the girl's black clothing would look uniformly black, like somebody traced around her outline and hit "fill" inside with MS Paint. It hardly looks any better, though, when they hit "fill" on the windows, table top, and documents with bright white.
And please don't try to rationalize this harsh crappy lighting by arguing that it's artistically justified. It's just an ordinary mundane setting, nothing supernatural or from-another-dimension is about to break through the window and into the apartment. It looks like two people who aren't having as much fun on vacation as they'd planned, and are looking for something to kill the time in the middle of the afternoon.
Perhaps it's a quirky, idiosyncratic, "signature" style? Nope. Check out this shot from an even more famous movie that the director and DP teamed up on, Fight Club:
Daytime setting, couple of people sitting around in a room lit partly from inside and partly from outside. Yet the window is not blown out into a featureless plane of ultra-white. There's enough detail for us to see each blind, and even those thin vertical strings that hold them in place. Table surfaces do not have their texture whitewashed.
But that movie was filmed in 1999, before the studios learned that audiences had lowered their standards so much that you could show them crud and still get the butts in the seats. No way you could've projected the highest-quality Betamax tape onto the big screen and expected a 1980s audience to habituate, worrying less about the quality of the image they're going to be looking at for two hours than about whether their popcorn has too much salt or whether their stadium seat is soft enough.
By now, the look and feel of film is a distant memory for most folks. That's the reason why they'd have any trouble telling if The Girl with the Dragon Tattoo was shot on film or digital.
* To stylize the difference, it's as if film could pick up details in light levels ranging from -5 to +5, but digital could only do -5 to +2, where anything greater than +2 gets treated as "+2". So +2, +3, +4, and +5 would all be rendered at the same brightness level -- namely +2, the upper extreme, i.e. really fucking bright.
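To see the footnote's stylized numbers in action, here is a minimal sketch (toy values on a made-up scale of stops, not a model of any real sensor) of how clamping the top of the range collapses distinct highlight levels, like the curtains, table top, and windows above, into one featureless white:

    # Footnote's toy numbers: film resolves -5..+5, "digital" clips at +2.
    import numpy as np

    scene = np.array([-5.0, -3.0, 0.0, 2.0, 3.0, 4.0, 5.0])  # light levels in the scene

    film = scene.copy()              # film renders each level distinctly
    digital = np.clip(scene, -5, 2)  # digital clamps everything above +2 to +2

    for s, f, d in zip(scene, film, digital):
        note = "  <- blown out" if f != d else ""
        print(f"scene {s:+.0f}: film {f:+.0f}, digital {d:+.0f}{note}")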
Categories:
Movies,
Technology
June 5, 2014
Boomer representation in major political bodies
Baby Boomers (born from 1946 to '64) as a percentage of big-time political office-holders (current figures for the U.S.).
President: 100%
Governors: 78%
House Reps: 75%
Senators: 63%
Supreme Court Justices: 56%
And the majority of the non-Boomers are not X-ers but Silents -- the hidden generation that has tag-teamed with the Boomers in bankrupting and destroying Western economies and polities (a topic for an upcoming post).
By this point, the fix is in -- why bother voting? I've been old enough to vote since the '98 elections, and have cast a ballot exactly once -- for Nader in 2000. (Not terribly different from Buchanan -- populist leftie vs. populist rightie.)
The Millennials are starting to become more numerous than the Boomers in the general population, after the aging process has winnowed out a fair number of Boomers, and as the echo boom has worked its way into the 20-something age group. The baby busters of Gen X tried holding up the fort the best they could while they were vastly outnumbered, so now it's on the Millennials to launch the attack on the Boomers.
True, they're introverted, awkward, and bratty. But awkwardness just means they won't make change in a face-to-face, storm-the-castle kind of way. They could still pull a lever in the cocoon of a voting booth.
And just maybe their brattiness can be harnessed to defund the preposterous and unsustainable entitlements for, e.g., prescription drugs through Medicare -- AKA, Boomers stubbornly continuing to eat junk food like children, coming down with diabetes and metabolic syndrome en masse, and wanting healthy younger people to foot the bill. And all without having to alter their diet as a token of good faith and atonement, instead feeling entitled to binge on carboholic junk until they drop dead.
And all those later-in-life health consequences of debauchery when they were young. Think of how much poor health, especially cancer, is the sum of all sorts of cryptic STDs and lingering side-effects of hard drug use. "Hey man, it was the Seventies, we didn't know any better... now pay our Medicare bills -- or don't young people have a sense of responsibility anymore?"
Gen X likes to rag on how lame the Millennials have turned out, but politics makes strange bedfellows, and as usual the future will prove to be an interesting time indeed.
June 4, 2014
Overuse of close-up shots in movies: not a borrowing from TV
Those who can stand to watch Millennial-era movies more than I can will have noticed that there are way too many close-up shots nowadays. See David Bordwell on the topic here and here, as well as a lengthy thread on the forums at cinematography.com.
Close-up shots allow us to read detail on the face, but obscure body language and posture, as well as the setting. We generally only see one person at a time, so we don't get to see the interaction between characters — only cutting from one to another. Other people are part of the setting, so we also can't see how the characters are placed with respect to one another in space, nor where they're facing (e.g., are two characters looking at each other during an exchange, or is one of them facing / looking away?).
A common complaint is that this overuse of close-ups makes a movie "look like TV," where the norm is one close-up shot alternating with another for each line of dialog, on and on and on.
I don't care for contempo TV shows either, so this sounds a little off. My hunch is that TV didn't use to rely so heavily on close-up framing and shot / reverse shot editing. That would mean the trend has affected both TV and movies, though perhaps even more strongly on the small screen.
Let's investigate, shall we? I'll take it for granted that today's lame-o shows follow the close-up approach, if everyone says so. But I also checked the trailers on YouTube for the first and second seasons of Orphan Black, one of those trendoid "edgy" shows that styles itself as breaking new ground but is jumping on the bandwagon of the look du jour. It too is mostly close-ups, and minimal movement from either the camera or the actors.
What about older TV shows? Sit-coms are out because comedy, even today, is shot from farther back so we can see the characters interacting with each other, and see their reactions to each other in real time rather than interrupting one's expression to catch a glimpse of another's. That leaves action and drama, which have both undergone the shift toward close-ups. Fight scenes today are shot close up and tight around the characters. And drama unfolds in alternating close-ups of two people standing still or sitting down.
Well, I've already written an off-the-cuff post about how engaging the camera work was on Magnum, P.I., so why not continue with that example? It was one of the highest-rated shows in the '80s, its theme song landed on the Billboard charts, and it's one that anyone would nominate as a definitive '80s TV show. So this isn't cherry-picking.
It's also worth studying because it was an even mix of action and adventure with drama and mystery — and not comedy. Does the dramatic dialog unfold in shot / reverse shot while close up? Does the action lock right onto subjects, blocking out the arena that it's taking place in?
I'm certainly not going to review every episode, or even do a close reading of one scene in one episode. That could be cherry-picking. Instead I've put together an array of 20 screenshots that come from the thumbnails of full-length episodes available on YouTube. This places each shot halfway through the episode, not at a spot that I purposely chose to make old TV look better than new movies. I didn't cherry-pick which episodes I included either — the first ones that came up in the search results.
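(For anyone who'd rather not hunt thumbnails by hand, the same just-dropping-in sampling could be automated. Here is a sketch under the assumption that ffmpeg and ffprobe are installed, with hypothetical file names: probe the running time, then grab the single frame at the halfway mark.)

    # Sketch: save the frame at 50% of a video's running time.
    # Assumes ffmpeg/ffprobe on the PATH; file names are hypothetical.
    import subprocess

    def midpoint_frame(video_path, out_path):
        # Ask ffprobe for the duration in seconds.
        duration = float(subprocess.check_output(
            ["ffprobe", "-v", "error",
             "-show_entries", "format=duration",
             "-of", "default=noprint_wrappers=1:nokey=1",
             video_path],
            text=True,
        ))
        # Seek to the halfway point and write exactly one frame.
        subprocess.check_call(
            ["ffmpeg", "-y",
             "-ss", str(duration / 2),
             "-i", video_path,
             "-frames:v", "1",
             out_path]
        )

    midpoint_frame("magnum_s01e01.mp4", "magnum_s01e01_mid.png")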
Here is a just-dropping-in look at a couple dozen episodes of Magnum, P.I. (click to enlarge):
None is shot up so close that only the face is in view. The single shots are from far enough back that we typically get a sense of place from the setting. The somewhat closer-up shots are two-shots where we see characters interacting with each other, using their hands and upper body language. Plus that two-shot of the not-so-friendly dogs. Just think of how much we'd lose if we only had a close-up framing the dogs' heads -- are they even facing toward some character, or are they just the type that barks indiscriminately? Quite a few have more than two characters interacting, and placed at different distances from the camera.
Read that earlier post on the show for a closer reading of how particular sequences tend to play out. The blocking is not just "enter room, walk straight to standing / sitting spot, and start blathering for five minutes." Actors move around the setting, often at different distances (e.g., someone pacing nervously in the foreground, while another leisurely strolls around in the background, with a wall separating them so neither is aware of the other, but with a window through it so that we can see both movements).
Lots of examples too of what Bordwell calls "the cross," where A begins on the left and B on the right, and their paths lead A to the right and B to the left. Simple switching like that makes us attend more to the action, rather than tune out spatially once we know that A has plopped down here, B has plopped down there, and they'll never move around until they need to leave the scene.
While on YouTube, I watched some scenes from an episode of Columbo, and there too there was no heavy use of close-ups. Indeed, the most famous staging of characters on that show has Columbo in a medium or long shot, almost ready to leave the setting when he remembers "just one more thing" that leads him to backtrack into a closer-up shot to ask the suspect another couple of questions. Like Magnum, P.I., that show combined mystery and drama, though not action (or comedy).
That's my take on what's going on in both TV and movies from the past 20-odd years — a shift away from mystery, anticipation, and tension, and toward obviousness and instantaneous reflexive responses.
Before, you could clearly perceive the configuration of the characters relative to one another, relative to the larger setting, and their trajectories (relative to each other and within the setting). You anticipate by projecting a current movement forward. No path of motion, no anticipation. But anticipated outcomes are not actual outcomes — you're held in suspense until the climax or pay-off, where you see whether what you were projecting actually took place.
With super close-up shots, all that stuff that would've been included in the frame of reference has been sealed out, and you can't project where anything is heading. If there's no action, the result is dull. If there is action, the result is disorienting rather than engaging. Either way, close-up after close-up alienates the audience.
This ties into a separate but related shift in editing away from longer shots and toward a rapid pow-pow-pow rhythm. A single fluid shot allows you to anticipate the outcome, be held in suspense for a little bit, and then see if it happened or not. The interrupting back-and-forth rhythm prevents a physiological reaction from building in intensity, or developing into a full emotion or thought. It's like your sex partner switching positions every 1.5 seconds — dammit, just hold still for a little while so we can get it on.
But the ziggy-zaggy rhythm is a topic for another post. (See how annoying these abrupt cuts are?)
Categories:
Movies,
Psychology,
Television
June 3, 2014
Luddite experiments: the rollerball mouse
I've been using an old rollerball style mouse at my desktop computer for the past three or four weeks, and have found it better than the laser style mouses that replaced them over the last 5 to 10 years. It's a three-button scroll wheel model by General Electric (model 97859).
Here are a few thoughts on the differences:
1. The older ones are heavier, from the ball and the mechanical guts. Even the casing feels strongly built, not flimsy. More importantly, the added weight dampens your hand motion, so that the mouse doesn't take off with a typical movement. A lighter mouse almost has you making an unconscious effort to bring the movement to a stop, like a slight step on the brakes.
2. There's more friction between the mouse and the surface underneath, as the ball runs over the mousepad. This adds to the dampening of motion, like the old school steering wheels that gave the driver tighter control (unlike the newer, looser wheels where you turn too far and need to turn back to compensate).
3. The movement is smooth, fluid, and analog, as the ball spins the wheels in contact with it. It's not noticeably smoother than the laser mouses, but not less smooth either, as lazy thinking about "old technology" would predict.
4. There's more feedback, as a result of the above features. You're just more in-touch with what you're doing, whereas the laser model has a weightless quality that removes the feedback that should be going to your sense of touch. Both models give you visual feedback (the pointer moving on the screen), but the rollerball provides a redundant channel of sensory feedback. It's like using a keyboard with buttons rather than a touchscreen "keyboard".
5. The mouse buttons require a bit more force to press down, and there's a more audible click. They're still simple to use, but that extra bit of resistance and noise provides good feedback when clicking on something. The weightless, mushy, quiet mouse buttons actually require more monitoring on your part because they're so easy to press that they accidentally get pressed more often, and there's no "warning" click that goes off when this accident happens. This is an older vs. newer difference, not necessarily a rollerball vs. laser difference.
6. They're larger and fit more comfortably in the (adult male) hand. Another older vs. newer difference. The small and flimsy types require a somewhat tighter grip whenever your hand is on the mouse, which strains the hand muscles or tendons over time. You can't sense this from only moving it around for a second or two, but that added flexing / gripping adds up over a session of computer use. With a larger mouse, it's more like you're just pushing it around, while your hand rests on top.
The closest analogy I can think of is that rollerball mouses are like paintbrushes, pens, pencils, or crayons going over paper, canvas, or some other rough-ish surface, while the laser type is like the stylus for a touchscreen.
So, they're better functionally and ergonomically. The only downside is their higher maintenance, from the dust that is swept up inside the mouse whenever the ball is rolling. From my level of usage, I've had to clean out the mouse once a week, which takes all of a minute or two. For those who don't remember doing this, you remove the ball cover on the bottom, set the ball aside, and use your fingernail or something small and scrape-y to remove the ring of dust stuck around the two wheels inside. I don't need to clear it out with compressed air or anything extreme.
That downside is by far the most common complaint about them, but from the hysteria you'd think they required laborious scrubbing every couple of hours. Over the course of a week, you spend more time and calories brushing your teeth. But, in our throwaway culture we reject anything that requires even minimal maintenance. Your paintbrush has paint on the bristles and needs to be rinsed off? Nah, fuck it, just toss it out and buy a new brush.
You won't like the rollerball mouse if you prefer the weightless, friction-less experience where ultimately we'll just think of where we want the pointer to go, and it'll go there. If you prefer to keep in closer contact with the tools you're using, give them a try. The difference is not major, but enough to be worth it -- especially considering how cheap they are nowadays. I got mine at a thrift store for a dollar.
Bonus feature: they almost all use the PS/2 connector, freeing up your USB ports for things that you will plug and unplug frequently (not a mouse).
Categories:
Psychology,
Technology
Jennifer Lopez wondering if the whole fake gay boyfriend thing was worth it
If you keep up with Blind Gossip, you already know about J.Lo's "boytoy" serving as nothing more than a gay eunuch for an aging woman who's already been through three divorces. In reality, she's getting bonked by her bodyguard. See here and here.
Today the news is coming out in the mainstream media that the queer is involved with some tranny. Try keeping a lid on your fake relationship now. Her publicists will have to work overtime to spin this as one of those things that a modern woman just couldn't see coming, how awful it is for a man to cheat, and so on.
With someone whose time has passed and who has already been divorced three times, you can understand her wish to maintain a semblance of desirability by officially dating someone, while not wanting to get close enough to get burned again. But, grow up and become a spinster.
By refusing to age gracefully, the modern career gal who takes on a gay eunuch faces the inevitable moment when his filthy faggotry will come to light. They aren't exactly known for their discretion in sexual behavior. ("Hey, random dude who I just followed into a public bathroom, can I suck your cock?")
Even if your publicist does successfully spin the story as the poor trusting woman caught unawares by the crafty queer, the public is still going to view you as a pitiful, clueless dupe — not a shrewd modern woman who's still got it goin' on.
I can't wait until this happens with Anne Hathaway's fake gay husband. Couldn't happen to a nicer person...
Categories:
Dudes and dudettes,
Gays,
Pop culture
June 1, 2014
The Catholic church and Boy Scout abuse scandals as part of the '80s revival in the wake of 9/11
The bulk of sexual abuse cases within the Catholic church and the Boy Scouts were part of the broader rising-crime trend of the '60s, '70s, and '80s. Yet it took until 2002 for the Catholic church scandal to break, and 2007 for the Boy Scouts scandal. (The Washington Times did a series on the Scouts in '91, but it did not reverberate through popular awareness.)
Fast-forward to 2014, and we aren't dealing with these topics anymore — not because we've become desensitized, accepted them, and come to expect such behavior. If it hasn't been a hot topic for a while now, we can't be desensitized to it. And those who weren't tuned in to the period when it was a hot topic, like the younger Millennials, ought not to be desensitized, and should be raising a stink.
It's just something we don't want to think about anymore. The Boy Scouts are going to let faggots back into the ranks of troop leaders any day now, and no one is making the obvious jokes about what that just might lead to. We're supposed to trust the experts and authorities within the organization and any governmental supervisors, and go along with it. Thinking about, let alone talking about, y'know... sex, especially when it's perverted, harmful, and offensive, would just be... well, awkward.
What was it about the climate of 2002 to the late 2000s that allowed these topics to surface and be taken at least somewhat seriously among the elite and the general public alike? You didn't see this level of outrage during most of the '90s or during this decade so far.
I think the public's coping response to 9/11 put folks in a more open mood, both the victims who were coming forward and the audience who could've chosen to tune them out. When people sense a rising threat of attack, they naturally band together and support each other more than when they feel like the world is safe enough to go it alone.
So, from about 2002 or 2003 through I'd say 2007 or '08, the social climate suddenly got more open, engaging, and freewheeling. There was also a revival of pop culture from the latter half of the '70s and especially the '80s, as we sensed that those rising-crime times had lessons to teach us in our post-9/11 world. However, after 5 to 10 years with no more 9/11-style attacks, we gradually came to view the original attack as an awful fluke, and resumed the closed-off cocooning trend that began back in the '90s.
Not only was that climate more favorable to discussing an epidemic of sexual abuse in the abstract, then, but it was even more open to these concrete scandals, since they took place primarily in the '70s and '80s. We weren't remembering only the uplifting parts of life in the good old days, but the darker current running under everyday life back then as well.
In 2014, high school kids couldn't find anything uncooler than the American Apparel / '80s aerobics look that their counterparts were sporting in 2006, and bringing up the depravity of homo leaders in a major church or youth organization would just be... y'know, awkward. It almost feels like we're back to the second Clinton administration, only more so, with neo-Hanson and neo-Spice Girls music on the radio, and with sex scandals striking audiences as titillating (at most sordid) rather than disturbing.
Categories:
Cocooning,
Crime,
Dudes and dudettes,
Gays,
Media,
Morality,
Politics,
Pop culture,
Psychology