Returning to the topic of what causes the freshman 15 (see here and here), it breaks my heart to see cute girls' looks go to hell because of their diet. One of my undergrad chick friends started off her freshman year eating at the dining hall (where she tracked me down, and we became friends). She is naturally quite slim, and I know first-hand that she ate lots of animal products and little junk. Then last year she went off the meal plan and started eating what you'd expect a dorm suite full of 19-year-old girls to fix for themselves these days (lots of processed carbs). Plus she started drinking beer.
I could post the Facebook photos to prove it, if I were really cruel, but suffice it to say that she went from very cute to attractive but semi-chubby. Fortunately, during the summer she's been at home being fed a better diet by her mother, and when I first ran into her at the mall, she looked like her old self again. (We'll see what happens this school year...)
Today I got one of those endless Facebook notices that so-and-so, a former tutee of mine, uploaded new photos -- but this time I paid attention, since one thumbnail showed what she and her roommates were fixing for dinner:
There's only a small amount of bread being split among the four or five of them. Granted, there appears to be some foul noodle or rice business going on, but it's far less than the fifty bowls of ramen that many college students eat in an afternoon, let alone how much you'd get at a Chinese take-out. Most of their meal is a dead animal with low-carb vegetables, both smothered in fat (olive oil and maybe something else). Judging from the rest of the album, none of them is the slightest bit chubby, and they all have clear, firm skin and lustrous hair. They all have delightful facial expressions, too, and look as mentally well-balanced as sophomore girls can hope to be. My other friend's suitemates from last year are pretty foul-mouthed and even more foul-tempered; one has a beer belly and another is quite overweight.
They're the lucky ones. You can picture what might happen to your appearance and the vibes you give off if you ate cereal as a main meal and as dessert too. But for those with poor imaginations:
August 31, 2009
August 28, 2009
The banality of tech neologisms
One last thing while we're on the topic of tech company business models. The buzzword "freemium" comes up a lot, and it means giving away part of your service or product for free and charging for an enhanced version of it. Imagine if Facebook let you use the Wall feature for free but charged for everything else (pictures, status updates, gay quizzes, etc.). This is supposed to be a new way of thinking, or else it wouldn't need a neologism and lots of buzz to get it going.
In reality, though, it's what newspapers have done forever. If the online version of the paper let you browse through a couple articles per day for free (maybe just the ones expected to be really popular), while charging if you wanted to read most or all of the articles, we'd call that freemium.
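In code, the metered version of that newspaper model is trivial. Here's a toy sketch in Python (the two-article daily quota is just the "couple articles per day" example from the text; the function and variable names are mine):

from collections import defaultdict

FREE_ARTICLES_PER_DAY = 2        # the "couple articles per day" given for free

reads_today = defaultdict(int)   # user -> articles read so far today

def can_read(user: str, is_subscriber: bool) -> bool:
    """Metered freemium gate: subscribers read everything;
    everyone else gets a small daily quota for free."""
    if is_subscriber:
        return True
    if reads_today[user] < FREE_ARTICLES_PER_DAY:
        reads_today[user] += 1
        return True
    return False

print(can_read("alice", False))  # True  (1st free article)
print(can_read("alice", False))  # True  (2nd)
print(can_read("alice", False))  # False (quota hit -- time to subscribe)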
But that's just what the business model was before the internet, when there were only print copies. Instead of connecting to the web and visiting the paper's site, you walked (or whatever) to a newsstand, bookstore, grocery store, or what have you. You could flip through the paper, read over the headlines, and maybe skim an article or two. But you couldn't just bum around the newsstand for an hour or however long it took to digest the entire thing. You'd look like a jackass. So, the paper basically let you get away with reading a few articles and teaser headlines for free, while charging if you wanted to read the entire thing.
This also applies to the supposedly new idea of charging the high-frequency users, while letting the low-frequency users get the thing for free. It's really the same as the above. Some people only want to skim through a couple of articles, so you let them browse at the bookstore for free. They're still exposed to all those ads, though. The more hardcore users just buy it and read it in depth (and see the ads too).
I think the only reason many people take this ridiculous tech jargon seriously is that it truly does seem fresh and even revolutionary against the background assumption that you can only give stuff away and run ads; otherwise, your schoolmates might stop freeloading at your house and go hang out at some other sucker's place, and you'd feel like a loser. But these ideas are really nothing new and are just a way for a bunch of "gurus" to bilk some cash out of managers who didn't score high enough on the bullshit detection part of the GMAT. It's good that these ideas are being taken seriously again, but give me a break about how revolutionary and internet-era they are.
August 27, 2009
Online newspapers will start charging this fall
This is so fucking sweet. Read my brief write-up at GNXP.
I can just hear the nasal whining from the still-wearing-a-retainer group -- endangering democracy! A reactionary defeat for the internet! (Somehow, I doubt that "saving journalism" will come up.)
These pukes are almost as disgusting as entitled liberals who insist that we get more and more free stuff from the government and yet cry when the state goes broke. Diamond Joe Quimby said it best:
"Are those morons getting dumber or just louder?"
August 26, 2009
More on the second dot-com bust
It may not be fair to call it that since, thankfully, there was no irrational exuberance this time for Web 2.0 -- on Wall Street anyway (among World of Warcraft junkies, on the other hand...). Just recently we saw that Google is still clueless about how to make YouTube profitable -- and the same goes for Facebook and (yak) Twitter -- and before that, that online newspapers that charge are profitable while those that don't aren't.
While the story of the first dot-com boom centered on how great it was going to be now that we were moving brick-and-mortar operations online, the second dot-com boom's credo is to build a network or community where everyone can participate for free. A quick reality check shows that its result is to give every retard a megaphone for free. Doesn't sound like a very profitable business model to me, but then I didn't go to business school during the 1990s. It therefore warms my heart to read more and more news that the wild, wild west days of Web 2.0 are coming to an end. Or better said, that their children's treehouse is finally being blasted apart by the gale of reality.
First, Google's growth rate is declining, meaning that sooner or later they'll have to face up to the truth that "attract eyeballs and then push ads" is not, in general, a sound engine of growth. It worked for their search product and TV, but that'll be it. Just to give them the simple advice I previously gave Facebook and YouTube -- start charging. For example, charge everyone with a Blogger account an annual payment of $1 (or whatever). That's not enough to deter anyone who has one or wants one. I think you could even charge for the search engine -- again, say $1 a year for unlimited searching from what everyone regards as the superior search page. Charge for Google Maps too, although probably just some tiny amount per use rather than a flat fee, since using Google Maps is pretty infrequent and highly variable.
I don't pretend that these are the best imaginable solutions, but taking them seriously would be a huge step in the right direction. If people will pay a buck for a single item on McDonald's dollar menu, then they'll do so for something that is much more crucial to their survival (in their eyes). The only way they wouldn't is if supply were high, but who else is there besides Bing? Even then, once Microsoft and Yahoo saw that Google was making some money from its search engine, despite most people defecting, they'd see the writing on the wall and start charging too. It would be a complete turning of the tables on the history of free -- before, it was one user who told another, "Hey, you know you can download mp3s for free?" "Really? Awesome!" Now, it will be one company telling another: "Hey, you know people will pay for search engine use?" "Really? Awesome!"
In the end, we'd probably end up with the same two companies with more or less the same market share, except they'd have a lot more money by profiting off of us. Each one of us would only be out a measly $1, but that's a boatload more that Microsoft and Google would have for developing new products and funding basic science and math research. If you're too cheap, have fun with AltaVista.
Next, the free-wheeling days of Wikipedia are coming to a close, as barriers will be put up around who can approve edits to articles about living people, to ensure that a higher-quality product is turned out. Oh no, wait -- it'll still mostly be a group of idiots (from the FT):
However, Mr Wales said that the site would set a "very, very low threshold to entry" for anyone who wanted editing privileges. "We're looking at anybody who has been around a very short period of time [on the site] and hasn't been blocked," he added – a number that could top 100,000, based on the number of people who are already frequent editors to the site.
Still, it's a move, however infinitesimal, in the right direction, although the bragging about the "very, very low threshold" shows how far there is left to go in deprogramming the people who run the internet. Again, if they charged $1 per year, they could hire a fleet of editors who aren't pathetic.
Yet, there are still signs that cluelessness abounds in the internet business universe. Try to think of the most moronic idea ever to build a for-profit website around -- is it stupider than "aiming to be a wiki for sports at all levels"? You lose. What will the website do?
The company and its investors are betting that sports fans -- and the players who hung up their cleats and goggles long ago -- will want to review and update Web pages devoted to their thrilling victories and bitter defeats.
"Our long-term goal is to be the definitive source of information on all athletes," said Nirav Tolia, chief executive of Fanbase and a veteran Silicon Valley entrepreneur with a colorful past of his own.
Sounds pretty lame to me, but maybe there's some way to make money here -- except that you would be charging the users who are providing all of the useful updates, rather than paying them for their labor. OK, clearly there's a better way to profit here -- and I'll give you one guess what it is...
It plans to make money by attracting a large audience and then selling advertising, and by letting users create and sell merchandise like customized team T-shirts.
You can't make this stuff up. Building some silly niche website that you hope people will flock to, in hopes of generating endless and ever-increasing ad revenues, is the business equivalent of wearing a flannel shirt with ripped jeans and whining about the patriarchy in hopes of getting a girlfriend. It's not the 1990s anymore, you dumb shits. Those ideas failed then, and they'll continue to fail -- because they're retarded.
And why not rag on YouTube's failure to turn a profit some more. Here's an update on their brilliant new ad strategy -- find those who've made videos with high view counts and ask them to show ads on those specific videos, in addition to having super-popular partners who show ads in general. You can just hear the dolla billz a-rollin' on in. Back in real life, we learn from the WSJ that nearly all user-made online video sucks and won't make anyone any money:
Two years ago, the Internet was aflutter with the potential of Web video. [...]
That exuberance has since dissipated. "It's been a tough year," says Scott Roesch, general manager for Atom.com, a Viacom-owned portal that focuses on material for young males. "A lot of people have realized that Web video is no longer at the stage where if you build it, they will come. You can't just throw $500,000 with a nice Web site and expect that to be a business." Although they'll still be launching more than a dozen new series over the next year, Mr. Roesch says that Atom is scaling back its development budgets and focusing on building online communities.
The few creators who do provide TV-quality content -- and it's saying a lot if you can't even equal TV -- are not really an exception. They're just migrating a TV show from TV to the web. Maybe a direct-to-DVD movie is a good comparison. Clearly in those cases, advertising could well earn a profit, just as it does on TV. But for almost all of online video content -- as it actually exists -- the TV analogy falls apart. YouTube may still be drinking the kool-aid, but at least Atom.com is starting to see the light. I saw some analyst (or someone) on Bloomberg TV talking about how Google might earn money from YouTube, and he pointed out that almost no one is going to sit through TV-style ads just to watch a cat on a skateboard. If he's representative of the business community, YouTube's cat may be skateboarding out of the bag.
Like all failed utopias before it, Web 2.0 will soon have to admit that giving away costly participation in a network or community for free is no route to growth, or even sustenance. Pretty soon, your dirty and lazy hippie housemates will have to chip in for supplies and clean up around the commune, and stop treating it like a vagrants' shelter supported by some charity.
Helicopter parents going broke rather than letting kids play unsupervised
Here's a funny WSJ article on what's happening in the wake of after-school programs having their budgets slashed, "forcing" parents to find other ways to ensure that their kids are being monitored. One woman has decided to fork over $7,800 for a school year's worth of babysitting (at $200 per week), while another pair of women are handing over $6,000 each for a sitter who will watch both their kids during the school year. Others are disrupting their sleep schedules or endangering their jobs to shuttle the kids around themselves, while still others are simply hectoring their poor relatives into playing the babysitter role.
Earth to nutjob parents: get a grip. Unless your kid is a born criminal, he'll be fine alone. The bus will take him home. I'd understand if the kid were 3 or 4, but once they're in elementary school, they can figure enough stuff out around the house that they don't need you there to make sure they don't light the cat on fire. You might worry about what they'll eat, but get real -- you already stuff them full of sugar with all the blueberry muffins, yoghurt, jelly, and fruit juice or soda that you give them. In reality, your 8-year-old will merely be glued to the TV, video game console, or the internet. And if he's discovered his sexuality, he'll be too busy indulging in a good "ain't no one home" jerk to be causing any trouble.
However, if you do have a real hellraiser on your hands, just supervise his behavior from afar the good ol' fashioned way -- threaten to punch his lights out if he fucks shit up when you're not home.
Whatever the case, the sudden lack of after-school free daycare is no cause for carving up your budget even leaner during hard times. Stop hyperventilating and grow up -- it's not worth that much money just to prevent your kid from playing unsupervised for a few hours.
August 25, 2009
News at the data blog
Detailed info on my for-purchase data blog / e-book in progress is on the right sidebar, along with a full table of contents to see what you're getting. Purchases can be made with the PayPal button at the top. Here's what's come out in the past week:
7. The changing social climate of young people from 1870 to present. I quantitatively search through the archives of the Harvard Crimson (the undergrad newspaper) to see how the zeitgeist has changed over time. Young people typically leave very little written record, let alone over such a long stretch of time, so this presents a uniquely fine-grained picture of the social forces they faced. The topics include identity politics (with five topics and a composite index), religion (also five topics and a composite), and generational awareness. There are some things that everyone knew, but there are quite a few surprises, such as when the obsession with racism or sexism peaks. There are large swings up and down over time, supporting a cyclical view of history. I discuss what kinds of processes or models are necessary to explain such patterns.
August 23, 2009
Google still clueless about how to make YouTube profitable
Returning to the theme of online content providers that are vs. are not profitable -- The Financial Times and Wall Street Journal, which charge, compared to the NYT, etc., which don't -- the WSJ recently ran an article about how YouTube has yet to generate a profit. And given the kool-aid that everyone has drunk in Silicon Valley, what is Google trying to do about it? Why, push more ads.
A few quick numbers:
Some Wall Street analysts estimate YouTube's revenue will roughly double to about $500 million this year. [...]
YouTube has mostly been a distraction since Google bought it for $1.7 billion in 2006. The Internet giant hasn't been able to turn a profit from the video site even though YouTube has grown into a massive destination, with 428 million unique monthly visitors in June, according to comScore.
Here's a crazy idea: charge those 428 million monthly visitors just $1 for a year's access, and you nearly double the projected revenue of about $500 million. I know that everyone believes that no one will pay for anything anymore, but that's stupid. As the FT's CEO pointed out, online media providers are too fatalistic and think that charging fees would violate some fundamental law of economics -- they need to grow some balls and start charging. That'll wake everyone up from their delusion that they will have unlimited, quality content provided for free forever.
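A rough sketch of that back-of-envelope math, using the figures quoted above (and naively assuming every unique visitor pays the fee):

visitors = 428_000_000               # unique monthly visitors, June 2009 (comScore, quoted above)
annual_fee = 1.00                    # dollars per year -- the "crazy idea"
projected_ad_revenue = 500_000_000   # analysts' 2009 revenue estimate, quoted above

subscriptions = visitors * annual_fee
total = projected_ad_revenue + subscriptions
print(f"subscriptions: ${subscriptions:,.0f}")   # $428,000,000
print(f"total: ${total:,.0f}")                   # $928,000,000 -- nearly double ads alone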
But won't you drive away a lot of people, and so lower your ad exposure rate, and so lower your ad revenues? Not if anyone involved thinks about it clearly. If someone is too poor or too stingy to send a single dollar YouTube's way for an entire year of access to a website that they're practically glued to, then guess what -- they either have no disposable income, or they do but won't spend it because they expect everything to be free. Either way, they weren't going to buy the product or service that you were advertising in the first place. Nothing is lost by alienating them. As for those who do pay for access, you can be at least somewhat sure that they're willing to buy stuff -- isn't that a better audience to target ads at?
When did we all become so crazy that we thought that advertising to the money-less or cheapskates was the best way to sell your product?
Everyone laughs now in hindsight at what ridiculous business models people thought were rock-solid during the dot-com boom -- I mean, selling dog food over the internet? You're retarded. However, most of this awareness only pertains to the types of products that would be sold online, not to the entire business model. Just look at Facebook and YouTube. They seem like reasonable things to offer over the internet, but their owners are too cowardly to implement a business model that would score them a profit -- and you thought nothing could override the profit motive in big business.
Are the teenagers and college students who spend 95% of their free time on Facebook or YouTube paying nothing for the texts they send on their cell phones? (Just ask their parents.) They would die if Facebook went away because it keeps them connected to their social circle just like cell phone access does. Charging them will not drive them away. Again, it could be a pitiful amount since they have about 200 million users -- a $5 annual membership generates $1 billion. Make that $50 a year, or just over $4 per month, and that turns into $10 billion. What are they so afraid of? "Alienating users" cannot be an answer since, to reiterate, such people are too cheap or broke to buy what the ads are selling to begin with, and are just freeloading. They are also boosting the overall network size, but only their close friends and family would care about them being in the Facebook network -- the rest of the world could give a shit whether they're in or out. And because birds of a feather flock together, the cheapskates' peers are also unlikely to send much money to Facebook's advertisers.
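The membership arithmetic from that paragraph, spelled out (a minimal sketch using the 200 million user figure cited above, and generously assuming everyone pays):

users = 200_000_000   # approximate Facebook user base cited above

for annual_fee in (5, 50):   # the two fee levels floated in the text
    revenue = users * annual_fee
    print(f"${annual_fee}/year (${annual_fee / 12:.2f}/month) -> ${revenue / 1e9:.0f} billion/year")
# $5/year ($0.42/month)  -> $1 billion/year
# $50/year ($4.17/month) -> $10 billion/year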
Stan Liebowitz wrote a great post-mortem of the dot-com boom and crash, Re-thinking the Network Economy (featured in the Amazon box above), and he devotes an entire chapter to whether, and in which cases, advertising will generate enough revenue. Just noting that network TV earns money with ads means nothing. Most internet ads don't block everything else out for 5 minutes every 15 minutes anyway. They're easily ignored, and you can typically close them out with a single click when they've scarcely begun. Liebowitz points out that for quality media content, some combination of ads plus subscription fees will be the way to go -- just as it was before.
The "ads will solve everything" notion that still spellbinds internet companies shows that all of the lessons of the dot-com crash have not been learned, no matter how simple they are to figure out. If YouTube truly wanted to imitate the TV model, they would allow free access to the videos that no one wants to see plus a tiny handful of quality videos, while subjecting them to five-minute screen-hijacking ads every 15 minutes -- a la network TV. It would then charge subscription fees to see the better stuff, as judged by view counts, ratings, blogger buzz, or whatever. It couldn't charge too much because, after all, it's just YouTube. These subscribers would see some ads, but nothing too annoying. Finally, there would be a counterpart to HBO and The Sopranos, which would require higher fees but have much better content and no ads.
Many would surely defect to some other YouTube clone site, but here's a newsflash: those sites aren't generating profit either. Ones that offer a small number of videos and don't have much bandwidth being burned up might be able to coast by without being profitable. But transplanting the bulk of YouTube to another site just postpones the inevitable. This new site would face the exact same problems with its business model as the old one. In the end, internet companies will have to deprogram themselves from the dogma of Free, and internet users will have to realize that they aren't entitled to free content.
August 22, 2009
Video games' evolutionary aesthetics
Returning to The Art Instinct and the results of Komar and Melamid's survey of what people like to see in paintings (still featured in the Amazon box above), the populist winner was a landscape mostly in blue and green, with some white, that featured a body of water and perhaps a few people and domestic animals. And this was true for just about every country that Komar and Melamid looked at. Here is the landscape based on the Kenyan survey.
The evolutionary story is that we prefer seeing such things because they would have been signals of good times during the long African savanna stage of our species' existence. These hardwired preferences would lead us to seek out places that boded well and turn away from those that portended ruin.
Dutton also mentions a study by Karl Grammer that shows how our environmental aesthetic preferences change with age: in children, they are strikingly similar and in tune with the above green and blue landscape, although as people age they tend to also prefer features of the landscapes that they've been exposed to while growing up. Based on this idea, we expect that products geared toward children that involve a landscape will have the Komar and Melamid features, while products geared toward full adults may have such features but will also include a wider range of environmental types.
One key product to test this prediction is video games -- what other incredibly popular cultural product is geared toward kids and typically shows landscapes? I'll focus mostly on the Nintendo and Super Nintendo era (roughly 1986 to 1995), before a large fraction of video game players were full adults. Two recent systems -- Nintendo's Wii and DS -- are also fairly child-friendly.
While the superstar video games often include a level or two located in a desert, cave, or snow-covered mountain, the most common setting has a blue sky, sparkling water, verdant low-cut grass, some boulders or cliffs, and changes in elevation rather than a flat expanse of earth. The lighting is always as it would be during a summer afternoon. Moreover, even in video games that offer a variety of landscapes, the one that appears in the first level -- that is, in the beginning, when the creators try to get you hooked on the game -- is much more likely to be the blue and green, water and grass type. They wait until after you've already gotten into the game to depress you with darkened, snowy areas (Contra places this landscape in the fifth stage), or make you feel claustrophobic inside of a dim cave (even such darker-themed games as Castlevania 2 have you roam brightly lit towns and blue and green environments before coming to caverns or foreboding castles).
This isn't a quantitative analysis, but anyone who played video games during this period knows what I'm talking about. But just to provide a few examples of what the first levels of some classic video games look like, here are some YouTube clips. You may have to skip through the first 30 to 60 seconds to get to the actual gameplay.
Super Mario 2
Legend of Zelda
Adventure Island
Super Ghouls N Ghosts (starting at 5:00)
Secret of Mana
Sonic the Hedgehog 2
Willow arcade game (turn off the geek's voice and just enjoy the feast for the eyes)
One game from that period that might have become even more popular, had it featured more blue and green types, is Bonk's Adventure for the TurboGrafx-16. There are only a couple of levels with water, and they're later on. They should've put some waterfalls in the background of the first stage, and it would've sold better.
Whenever I flip through Game Informer, I can immediately tell which games are for a Nintendo system because they're full of color and light. Nothing wrong with a limited color range or dark lighting, but in video games this just comes off as a third-rate B horror movie. Here are three contemporary examples of the blue and green, grass and water type:
Super Mario Galaxy 2
New Super Mario Bros. Wii
Kirby Squeak Squad
Note that in none of these games does the plot involve our evolutionary past, cave men, etc. So the fact that such a broad range of best-selling games all converge on their preferred landscape type is yet more evidence of its universal and hardwired appeal. (Of course, games that do have cave men characters also have this type of landscape, as with Joe and Mac 2.)
The games oriented toward older video game players -- the average player is now 35 -- rarely use lush colors or estival lighting, and almost never take place in the green outdoors. They're in some dungeon, dark alleyway, abandoned space lab, war-blown urban rubble heap, or whatever. That kind of environment is tougher to make visually moving. Typically, it ends up as the video game equivalent of a death metal album, and it caters to the same audience -- fat 30-something male losers. Drop by your local GameStop sometime and see for yourself.
And this isn't just due to the darker plot that such games have. As I mentioned before, the Castlevania games for the Nintendo and Super Nintendo all have, in addition to dark and claustrophobic areas, those that are outdoors with green grass and falling water. Even beat 'em up games whose theme is urban crime and decay often have an evolutionarily friendly stage (for example, the woods stage in Double Dragon 2 or the beach stage in Streets of Rage 2). So did Guerrilla War, Metal Slug, and Contra (whose entire third stage is a waterfall), all popular body count run 'n' gun games.
Back when video games were aimed at normal people, and even today with systems for people who have a life, the environments were designed (consciously or not) to appeal to our innate and common preference for landscapes that are mostly blue and green, and somewhat white, always including some body of water. We didn't need depressing visuals because the unforgiving difficulty of the games gave us enough of a headache already. They may not have been at the level of a Hudson River School painting, but at least they did the job well enough for what is basically a toy. The darker current style also falls far short of its high art counterparts, but it doesn't really move you even a little -- it's just one more thing that makes them so boring.
August 21, 2009
Seattle -- not quite as gay as you thought
From a 2006 NPD press release about which cities most, versus least, desire more healthy items on restaurant menus, we see that Seattle scored among the least, along with a bunch of cities that the SWPL crowd would never live in (although they might visit just to have a story to tell about their brush with gritty street life).
Of course, "healthy" items means high-carb garbage with no intrinsic taste, and so therefore has ten pounds of sugar dumped on it -- blueberry muffins, pumpkin spice bagels, etc. Or a bunch of empty carbs with fat heaped on top to provide some actual taste. Unfortunately that won't stop the pasta or rice from spiking your blood sugar and raising your insulin levels, so you'd be healthier to just heap the fat on some more fat, along with protein.
Most restaurants don't do that unless they charge an arm and a leg -- after all, a high-carb meal is for starving peasants -- except for those Brazilian grills that are popping up all over. Perhaps because they're from a third-world country, they know that you don't fuck around when people plunk down $15 - $30 for dinner. Poor people get enough bread, rice, and sugar at home because it's cheap. When they go out, they want invigorating food like sirloin, chorizo, and quail eggs.
Of course, "healthy" items means high-carb garbage with no intrinsic taste, and so therefore has ten pounds of sugar dumped on it -- blueberry muffins, pumpkin spice bagels, etc. Or a bunch of empty carbs with fat heaped on top to provide some actual taste. Unfortunately that won't stop the pasta or rice from spiking your blood sugar and raising your insulin levels, so you'd be healthier to just heap the fat on some more fat, along with protein.
Most restaurants don't do that unless they charge an arm and a leg -- after all, a high-carb meal is for starving peasants -- except for those Brazilian grills that are popping up all over. Perhaps being from a third-world country, they know that you don't fuck around when people plunk down $15 - $30 for dinner. Poor people get enough bread, rice, and sugar at home because it's cheap. When they go out, they want envigorating food like sirloin, chorizo, and quail eggs.
August 19, 2009
"The Paper That Doesn't Want to Be Free"
So reads the headline of an NYT article about the success of The Financial Times, which charges for access to its website, as does its main competitor, The Wall Street Journal. Meanwhile, the rest of the big newspapers are going down the toilet financially. Does this merely reflect the fact that rich businessmen read the two financial papers, while the NYT or LA Times serve broader and less wealthy audiences? The FT's CEO says that's nonsense:
"I sometimes think there's too much fatalism around -- people throwing up their hands and saying it's not possible for general publishers to charge," Mr. Ridding said. "I think it is possible, and necessary, for them to charge."
If the audience statistics provided by Quantcast are at all accurate, then the NYT's audience is hardly any different from that of the FT or the WSJ. Politically, I'm sure they differ, but not by age, race, having kids, household income, or education level. The only real difference is that the NYT's online readers are slightly more male than female, while the financial papers' readers are much more so. One-third of NYT readers belong to households that make over $100,000 a year -- if they managed to convert just the richer readers into paying customers, that alone would give them a larger paying audience than at the FT (the NYT's total online visitor count is 10.7 million, while the FT's is 2.7 million).
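To spell that last comparison out (visitor figures from Quantcast as quoted above; treating the richer third as the pool of potential payers is my simplifying assumption):

nyt_online_audience = 10_700_000   # NYT unique visitors (Quantcast, per the text)
ft_online_audience = 2_700_000     # FT unique visitors

richer_third = nyt_online_audience / 3   # readers in $100k+ households
print(f"NYT's richer third: {richer_third:,.0f}")        # ~3,566,667
print(f"FT's entire audience: {ft_online_audience:,}")   # 2,700,000
# Converting only the richer third already beats the FT's whole online audience.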
I've never heard anyone complain about the quality of the FT or WSJ tumbling downhill, which doesn't mean no one does, only that it's rare enough that I haven't heard it. But it's trivial to recall people lamenting how much worse the NYT has been getting. Yet, if their sales strategy is one seemingly geared to 17-year-old nose-bleeding, retainer-wearing brats -- who feel entitled to freely download as many songs, movies, and video games as they please -- is it any wonder that the paper's quality should suffer? What kind of fucked up world is it where he who does not pay the piper calls the tune?
August 18, 2009
New at the data blog
Detailed info on my for-purchase data blog / e-book in progress is on the right sidebar, along with a full table of contents to see what you're getting. Purchases can be made with the PayPal button at the top. Here's what's come out in the past week:
Brief: Do Asians consume boatloads of carbohydrates? In order to see whether Asians consume lots of rice, or carbs in general, as many believe, I look at USDA international data on grain consumption per capita for India, Indonesia, South Africa, Iran, Japan, China, South Korea, Russia, Brazil, Mexico, Egypt, Australia, Hungary, Canada, and the U.S. I break down each country's consumption by grain in two tables, and graph total grain consumption per capita for an easy comparison. Grains studied include barley, corn, oats, rice, rye, sorghum, and wheat.
6. The rate of invention from 0 to 2008 A.D. I've found a book with 1001 world-changing inventions, and I've transcribed the dates and plotted the number of inventions over time, by century, half-century, decade, year, and a 10-year moving average of the yearly data. I've written before about the slowing pace of innovation since Bell Labs and the DoD were broken up in the mid-1980s, using a dataset of 100 modern inventions, so this allows for an independent test of that claim. (And the new post obviously gives a clearer picture since there are 10 times as many data points.) It also puts recent trends in larger historical perspective. I discuss some plausible genetic and institutional causes for the rise of invention. There is not only a trend that stretches across centuries, but an apparent cycle on the order of human generations.
August 16, 2009
What age should the crowd be if you want to pick up young girls?
Something struck me tonight, although I could have figured it out earlier had I given it serious thought. Assume that there are girls who are significantly younger than you and who would be cool with -- well, whatever, having a crush on you, giving you their phone number or asking for yours, grinding on you in a dance club, asking you out, or going into the lordosis reflex so that you'll come over and put it where it belongs. The large age gap could be a potential obstacle, so how can you dodge it simply by choosing a different environment in which to pursue them?
To get straight to it, you want everyone else to be roughly the same age -- as young as the girls you're interested in -- while only a few people, if any, are noticeably older. Preferably you're the only one. First, the observations that led me to this conclusion, and then the logic behind it.
I've had extensive contact with young girls in environments where everyone is roughly their age, and only a few are my age. These include the local teen dance club (where almost everyone is 15 to 18), '80s night at another club (which is mostly college kids), and the tutoring center I worked at before starting school (which was mostly high schoolers, with some younger kids too). In these contexts, all girls aged 14 to 21 had zero problem just crushing on me, most made some attempt to get close to me, and more than a few blatantly made a move on me.
Then there are places where there is a very heterogeneous age range, where they're just about the youngest, I'm more near the average, and the upper bound is middle-aged. These include any mall, Starbucks, and the club I sometimes go to on Saturdays -- it's the same one that hosts '80s night, but on Saturday the crowd is pretty diverse age-wise. Here, young girls may make a nervous move in my direction, but it only rarely goes farther than that. Something halts them halfway.
Finally, there are those spots where most people are much older than they are. Now, they are the outliers. I rarely visit these places, but they include a nauseating club where the average age is near 30, as well as Whole Foods. No young girl has made the slightest pass at me in these places, although plenty of cougar women have (shudder).
So, what's the pattern? The pattern is that girls feel most emboldened when everyone else is the same age as they are, while they feel terrified when everyone else is very different from them, and they're pretty anxious but not cripplingly so when there's a broad mix of ages. That's true whether the females are young or old.
Why is that? Simple: being comfortable is required for girls to let loose, and to be comfortable, girls need to feel like they fit in. The easiest way to utterly disorient a girl is to tell her, in a half-pitying tone, that she doesn't seem to fit in here.
wait omigod WHAT????!! i meeeeeeean, is it my clothes? or my shoes? oh wait, did i say something -- omigod, WHAT DID I SAY?!!! tell me!
Unlike guys, girls need for the entire atmosphere to make them feel at ease. Not like they're at a spa or anything, but it cannot put them off -- this will instantly kill any chance you had.
It's an in-group vs. out-group problem, where age is the ethnic marker. Imagine being the only Black girl in a room full of Whites, or vice versa. Or imagine being the only girl in a room full of men -- yeah, you won't be staying long. The most intuitive way to think about it is as a split between languages. (If you're 18 and he's 38, you might as well speak different languages.) If you belong to the majority linguistic group and you spot someone who you sense speaks a different language but is appealing enough to pursue, you'll pursue him. If you're in one of those hybrid dialect zones, you'll awkwardly approach him halfway linguistically and see if he returns the effort. But if everyone else speaks a language that you don't, you will feel paralyzed -- even if he really appeals to you. The in-group usually doesn't treat an out-group member too well, and you might make a complete fool of yourself by going up to him.
This leads to the perhaps counter-intuitive advice that you should approach young girls in places where you stick out as the older person. But if we think about how the age distribution of a crowd affects her comfort level, it all makes sense. First, if she is noticeably younger than you, then you are noticeably older than she is, and no change in the age structure will mask that. So, what really matters is what will make her most comfortable in crossing that ethnic dividing line between the two of you.
If everyone else is older than she is, you won't stick out as being the older guy in the club -- but in this case, she'll feel frightened to cross the line, just based on how the in group tends to treat the out group. If you're the outlier, then she won't feel endangered in crossing the line -- she feels safe because the age range is her home turf. In a mixed area, it's in between these two extremes. Like a Spaniard who spies an Italian, she'll make an anxious effort to get some of her point across, just to see if he'll do the same. She doesn't feel as comfortable as on her native soil, but she doesn't feel like someone's dropped her in an alien land.
This probably explains why it's so easy for young girls to crush on their teachers or tutors. In most other cases where they're interacting with somewhat older, higher-status males, the age range is either pretty mixed (as in most business environments), or is predominantly older people (say, if she were a secretary who only worked with the higher-ups). In a mixed office, you might float the idea with lots of plausible deniability, and see if he bites. If you're a college intern whose job is to record the minutes of meetings with the bigwigs, it doesn't matter how high your class background is, or if your parents are as powerful as these guys -- you will feel so out of place age-wise that you'd feel too frozen to approach one of them.
But if he's the only older person in the environment, while everyone else is a young student like you -- well, suddenly you feel a lot more comfortable fantasizing about him when you pull the covers over you at night, or drawing his initials next to yours inside a heart while you're riding the school bus home. It feels like he would be crossing the bridge to join your group, rather than you having to go over and assimilate to his strange new group. Remember how necessary it is for girls to feel like the fitting in part of life is already covered, nothing to worry about.
Of course, if no younger girls would be into you, following this advice will go nowhere -- in fact, it will probably repulse them even more if you approached them on their turf rather than once they had wandered onto your turf. But if you know from experience that there are such girls, make sure to pursue them where you're the only older person. You might feel self-conscious at first, given that you'll stick out, but it'll be easier than if you're at a club (or whatever) with mostly older people and there happen to be a few sorority girls visiting. That's death. Plus, other things will weigh more on your mind than self-consciousness when you're wading through a pool of younger girls.
August 15, 2009
Another reason the Wii sells so well -- it breaks least often
The primary driver of console sales is how many great games there are to play on the system. So, given the long shelf lives of Wii games compared to the flash-in-the-pan popularity of PlayStation 3 games, it's no surprise that the Wii is dominating the PS3 in sales.
But there are also aspects of the system itself that users pay attention to -- like, does it work? The most recent issue of Game Informer has a few charts that show how reliable each of the three current consoles is. They surveyed nearly 5000 people (I assume mostly through their website), and though this may introduce error into the absolute estimates (e.g., is the true percent of faulty PS3s 10% or 15%?), it shouldn't mess up the relative picture -- that is, which system screws up more often than the others. Here are their results:
1. % whose consoles have failed: 54.2 for Xbox 360, 10.6 for PS3, 6.8 for Wii
2. % whose consoles failed again after first fix-up: 41.2 for Xbox 360, 14.7 for PS3, 11 for Wii
3. % whose friends have had these consoles fail: 69.9 for Xbox 360, 12.4 for PS3, 6 for Wii
4. % who said customer service was "very helpful": 37.7 for Microsoft, 51.1 for Sony, 56.1 for Nintendo
The writer offers a lame potential excuse that Wii owners may play them a lot less, but the statistics provided don't establish this. They say that about 40% of Wii owners play it for less than an hour a day, and that about 40% of Xbox 360 and PS3 owners play them for 3 - 5 hours a day. But this isn't the full distribution, and it doesn't say what the average owner does -- for all we know, 40% of Wii owners play it for less than an hour, while 60% play it for over 5 hours, and 40% of 360 and PS3 owners play them for 3 - 5 hours, while 60% play them for less than an hour. If stereotypes are true, most of the people for whom video games are a second full-time job are playing the 360 and PS3 -- but that doesn't mean the average owner of those consoles is so obsessed. (This mixes up "most X are Y" with "most Y are X".)
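To see how little those two figures pin down, here's a toy calculation with invented numbers filling in the unreported bins -- both 40% figures still hold, yet the Wii owners come out playing more:

```python
# Made-up completions of the two distributions. Only the 40% bins below
# were actually reported; the 60% bins are invented to show the ambiguity.
# Bin midpoints (hours/day): "<1hr" ~ 0.5, "3-5hr" ~ 4.0, ">5hr" ~ 6.0
wii_mean = 0.40 * 0.5 + 0.60 * 6.0   # 40% play <1 hr, rest play >5 hrs
x360_mean = 0.40 * 4.0 + 0.60 * 0.5  # 40% play 3-5 hrs, rest play <1 hr

print(f"Wii average: {wii_mean:.1f} hrs/day")   # 3.8
print(f"360 average: {x360_mean:.1f} hrs/day")  # 1.9
# Same reported stats, opposite conclusion about who plays more.
```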
Still, even if it were true that Wii owners play it much less often, that can't account for the pattern because it's basically a 360 vs. not-360 distinction. Maybe less time playing accounts for the small difference between the Wii and PS3 failure rates, but not for the main pattern. Plus, the idea that playing a system more leads to greater breakdowns sounds like bullshit -- it's mostly due to how well the system is designed. The Atari, top-loader Nintendo, Super Nintendo, Sega Genesis, TurboGrafx-16 -- they all still work, even though they've been put under stress for decades. Meanwhile, the Xbox 360s that have failed -- a non-trivial fraction -- are at most 4 years old.
So, in addition to having more games with long-lasting appeal, the Wii hardware is least likely to break, and if something does go wrong, their customer service is the most helpful. Clearly the hardware and customer service aspects are not primary, since that would predict the PS3 vastly outselling the Xbox 360 and being neck-and-neck with the Wii. Still, these matters of secondary importance shouldn't be forgotten when we try to account for why the Wii is selling so well. The answer is pretty boring to normal people -- it's better in all key regards -- but is upsetting to most of the 20 and 30-something males who spend their lives shackled to their PS3.
August 11, 2009
Why does Starbucks perform so well?
Because what they offer is judged superior to the alternatives by consumers -- duh.
But for anyone who was an adolescent or older during the '90s, this may come as a surprise. After what Generation X re-labeled the decade of excess and corporate greed (which was really only a few years in the middle of the '80s), the anti-yuppie and anti-corporate backlash gained a lot of steam during the decade of grunge and postmodernism. The worldview that we inherited was that super-huge companies like Microsoft or McDonald's or Starbucks only got so big because -- well, they couldn't tell you how the ball got rolling if they were so crappy, but once they did get that big, they were going to stay that way through inertia -- regardless of the quality of their product. There were network effects, or lazy sheeple, or people wanting to buy things for reasons other than how well the product did its job (e.g., to look cool).
We can easily dismiss all claims that patrons of Microsoft, McDonald's, or Starbucks go there to look cool -- these companies have been the objects of ridicule for a decade or more by the culture snobs, and in Starbucks' case, most lower-middle and lower class people too. For example, as far back as 1998, it was cliche to whine about a Starbucks popping up everywhere: that was the year that the Simpsons ran a gag about every store in the mall being converted into a Starbucks during a single trip, and it was the year that The Onion ran a gag about a new Starbucks opening within the restroom of an existing Starbucks. Eleven years of derision later, there is even less of a chance to score hip points by buying from Starbucks.
More plausible is the claim that people aren't paying just for the product but the entire experience -- yeah, kinda. But get real. People will put up with the Soup Nazi if his product is better than the other guys' stuff.
So why don't we get the answer straight from the horse's mouth -- what does the consuming public think about Starbucks, compared to their competitors? First, we can ignore all those taste tests done by a handful of self-described "caffeine junkies" from the office of a New York media company. Most importantly, they don't represent the broader group that Starbucks, Dunkin' Donuts, etc., compete over. Plus they're small in number, and without any expert qualifications. Perhaps if they could prove they had superior noses and tongues, like renowned wine tasters, then we could forgive the small sample size -- after all, experts with superhuman senses will always be rare. But just being a coffee junkie doesn't mean anything -- imagine if Slate ran a taste test of wines "as judged by five alcoholics from around the office." Who would care?
I've managed to find three surveys or tests that had large numbers of the broad target audience, two of which seem pretty free of influence from the companies judged, and one that was commissioned by one of the tested companies. I'm interested in finding similar data that go back a while, to see if the rise of Starbucks tracked an increasingly favorable perception of its coffee. That's the assumption unless we get good evidence to the contrary. Note that the relevant product is the coffee people actually buy when they go to Starbucks, the majority of which is espresso-based specialty coffee, not drip coffee. But that will take lots of work, so if I do that, it will definitely go up at the pay-for-data blog.
One taste test between Starbucks and Dunkin' Donuts showed a preference for the latter, although it was packaged coffee brewed at home rather than the stuff brewed in the store itself, and it was black drip coffee, not an espresso specialty coffee. Recall that most people go out for coffee only if it's specialty -- hence all the jokes about how Starbucks customers are frou-frou people who aren't interested in a regular cup of joe, but instead crave their caramel macchiatos and vanilla mocha lattes. Read the press release here. The sample size was under 500. I don't know exactly what role Dunkin' Donuts played in this survey, but it's clear they commissioned it or something, as only these two brands were tested -- any neutral test would certainly have included at least McDonald's, and probably 7-11, Burger King, Caribou Coffee, Peet's Coffee & Tea, etc. Plus they had a separate website planned for the results.
The other two have larger sample sizes and were administered by third parties not obviously influenced by any of the companies tested -- USA Today and Zagat. When a critic compared various quick service coffee brands, USA Today offered a poll for readers to say which coffee they like best. See here for the results. When I checked it, there were 9142 votes, with 44% for Starbucks, 33% for Dunkin' Donuts, 13% for McDonald's, 7% for 7-11, and only 3% for Burger King. The coffees all appear to be non-espresso, and Starbucks still handily won. So much for Burger King's pathetic attempt to bring premium coffee to construction workers -- they would have had better luck selling classical music CDs to truck drivers.
Just as important are the fast food ratings that Zagat released a few months ago. See here and click on the "fast food" tab. The sample size here was over 6000. Unfortunately there are no percentages for how many preferred each brand, just rankings. At any rate, in the Best Coffee category, the ranking is Starbucks, Dunkin' Donuts, Peet's Coffee & Tea, McDonald's, and Caribou Coffee. Three of these were also ranked in the independently administered USA Today survey, and the ranking is the same -- Starbucks, followed by Dunkin' Donuts, and then McDonald's. I assume that Zagat also rated 7-11 and Burger King for coffee, but that they scored far lower than was needed to break into the top 5, so the full ranking probably agrees with the USA Today survey.
Moreover, the Zagat survey rated the facilities and the service of Starbucks and its competitors in what they call the Quick-Refreshment Chains -- "National counter-service chains where coffee, ice cream, frozen yogurt or smoothies are the featured offering." Dunkin' Donuts did not place in the top 5 for either category, while Starbucks scored 4th in Facilities and 5th in Service, behind other specialty / premium coffee chains. Though not in the same Quick-Refreshment group as the other two big coffee contenders, McDonald's placed 2nd in Facilities and 4th in Service in the Mega Chains group.
In sum, not only does the coffee-consuming public prefer Starbucks' coffee to that of its competitors, but it's a pretty nice place to hang out and the service is good (most baristas are amiable young girls). Obviously taste matters more -- otherwise McDonald's would be neck-and-neck with Starbucks, and Dunkin' Donuts, whose facilities and service apparently aren't so hot, would be the clear loser. But it's doing better than McDonald's, although not as well as Starbucks.
So there you have it. We conclude that the media industry interns who've run taste tests before and found Starbucks inferior to Dunkin' Donuts have brains too caffeine-addled to properly judge the taste of coffee. If they tended to be in favor of Starbucks, and by a not-so-gigantic margin, their results wouldn't be unusual. But they almost universally praise Dunkin' Donuts and slam Starbucks, with McDonald's in the middle -- the reverse of what everyone else says.
Since most of these losers live on the East Coast, this shouldn't be surprising. I was brought up and lived most of my adult life there too, but middle and upper-middle class people from the Mid-Atlantic through New England, lacking an official state religion, have developed a cult of the honest golden age of America -- it's rather Minnesotan, actually. You see this most clearly in their national pastime -- pretending to give a shit about baseball. Sure, some poor souls really are fascinated by the sport, but for 99% of upper-middle class East Coasters, it is pure affectation. It lets you pretend you have something in common with the people who clean the toilets in your office building, but it's not so contemporary in popularity that you could actually yak with them about it -- they're more likely into NASCAR, basketball, or dog and cock fighting.
Dunkin' Donuts, founded in New England, fits in perfectly with the East Coast retro faux populism -- more 1930s intellectual than 1990s street vagabond. First, it used to have blue collar associations -- you only have to recall that bloated, goofy schmuck who used to be their spokesman. Plus, only proles scarf down donuts. But Fred the Baker hadn't been in commercials since 1997 when he died in 2005, and Dunkin' Donuts now makes most of its money selling coffee, not donuts. I just checked my local supermarket tonight, and the packaged coffee for both Dunkin' Donuts and Starbucks is 75 cents per ounce (although Dunkin' Donuts' coffee is cheaper if you buy it online for some reason). Let's not forget They Might Be Giants recording a commercial jingle for Dunkin' Donuts -- talk about desperate to look cool. (If they wanted real New England populism, they should have asked Brian Dewan to make a commercial like he did for M2 a while back.)
The use of the nickname "Dunkin' " just makes it worse -- or better, for their phony down-to-earth purposes. Can you imagine how quickly you'd be pilloried if you referred to the store as " 'Bucks" or "S-bucks"? It's almost as annoying as referring to Baltimore's baseball team as "the O's" when you couldn't care less about them, although not quite as retarded as Marylanders who invite you to go out for pizza at "Sole D' " (Sole D'Italia). And it's not just into casual talk that this grating nickname forces itself -- this NYT news article uses Dunkin' nearly twice as much as Dunkin' Donuts. Seriously, you fags need to go get a life.
Rebranding Dunkin' Donuts as a yuppie-in-hiding coffee chain has surely been good for their business, but let's get real about whose customers are trying to look cool and oh-so-authentic. It's 2009, not 1996, and affluent posers these days much prefer Dunkin' over Starbucks. For normal people, though, the taste of the coffee served is more important than getting a little help in proclaiming your authentic coolness, and that is why Starbucks controls the market.
August 10, 2009
What's new at the data blog
I've included a table of contents for the data-driven blog in the links bar on the right, below the "detailed info" link. I'll update it with each new entry so that people will be able to see what all is there for purchase, and to ease navigation for those who have paid. The posts there aren't news or current events articles, so if you feel like paying for access once the whole thing is up, you won't be missing out on a passing-interest story. Each Monday I'll also update this site with what's come out in the past week, both the feature-length articles (numbered) and the shorter ones (called "brief").
New entries
Brief: Science knowledge across the lifespan. I use GSS data to construct a 13-question quiz of basic math and science knowledge, and see how well people do on it at different ages. Do people learn more and more, does their knowledge atrophy from lack of use, or does it pretty much stay put once it's in there during your required schooling?
4. Class and religious fundamentalism in red and blue states. Using the GSS, I find the relationship between two measures of fundamentalist religious beliefs and four measures of social class, once for blue states and again for red states. Are fundamentalist beliefs more a function of social class or regional culture?
5. Intelligence and patronizing the arts in red and blue states. Similar to entry 4, but now looking at four measures of going out to arts performances. Same question as before: is having an artsy leisure life more influenced by IQ or by regional culture?
August 8, 2009
Why I don't tweet
Simple -- it would give away my age to the under-25 year-old sugar babies.
Source: "Teens Don't Tweet" (Nielsen Wire).
I recall not long ago when one of my undergrad chick friends left a Facebook status update saying, "what is this twitter all about? i don't get it." All of her friends left similar responses about how pointless it is -- and one may have even mentioned that it was for old people, although I could be imagining that part. Of course, Twitter's pointlessness was not the reason they weren't into it -- hell, they post pointless status updates on Facebook several times a day. But they sensed somehow that it wasn't cool young people doing it, but rather the "mom from Mean Girls" crowd. Young people's social antennae are hyper-sensitive, so don't try to put anything past them.
Source: "Teens Don't Tweet" (Nielsen Wire).
I recall not long ago when one of my undergrad chick friends left a Facebook status update saying, "what is this twitter all about? i don't get it." All of her friends left similar responses about how pointless it is -- and one may have even mentioned that it was for old people, although I could be imagining that part. Of course, Twitter's pointlessness was not the reason they weren't into it -- hell, they post pointless status updates on Facebook several times a day. But they sensed somehow that it wasn't cool young people doing it, but rather the "mom from Mean Girls" crowd. Young people's social antennae are hyper-sensitive, so don't try to put anything past them.
August 7, 2009
Economic lessons of video games
The ravages of a certain sexy academic trend in economics, and its enthusiastic application in the business world, could have been easily avoided if the people involved had only looked to the world of video games, which would have handily disproved their silly little theories. I'm referring to the academic school of thought that believes in the lock-in of inferior technologies due to network effects -- even though any evidence for them is lacking. The application to the management culture was to sacrifice just about everything, including quality, to gain the all-important "first mover advantage" -- even though any evidence for this advantage is lacking.
Academics and other smarties did not grasp how implausible these ideas were, and did not accept the evidence showing so, primarily because the ideas sprang from one of the unofficial religions among nerds -- hating on Microsoft. Every tribe needs its devil, and sub-billionaire nerds will always use Microsoft and Bill Gates for this purpose. Really, many of the seemingly unrelated pieces of their worldview and codified behavior patterns derive from a gut aversion to Microsoft. In reality, Microsoft dominated the markets that it has because its products are consistently rated as better than the others. Read Winners, Losers, and Microsoft (featured in my Amazon box above) for more data, or visit Stan Liebowitz's webpage.
Realizing that they will never snap out of this, I've thought of what other types of evidence we could show them that would not offend their religious sensibilities. It so happens that if we use examples from the hardware and software of video games rather than personal computers, they would recognize right away what the evidence shows, and would at least tone down their ranting about the spread of inferior Microsoft products.
The key idea behind inferior lock-in due to network effects is that adopters of a technology consider not only its utility to the adopter, but also the benefit the user gets from being part of the network of adopters. For example, you should buy something that plays VHS tapes rather than Betamax tapes if the former has a larger installed user base -- say, because you figure that producers of tapes will offer a wider variety in the format that's more common. The worry is that this network effect could overwhelm the intrinsic quality effect, so that an inferior technology became (nearly) fixed just because it got the snowball rolling in its direction early on.
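For the formally inclined, here's a toy adoption model -- a sketch with invented parameters, not anything taken from the lock-in literature -- where each consumer weighs intrinsic quality against a network bonus proportional to market share:

```python
def simulate(q_inc, q_new, b, adopters=100_000, head_start=0.9):
    """Sequential adoption: each consumer picks the console with higher
    utility = quality + b * (current market share). All numbers invented."""
    inc, new = head_start * 1000, (1 - head_start) * 1000  # installed base
    for _ in range(adopters):
        total = inc + new
        if q_new + b * (new / total) > q_inc + b * (inc / total):
            new += 1
        else:
            inc += 1
    return new / (inc + new)  # newcomer's final market share

# Small quality edge: the network bonus keeps the incumbent locked in.
print(simulate(q_inc=1.0, q_new=1.2, b=1.0))  # ~0.0 -- lock-in
# Big quality edge (better games): the newcomer sweeps the market.
print(simulate(q_inc=1.0, q_new=2.5, b=1.0))  # ~1.0 -- quality wins
```

In this toy setup the network term decides the outcome only when the quality gap is small -- which is exactly the pattern the console history below shows.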
For the same reason, real-life managers fought hard to get the "first mover advantage" during the tech bubble because they believed this theory. After all, if it was possible for an allegedly inferior keyboard design to become the standard, or an allegedly inferior videotape format to become standard, due to network effects, we shouldn't worry so much about quality that we sit on our asses and somebody else beats us to getting the snowball rolling. Get to the market first, and while it may not be a lock, we've at least got the positive feedback loop of network effects started for us.
Again, all of this was nonsense, but it was hard for the proponents to see because it validated their geek tastes for exotic keyboard and videotape formats, making them feel superior. But take another geek hobby like playing video games. The near impossibility of dethroning an entrenched incumbent would have meant that Atari would have reigned through the 8-bit era, leveraging its dominant market share and huge installed user base from the previous era of video games. Instead, the Atari 7800 went nowhere, while Nintendo -- who before had not even produced a home console at all -- readily swept them aside.
What led to this no-name newcomer stealing Atari's thunder? -- quite simply, the games for the Nintendo were better than for the Atari 7800. There are surely ratings data from the late 1980s that would support this claim, but this isn't a data-mining blog anymore, so I'll put that confirmation exercise aside for the moment.
Nintendo continued to dominate through the 16-bit era, so now we have another test of the lock-in due to network effects idea -- namely, that they would continue to do so into the next generation. But that didn't happen at all. Once more, a complete newcomer -- Sony -- easily tossed Nintendo off a cliff when it released its PlayStation, whose games appealed to people more than did the games for the Nintendo 64. As Nintendo had before, Sony continued to dominate the following generation of consoles with its PlayStation 2.
So, Sony's locked in their consoles for good, right? Wrong -- in the current generation of home consoles, Sony's PlayStation 3 is in a distant third place among the big three, while Nintendo has completely reversed its beat-down years and come out on top with its Wii.
I could surely go on, but the point is that it was as clear as day that lock-in (whether of inferior or superior products) due to network effects was a silly idea, just looking at video games. Instead, it was the hardware that offered the best software that sold the most units -- sometimes it was the incumbent company (Super Nintendo, PS2), sometimes just the opposite (NES, PS1, Wii). This allows a complete virgin in the video game business to utterly annihilate the reigning giant, as Nintendo did with Atari, Sony did with Nintendo, and now Nintendo has done with Sony in turn.
The first mover advantage was equally obviously illusory, even during the tech bubble of roughly 10 years ago. Take the CD-ROM format for games -- it looked like this was the future, rather than the ROM cartridges that the Nintendo or Sega Genesis used. According to the first mover advantage idea, the first -- or at least one of the very first -- systems to use CD-ROMs should have become the standard. However, it wasn't until the PlayStation (released in 1995) that a system using CDs really took off.
Was the PlayStation the first mover into the CD-based console market? Not by a long shot. Fully three years earlier, Sega released the Sega CD attachment to the Genesis. So, not only did the Sega CD have such a huge lead in launching, but they could tap into the huge installed user base that Sega had built with the Genesis, whereas Sony had no fans to work with, as they had no previous system. But again, everyone thought the games for the Sega CD sucked, so it went nowhere.
Even before that, in the late 1980s and early 1990s NEC released a CD-ROM attachment for their TurboGrafx-16 system, as well as a combined cartridge-cum-CD-ROM system. Also, the Neo Geo CD debuted in 1994, the 3DO in 1993, and the Philips CD-i in 1991. Once again, there weren't enough superstar games to draw people into buying these CD-based systems.
Or we could look at what are called generations of video game consoles -- did the first entrant win? The Super Nintendo came out years after Sega Genesis and the TurboGrafx-16. The Sega Saturn came out four months before the PlayStation but got destroyed by it. In the next generation, Sega's Dreamcast enjoyed more than a one-year lead over the PlayStation 2, and a two-year lead over Nintendo's GameCube, although it too was quickly eclipsed by both of them. In the current competition, Microsoft's Xbox 360 should have benefited from its one-year lead over the Wii, but instead it has been leapfrogged by the Wii.
In all of these cases, the reason that the first mover advantage failed to materialize is that there is no such thing in the first place. Consoles sell based on how appealing their games are to the target audience, period. So it didn't matter that the Sega Saturn came out before the PlayStation -- it didn't have as attractive a library of games, so consumers steered clear of it. Product quality matters, not when you enter the market.
Finally, what if a company were not merely the first mover but the only mover? In the fantasy world of geeks who lament the spread of QWERTY keyboards and Microsoft software, this would surely mean the company would have a lock on the market and in short order establish a monopoly, fleecing the powerless consumers. Rewind to the mid-1990s, when everyone figured virtual reality was the future of video game-like experiences. Its hype extended to popular movies and music videos, and its intrinsic appeal is easy enough to grasp, so the demand for a virtual reality toy was certainly there. Maybe it wouldn't be as cool as what the military used, but it would still be cool, and you get what you pay for.
Against this background of massive hype for virtual reality, Nintendo released the Virtual Boy in 1995, which offered something of a VR experience. Again, not the most high-tech thing you'd ever seen, but if you wanted virtual reality, it was the only game in town, and it wasn't even expensive at $180. However, the games for it were awful, and instead of passively giving in to a piece of shit technology, consumers refused to buy it -- and therefore, any virtual reality system -- altogether. Imagine it: no competition at all, plus the first mover advantage, plus the tailwind of the virtual reality craze of the time -- and yet, this was Nintendo's worst-selling system ever, and it was discontinued the year after its release. This result is the exact opposite of what we'd expect if the inferior lock-in and first mover advantage ideas held any water at all.
It could be that the mostly Baby Boomer academics and managers who ran with these nutty ideas couldn't have drawn on video game evidence first-hand. But now that most nerds and managers are Generation X or younger, it should be much easier to offer evidence that they won't reject. They'll probably carve the universe in two, as most religions do -- the laws of supply and demand, and so on, apply to video games, sure, but anything related to Microsoft is described by inferior lock-in, bla bla bla, because they are the devil, not a real-life company like Nintendo or Sony. But as long as we manage to chip away at the religions of academia and the managerial elite, it's worth it.
Primarily academics and other smarties did not grasp how implausible these ideas were, and did not accept the evidence showing so, because they sprang from one of the unofficial religions among nerds -- hating on Microsoft. Every tribe needs its devil, and sub-billionaire nerds will always use Microsoft and Bill Gates for this purporse. Really, much of the seemingly unrelated pieces of their worldview and codified behavior patterns derive from a gut aversion to Microsoft. In reality, Microsoft dominated the markets that it has because its products are consistently rated as better than the others. Read Winners, Losers, and Microsoft (featured in my Amazon box above) for more data, or visit Stan Liebowitz's webpage.
Realizing that they will never snap out of this, I've thought about what other types of evidence we could show them that would not offend their religious sensibilities. It so happens that if we use examples from video game hardware and software rather than personal computers, they will recognize right away what the evidence shows, and will at least tone down their ranting about the spread of inferior Microsoft products.
The key idea behind inferior lock-in due to network effects is that adopters of a technology consider not only its intrinsic utility, but also the benefit of belonging to the network of other adopters. For example, you should buy something that plays VHS tapes rather than Betamax tapes if the former has the larger installed user base -- say, because you figure that tape producers will offer a wider variety in the more common format. The worry is that this network effect could overwhelm the intrinsic quality effect, so that an inferior technology becomes (nearly) fixed just because it got the snowball rolling in its direction early on.
For the same reason, real-life managers fought hard for the "first mover advantage" during the tech bubble, because they believed this theory. After all, if an allegedly inferior keyboard design or videotape format could become the standard through network effects, then we shouldn't fuss so much over quality that we sit on our asses while somebody else gets the snowball rolling. Get to market first, and while it may not be a lock, we've at least got the positive feedback loop of network effects working for us.
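To make that logic concrete, here is a minimal simulation sketch -- my own illustration, not anything from the lock-in literature -- in which each period one consumer buys whichever of two consoles offers the higher total utility: intrinsic quality plus a bonus proportional to that console's share of the installed base. All names and parameter values are made up for illustration.

    import random

    def simulate(quality_a, quality_b, network_weight, b_launch=20, steps=200, seed=0):
        """Toy adoption model: console A launches first; console B launches later but is better."""
        random.seed(seed)
        base_a, base_b = 0, 0  # installed user bases
        for t in range(steps):
            total = base_a + base_b + 1  # +1 avoids division by zero
            # Utility = intrinsic quality + network bonus from installed-base share.
            util_a = quality_a + network_weight * base_a / total
            util_b = (quality_b + network_weight * base_b / total
                      if t >= b_launch else float("-inf"))  # B not yet on sale
            # A noisy consumer buys the higher-utility console this period.
            if util_a + random.gauss(0, 0.5) > util_b:
                base_a += 1
            else:
                base_b += 1
        return base_a, base_b

    # B is clearly better but launches 20 periods late; it still ends up
    # with the larger installed base despite A's head start.
    print(simulate(quality_a=1.0, quality_b=2.0, network_weight=1.0))

Crank network_weight up high enough and A does lock in -- that is the scenario the theorists feared. But with quality gaps like the ones in the console history below, the better latecomer wins anyway, which is exactly what the sales records show.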
Again, all of this was nonsense, but it was hard for the proponents to see because it validated their geek tastes for exotic keyboard and videotape formats, making them feel superior. But take another geek hobby, like playing video games. If dethroning an entrenched incumbent were nearly impossible, Atari would have reigned through the 8-bit era, leveraging its dominant market share and huge installed user base from the previous era of video games. Instead, the Atari 7800 went nowhere, while Nintendo -- which had never even sold a home console in the American market before -- readily swept Atari aside.
What led to this no-name newcomer stealing Atari's thunder? Quite simply, the games for the Nintendo were better than those for the Atari 7800. There are surely ratings data from the late 1980s that would support this claim, but this isn't a data-mining blog anymore, so I'll put that confirmation exercise aside for the moment.
Nintendo continued to dominate through the 16-bit era, so now we have another test of the lock-in idea -- namely, that Nintendo's network effects would carry it into the next generation too. That didn't happen at all. Once more, a complete newcomer -- Sony -- easily tossed Nintendo off a cliff when it released the PlayStation, whose games appealed to people more than did the games for the Nintendo 64. As Nintendo had before, Sony went on to dominate the following generation of consoles with its PlayStation 2.
So Sony has locked in the console market for good, right? Wrong -- in the current generation of home consoles, Sony's PlayStation 3 sits in a distant third place among the big three, while Nintendo has completely reversed its beat-down years and come out on top with the Wii.
I could surely go on, but the point is that, just looking at video games, it was as clear as day that lock-in due to network effects (whether of inferior or superior products) was a silly idea. Instead, the hardware that offered the best software sold the most units -- sometimes the incumbent's (Super Nintendo, PS2), sometimes the challenger's (NES, PS1, Wii). That is how a complete virgin in the video game business can utterly annihilate the reigning giant, as Nintendo did to Atari, Sony did to Nintendo, and now Nintendo has done to Sony in turn.
The first mover advantage was just as obviously illusory, even during the tech bubble of roughly 10 years ago. Take the CD-ROM format for games -- it looked like the future, compared to the ROM cartridges that the Super Nintendo and Sega Genesis used. According to the first mover advantage idea, the first -- or at least one of the very first -- systems to use CD-ROMs should have become the standard. However, it wasn't until the PlayStation (released in 1995) that a CD-based system really took off.
Was the PlayStation the first mover in the CD-based console market? Not by a long shot. Fully three years earlier, Sega had released the Sega CD attachment for the Genesis. So not only did the Sega CD have a huge head start, it could also tap into the large installed user base that Sega had built with the Genesis, whereas Sony had no fans to work with, having never released a console before. But again, everyone thought the games for the Sega CD sucked, so it went nowhere.
Even before that, in the late 1980s and early 1990s, NEC released a CD-ROM attachment for its TurboGrafx-16 system, as well as a combined cartridge-and-CD-ROM system. The Philips CD-i debuted in 1991, the 3DO in 1993, and the Neo Geo CD in 1994. Once again, there weren't enough superstar games to draw people into buying these CD-based systems.
Or we could look at the so-called generations of video game consoles -- did the first entrant win? The Super Nintendo came out years after the Sega Genesis and the TurboGrafx-16. The Sega Saturn came out four months before the PlayStation but got destroyed by it. In the next generation, Sega's Dreamcast enjoyed more than a one-year lead over the PlayStation 2 and a two-year lead over Nintendo's GameCube, yet it was quickly eclipsed by both. In the current competition, Microsoft's Xbox 360 should have benefited from its one-year lead over the Wii, but the Wii has leapfrogged it instead.
In all of these cases, the reason the first mover advantage failed to materialize is that there is no such thing in the first place. Consoles sell based on how appealing their games are to the target audience, period. So it didn't matter that the Sega Saturn came out before the PlayStation -- it didn't have as attractive a library of games, so consumers steered clear of it. Product quality matters, not when you enter the market.
Finally, what if a company were not merely the first mover but the only mover? In the fantasy world of geeks who lament the spread of QWERTY keyboards and Microsoft software, this would surely mean the company would have a lock on the market and would in short order establish a monopoly, fleecing powerless consumers. Rewind to the mid-1990s, when everyone figured virtual reality was the future of video game-like experiences. Its hype extended to popular movies and music videos, and its intrinsic appeal is easy enough to grasp, so the demand for a virtual reality toy was certainly there. Maybe it wouldn't be as cool as what the military used, but it would still be cool, and you get what you pay for.
Against this background of massive hype, Nintendo released the Virtual Boy in 1995, which offered something of a VR experience. Again, not the most high-tech thing you'd ever seen, but if you wanted virtual reality, it was the only game in town, and it wasn't even expensive at $180. However, the games for it were awful, and instead of passively giving in to a piece of shit technology, consumers refused to buy it -- and therefore any virtual reality system -- altogether. Imagine it: no competition at all, plus the first mover advantage, plus the virtual reality craze of the time -- and yet this was Nintendo's worst-selling system of all, discontinued the year after its release. That result is the exact opposite of what we'd expect if the inferior lock-in and first mover advantage ideas held any water.
It could be that the mostly Baby Boomer academics and managers who ran with these nutty ideas couldn't draw on video game evidence first-hand. But now that most nerds and managers are Generation X or younger, it should be much easier to offer evidence that they won't reject. They'll probably carve the universe in two, as most religions do -- the laws of supply and demand apply to video games, sure, but anything related to Microsoft is described by inferior lock-in, bla bla bla, because Microsoft is the devil, not a real-life company like Nintendo or Sony. But as long as we manage to chip away at the religions of academia and the managerial elite, it's worth it.
August 5, 2009
Aesthetics and our evolutionary history
I might write up longer reviews later, but short plugs will have to do for now. I recently finished Denis Dutton's The Art Instinct, as well as Painting by Numbers, a write-up of the Komar and Melamid project to find out what people truly desire in paintings. They're the first two books in my Amazon box above, and both are cheap.
The Art Instinct does a decent job of summarizing the findings of evolutionary aesthetics -- that is, what evolution-minded social scientists have discovered people like, and perhaps why that's so, given how we evolved.
But the real value of this book is that Dutton is an aesthetics philosopher, and so is far more knowledgeable about aesthetic questions and debates than the typical evolution-minded writer. Most participants in these debates argue mostly on intuition, without really trying to justify their hunches. Dutton puts them under the evolutionist's microscope and shows why the gut feelings of even ivory tower aestheticians are mostly adequately explained by features of sexual selection.
For example, if part of the ultimate reason that artists make art is to signal their greater genetic quality to potential mates and allies, then finding out that they somehow faked it would disgust the audience -- even though the purely aesthetic qualities of that artwork had not changed. We feel the need to penalize them for dishonest signalling, not for having created ugly art -- we liked it perfectly well before we found out it had been faked.
This is just one case, but most of the middle and later sections of the book are like this -- Dutton illuminating debates both old and new in aesthetics by casting them in terms of evolution. This makes the book much more fascinating for the target audience, who probably know a lot about evolution or evolutionary psychology but much less about art and aesthetics.
In this way, it is unlike the dry lists of "things humans find attractive" and why that makes sense in light of our evolutionary history. Those are neat too, but they don't provide you with any new data -- you already knew that hourglass shapes were hotter than pear shapes, that young girls and dirty old men go together better than young bucks and dirty old women, that we like fatty and sweet foods, etc. The fun part is in speculating about, and hopefully testing ideas about, where these tastes come from.
I'll bet you didn't know much about our innate tastes for landscapes, though, or for painting in general. Dutton, along with many others, mentions the work of the artists Komar and Melamid, who got funding to survey a random, representative sample of people about their tastes in art. The result was essentially a landscape painting, mostly blue but with a fair amount of green too, featuring a body of water, women and children, and domestic animals in their natural habitat.
Their surveys contain much more information than the summaries of their work usually include, though, and for that you have to read Painting by Numbers. It's a very cool book. It has an opening description of the project and extensive interviews with Komar and Melamid, which are pretty funny for their view into how out of touch the Art World is with people's tastes -- or even the suggestion that artists *should* concern themselves with what people want. The full tables of data are included, as well as the resulting paintings that would most please and most offend the tastes of the people in the various countries surveyed.
There's also a pretty boring essay by Arthur Danto, a token inclusion of "the other side," but you can easily skip that.
For the data, paintings, and humorous interviews, it's well worth getting your hands on. I should add for the data junkies out there that the data are cross-tabulated, so you can see how the preference for red as a favorite color changes across education levels, income levels, age, sex, geographic region, and so on -- or how a preference for busy vs. tranquil paintings varies across those groups. Fun stuff to flip through -- and there's a lot of it.
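For anyone who hasn't worked with cross-tabs, here's a toy example of the kind of table the book prints -- the survey rows below are made up by me for illustration, not taken from Komar and Melamid's data:

    import pandas as pd

    # Hypothetical survey responses, for illustration only.
    survey = pd.DataFrame({
        "education": ["high school", "college", "college",
                      "graduate", "high school", "college"],
        "favorite_color": ["red", "blue", "red", "blue", "blue", "blue"],
    })

    # Row-normalized cross-tab: the share preferring each color
    # within each education level.
    print(pd.crosstab(survey["education"], survey["favorite_color"],
                      normalize="index"))

Each row of the output sums to one, so you can read off at a glance how the taste for red shifts as education rises.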
So this turned out longer than I'd planned, meaning I probably won't write up longer reviews. Still, I'll probably be referring to them sometime soon, as they're packed with lots of ideas to pursue further. In any case, I recommend them both.
August 3, 2009
Black IQ and climate, rethinking the decline in formality, and changes in arts appreciation
Those are the first three articles that I've posted to my new blog, Patterns in science and culture, where all of my data-rich posts will go from now on. Read about it in more detail here. I decided to make it pay-by-article rather than subscription-based, just so you know exactly how much you'll get; plus, each long post costs only fifty cents ($10 for 20 in-depth articles), with shorter data-containing posts thrown in for free.
I took to heart what bbartlog said about free competing with not-free, so I'm not going to have the two sites compete at all. This blog, and what I write for GNXP, will have no data -- facts, arguments, sure, but not a look at a set of data -- while the for-purchase site will have all of my data-driven posts, whether in-depth or casual. That should distinguish the two brands, as they say.
So things here will be more observational and brief. If you want an original in-depth look at the topics I normally cover, or a quick look at just about anything (say, from the GSS), you'll have to visit the Patterns in science and culture site. I've put a new PayPal button up at the top of this site. You're only ten dollars away from a limited edition of 20 handcrafted artisanal articles, rich with authentic insight and detail. Own your piece of blogging history today.
August 2, 2009
Table of contents for data-rich blog
1. Climate and civilization among Blacks. I look at how climate affects IQ, imprisonment rates, and college degree-earning rates among Blacks, using state-level data. This is a follow-up to a similar post I wrote about Whites.
2. Was there a decline in formality during the 20th C? Here, I look at data on changes in naming preferences that question the widespread view that we've "become less formal."
3. Are the arts in decline? I've dug up annual data on theater attendance and the number of playing weeks for both Broadway and road shows from 1955 to 2006. I discuss the overall trend, the notable departures from the trend, and how in-synch or out-of-synch the Broadway and road show data have been over time.
Brief: Science knowledge across the lifespan. I use GSS data to construct a 13-question quiz of basic math and science knowledge, and see how well people do on it at different ages. Do people learn more and more, does their knowledge atrophy from lack of use, or does it pretty much stay put once it's acquired during required schooling?
4. Class and religious fundamentalism in red and blue states. Using the GSS, I find the relationship between two measures of fundamentalist religious beliefs and four measures of social class, once for blue states and again for red states. Are fundamentalist beliefs more a function of social class or regional culture?
5. Intelligence and patronizing the arts in red and blue states. Similar to entry 4, but now looking at four measures of going out to arts performances. Same question as before: is having an artsy leisure life more influenced by IQ or by regional culture?
Brief: Do Asians consume boat loads of carbohydrates? In order to see whether Asians consume lots of rice, or carbs in general, as many believe, I look at USDA international data on grain consumption per capita for India, Indonesia, South Africa, Iran, Japan, China, South Korea, Russia, Brazil, Mexico, Egypt, Australia, Hungary, Canada, and the U.S. I've broken down each country's consumption by grain in two tables, and made a graph of total grain consumption per capita for easy comparison. Grains studied include barley, corn, oats, rice, rye, sorghum, and wheat.
6. The rate of invention from 0 to 2008 A.D. I've found a book with 1001 world-changing inventions, and I've transcribed the dates and plotted the number of inventions over time, by century, half-century, decade, year, and a 10-year moving average of the yearly data. I've written before about the slowing pace of innovation since Bell Labs and the DoD were broken up in the mid-1980s, using a dataset of 100 modern inventions, so this allows for an independent test of that claim. (And the new post obviously gives a clearer picture since there are 10 times as many data-points.) It also puts recent trends in larger historical perspective. I discuss some plausible genetic and institutional causes for the rise of invention. There is not only a trend that stretches across centuries, but an apparent cycle on the order of human generations.
7. The changing social climate of young people from 1870 to present. I quantitatively search through the archives of the Harvard Crimson (the undergrad newspaper) to see how the zeitgeist has changed over time. Young people typically leave very little written record, let alone over such a long stretch of time, so this presents a uniquely fine-grained picture of the social forces they faced. The topics include identity politics (with five topics and a composite index), religion (also five topics and a composite), and generational awareness. There are some things that everyone knew, but there are quite a few surprises, such as when the obsession with racism or sexism peaks. There are large swings up and down over time, supporting a cyclical view of history. I discuss what kinds of processes or models are necessary to explain such patterns.
Brief: Relationship anger by political views for men and women. The stereotype is that liberal women are more combative and temperamental in relationships, compared to the more docile and even-headed conservative women. I look at the GSS and see if it's true. I look at the same question for men to see if the pattern is different.
Brief: Have we gotten more or less sympathetic since Adam Smith's time? Here I look at the NYT's coverage of Japan and Indonesia over the past 30 years to see if it is driven more by sympathy for their plight or fear about the threat they pose to us. This tests Adam Smith's claim that we care more about nearby disasters than faraway ones -- and so, whether things have changed much since his day. By splitting sympathy into two components, I argue that the data show we've become more sympathetic in one way, but have stayed the same in another.
8. Youthful exuberance: Age and the housing bubble. Here I investigate the basis for the string of stories we heard about 4 years ago about the increasing number of 20-somethings who refused to grow up and were more and more living at home through their adult years. I use homeownership data broken apart by age group to see how the age distribution of homeowners has changed since 1982, how the homeownership rate has changed for the various age groups -- especially the very youngest group (under 25) that the stories were talking about -- and how the changes in homeownership rates during the housing bubble compare across demographic groups. For example, did under-25 people enjoy a larger jump than Hispanics or single mothers? This provides a useful way to compare the trends among age groups.
Brief: When did elite whites start obsessing over blacks? I search the Harvard Crimson archives for "negro" to see when African-Americans started to enter the consciousness of elite whites, and at what speed that attention increased afterward. This allows me to test whether whites were complacent or ignorant before the Civil Rights movement, as one popular view has it, and were woken up by the events of the 1950s and later.
9. Has the free market been taken too seriously or not seriously enough? Paul Krugman recently claimed that one reason economists failed to predict the current crisis is that they weren't sufficiently skeptical of the market to bring about desirable outcomes. I search the econ journals in JSTOR for various phrases relating to market failure -- asymmetric information, adverse selection, network externalities, and irrational exuberance -- to see whether or not market skeptics have gotten a lot or a little attention over the past 30 to 40 years. To put things in context, I also compare the popularity in each year of these new ideas to the popularity of standard economic concepts like supply and demand.
10. What predicts income dissatisfaction? I use GSS data to test the idea that the higher your status, the more dissatisfied you'll be with your income. For example, it could be that people's expectations of their livelihood rise faster than their income. I show how income dissatisfaction changes according to income, class identification, job prestige, intelligence, education, age, race, and sex. Surprisingly, the sex difference is the largest of all.
11. How are religiosity and teen pregnancy related? States with higher religiosity scores also have higher teen pregnancy rates, but does this pattern reflect individual-level patterns or not? I use the GSS to see whether age at first birth predicts greater religiosity -- that is, if the state-level pattern is just an individual-level pattern writ large -- or if teen mothers are less religious, so that their state's greater religiosity is just a response to their reckless behavior. Using three measures of religious beliefs and three measures of religious practice, I find evidence of both forces at work.