June 29, 2020

Rent cathartic experiences, own addictive ones (theory and examples from media, arts, entertainment, sex)

The discussion of ad-infested streaming entertainment made me seriously think about buying mp3s for the first time in my life, rather than relying on YouTube for songs that are too recent for me to own on CD, and that probably wouldn't be worth owning the whole album for anyway. This led me to a more general conclusion on whether to rent or own, based on how frequently and repeatedly you want to re-experience something: in short, whether it is cathartic or addictive. But more on that later.

Till now, for songs that I didn't own on CD, streaming via YouTube has been the best option for price and convenience: the only cost is an internet connection, and with an ad-blocker installed on your web browser, you don't have to pay the cost of watching ads. However, with the recent aggressiveness of YouTube in the arms race against ad-blockers, it's possible that the era of cost-free and ad-free streaming will be as brief as the era of pirated file-sharing in the early-mid 2000s.

Today, the option that the IT cartel is pushing is paid streaming services. Even Apple threw in the towel on that one, despite having been pioneers in buy-to-own digital music distribution. See here for an overview. You pay a flat fee per month, and get "unlimited access" to their music library. Spotify Premium is the leader there, costing $10 per month to stream anything in their library without ads.

The alternative, older model is buying mp3s a la carte from some site that is still offering them, say the iTunes store. I didn't use that model when it was popular, but only because I was buying CDs -- not because I was pirating 100s or 1000s of mp3 files. I was still buying-to-own, and supplementing that in the 2010s with streaming via YouTube (free, no ads) to get songs that were too hard to find on CD or weren't worth the cost of the entire album.

There are certain advantages of buying over streaming that are obvious to normal people -- though not necessarily to the IT geeks who dominate discussion of these matters -- but have already gotten a fair amount of treatment.

Briefly, these things stem from uncertainty. Will the streaming service exist 1, 2, 5, 10 years down the road? If not, you no longer have access to jack shit, and you have nothing to show for all the years that you did -- you were not renting-to-own. What if the corporate board of some record label decides, for any reason, to remove songs you like from a streaming library, or refuses to make them available in the first place? Now you have unlimited access to far fewer of the songs you want, just like the absence of Star Wars and other mega-popular movies on Netflix. What if they merely put an expiration date on their songs for streaming access, just like movies on Netflix? Great -- unlimited access to a limited-time-only experience.

Here, the instability comes from the fact that the streaming platforms are distributors only; they did not produce the songs to begin with. The actual producers -- the record labels, the movie studios, etc. -- have final say over what is available in the libraries of any streaming service.

But there is a separate, undiscussed matter of genre -- is some form of entertainment worth experiencing again and again, or are you unlikely to want to experience it again -- even if you liked it the first time? If you will feel like experiencing it again, you should own it for good, so you don't have to pay an ongoing fee, time after time. If you are unlikely to experience it again -- even when you already liked it the first time -- then you might as well rent it once, rather than own something that will sit idle potentially forever.

The contrast between feature-length movies and songs is revealing. We'll start with history, because that tells us about evolution and adaptation. If some model has passed the test of time, that means more than a model that has only cleared the "possible fluke, but not absolute failure" test.

The overview in Variety linked above quotes Steve Jobs as stating flatly that people don't want to rent their music, they want to own it -- as though Jobs had been proven wrong by subsequent developments in the streaming music model. But he was correct -- if people wanted to rent music, they would have done so at any point in the past, in at least some format of recorded music. And yet, nobody rented music at any time in the past, in any format (reel-to-reel tapes, vinyl records, 8-tracks, cassettes, CDs, mini-discs, etc.). Public lending libraries might have had a music section, but they were free to check out, and nobody relied on that as their primary way of listening to music, only to sample albums occasionally.

The closest example of renting music is a live concert or a night at a club -- you pay to hear music that you cannot play back or re-experience later on. But notice that a concert or night at a club lasts for hours, unlike a single song or even album, and that it is much more of a spectacular event than listening to songs or albums. And again, that was never the primary way of listening to music for anyone at any time. Nor are streaming services offering anything similar to an IRL concert -- just those same old single songs or albums that you could've bought on CD, not the spectacular event.

There were no technical reasons that prevented people from renting music in earlier times. Hit songs were released as singles, if listeners just wanted that one song, and albums were popular as well. So why not rent a single 45 / tape / CD that was your earworm du jour? Or the album that all your music friends recommended? For some reason, people just don't want to rent music.

That means that the current moment of paid streaming music services is the anomaly of dubious durability, not the old iTunes store. Especially when it is used as the primary way of listening to music. It doesn't matter what exactly is causing the anomaly -- IT geeks chasing a shiny new app for novelty's sake, a professional class so flush with free money from the central bank that they have nothing better to spend it on than music subscriptions, an increasingly cartelized supply side dominating the demand side against the consumers' preferences, or whatever else it may be. The point is, people do not want to rent music, they want to own it.

And yet look at movies. For decades the main way people saw movies was renting them -- paying at the box office to see one in a theater, after which they could not see it again without paying another rental fee. Renting vs. owning does not depend on where you experience it, or where you pay for it. Watching a movie in a theater is renting.

After theaters, the most common way was renting from a store, across all manner of formats -- Betamax, VHS, LaserDisc, DVD, Blu-ray, and even film prints for niche audiences. In the 2000s, people rented movies through the mail, with the original Netflix. Then Redbox stands popped up to cater to yet another form of demand for renting movies.

Even when people bought these formats, it was not necessarily to own them. It was also common to buy a movie, watch it a few times max, and then sell it into the second-hand market. That is a form of renting -- paying an a la carte fee to temporarily enjoy the thing. Maybe the rental period was a bit longer than 3 days, and was decided by the viewer, and maybe the rental fee was steeper as a result. (You buy for $20, sell for $5, net rental cost is $15 as opposed to $5 or whatever for a standard 3-day rental.) Still, buy-then-sell is just a form of renting, not owning.

For the past decade, as Netflix switched from physical to digital rentals, the most common way of seeing movies outside of theaters -- certainly if they are no longer in theaters -- is renting via streaming. That does not depend on the particular terms of the rental -- a flat subscription fee for "all access," a rental per title akin to the old video rental stores, or whatever else comes down the pike.

* * *

Over all these decades of watching movies, renting was always more common than buying-to-own, no matter the format. So, unlike songs, movies are something that people do want to rent rather than own. What distinguishes songs from movies? At the surface level, people want to listen to specific songs over and over again, while they can be satisfied after watching a specific movie once in their entire lifetime.

At a deeper level, this reflects two separate traits. First, movies are narratives with clearly defined informational content (the who, what, when, where, why and how of the plot), whereas songs are not narratives and do not have such clearly defined semantic content, nor is this information of central concern to listeners (the lyrics are more evocative, and listeners care more about the music itself).

A major part of watching a movie is learning the who / what / when / where / why / how, which is why people don't like having the entire plot spoiled ahead of time. Once you've learned that, it's hard to forget, so why watch it again? Song lyrics are evocative, not narrative, so there's not much to learn about what they're conveying, and we wouldn't mind if somebody told us in advance what the lyrics were.

Songs are more about evoking a mood, whether vague or highly detailed, for the listener to resonate with. So whenever you want to get into that mood -- or are already in that mood, but want some amplification -- you will feel like playing a specific song that you know works in setting the desired mood. That's why you keep coming back to that one, and the others like it.

Second, movies are a more cathartic emotional experience than a single song or even an entire album. Movies put your mind in a refractory state at the end. After watching a movie that you like (we understand why you wouldn't want to re-watch a movie that you hated), you can't watch it again right afterward. Probably not even the next day -- maybe the next week, although more likely within the next year, or years later. Listening to a song you like does provide a little catharsis, with its build-up / climax / winding-down of tension, but not nearly as much as a feature-length movie. You can easily put a favorite song on repeat 10 times in a row.

The same goes for entire albums -- and although some are as long in duration as a movie, they are not as cathartic. So it isn't duration per se that matters. What matters is that albums are usually not structured into a holistic gestalt like a movie, but more like an anthology. They can be decomposed into the separate songs, and you could play them in any order and still enjoy the album. Albums are not narrative either, but lyrical and evocative. Because the scale of the catharsis is smaller for albums than for movies, it's no big deal to put an album on repeat, or at least play it once a day, week, etc., unlike playing the same movie multiple times per day, week, etc.

Novels are like movies, and most readers would rather rent them from a library or buy-then-sell. Today novel-reading is more of an activity for collectors, enthusiasts, and the like, so there is more buy-to-own behavior -- but that wasn't so back when the novel was a mass-based medium.

TV shows can go either way. The heavily serialized narratives of prestige TV are like movies, and those are the ones people prefer to rent (stream). Anthologies that are less drama and more comedy, like the golden age of the Simpsons, people would rather own than pay an ongoing rental fee for. The bits, the sight gags, the punchlines are like riffs from a song that never get old no matter how many times you experience them. Others may rent such shows for trial-and-error purposes, or to briefly re-visit a show for nostalgia reasons. But those who really like the early years of the Simpsons would rather own them than rely on streaming them.

Video games can go either way as well. A pseudo-movie with a narrative focus and cathartic experience -- just rent it. Even if you liked it, you couldn't start it over for a while anyway. I've been out of the loop on video games since the 2000s, but I have regularly heard video game players complain about buying one of these narrative games, and then it just sits on the shelf without getting played again, even if they liked it. They would've preferred renting it.

For a non-narrative game that is more of a steady addiction than a catharsis with a refractory phase at the end, own it so you don't have to pay an ongoing fee to feed your ongoing addiction. Early video games were like this. They were non-narrative and did not produce a soaring high and satisfying ending, but rather more of a steady engagement of interest. That was true for side-scrolling platform games like Super Mario Bros. 3, racing games like Super Mario Kart, or first-person shooters like GoldenEye 007. Not to mention arcade games from the '80s or '90s.

It may be true that, during the first couple decades of the medium, video games were popular at video rental stores, seemingly against the theory here. But that was just because the target audience was children, and renting them was only for cost reasons -- not because they were a standalone experience that the players only wanted to rent once and be done with, perhaps for life. The same goes for arcade games -- players would rather have owned them than rented the machine in the arcade by pumping in quarters. But that would've been far too expensive.

The pattern is easier to see today, when the audience is more grown-up and has more disposable income, and when the expensive original physical devices are not the main way people play these games (arcade cabinets are rare, and home cartridges and discs are hard to come by as well). Instead, people play low-cost digital emulations of the original games from the '80s or '90s. And it turns out people would rather download them to own -- e.g., from Nintendo's Virtual Console store on its home consoles since the Wii -- than pay a subscription or a la carte rental fee to access them for a limited time.

Sports events are like movies. They have a highly narrative focus -- who did what when, how did the score progress, and who ended up winning. And they tend to be highly cathartic, with tension building up, reaching a climax, and leaving viewers satisfied for a while after. The action may be unscripted, unlike a movie, but it follows a similar course and produces a similar effect in the audience. And sure enough, almost no one wants to own a recording of a sports event they've already seen, even one they highly enjoyed. Just rent it by forking over the pay-per-view fee, or buy a ticket to see it in person (like a music concert), and then never worry about it or pay for it again. Only events that reward repeated viewing (historical records reached, or whatever) would people want to own.

The same goes for other performing arts. If you can't or won't make it in person to the ballet, opera, theater stage, or circus arena, first there must be a recording of the performance. Assuming there is, and even assuming you like it, would you rather rent it or own it? Rent, of course. Those performances, even when not strictly narrative like some ballets, still have a holistic gestalt organization that cannot be decomposed into pieces that could be rearranged and enjoyed in any order. So there still is some long, complex sequence of events that unfold in a certain order. And they are more cathartic than a song or album.

For high / classical music, operas would come last for owning (rent a performance or recording), followed by symphonies, which are highly structured and holistic. What most people buy-to-own in classical music is anthologies, whether intended that way by the composer or curated later by a publisher. That's why a typical listener's classical music collection is less likely to have the complete symphonies of Beethoven or Wagner's operas, and more likely to have Chopin's nocturnes, Bach's fugues, and Schubert's lieder. As much as those latter three may differ, they share a lower level of "narrativity" compared to symphonies and operas. That makes them more suited to regular play, perhaps even on repeat if a piece is catchy enough and puts you in the mood you want to be in.

Finally, to show the generality of the theory by providing a case study outside of media, arts, and entertainment, consider what kinds of experiences people have paid prostitutes for over the millennia. They are renting her body, not owning it. What do you know, the theory checks out again -- the acts are the highly cathartic types of physical intimacy, not the acts that just set a mood and that do not produce catharsis and a refractory period afterward. They all involve the guy climaxing.

Why not acts like kissing, petting, fondling, and so on? Assuming he liked the girl, and he's in a horny mood, but either didn't want to or couldn't afford to pay for climaxing, why not at least pay for making out? Because that just leaves the john wanting more, like a catchy song, and would lead to paying over and over for an indefinite make-out session that kept him in that mood. Those addictive rather than cathartic acts are more for a girlfriend or wife, someone whose body you come closer to owning than renting. If he rented those acts, he would be visiting the prostitute every day, rather than only once in a while for climaxing.

Strange as it may seem, the guy's primary partner (gf / wife) is defined more by her engaging in these lower-intensity addictive acts than in the high-intensity cathartic sex acts. He may have a regular side piece, irregular one night stands, or visits to prostitutes -- but those all involve climaxing only, not necessarily making out and other lower-level acts. The sign that he wants to own your body, rather than just rent it, is engaging in the mood-setting, addictive, touchy-feely, lovey-dovey stuff in an indefinite, ongoing manner -- not the climax that he might feel OK experiencing with a rental body in a one-and-done manner.

Women realize that, and are more likely to get jealous and vengeful if they picture their bf / husband in a hot-and-heavy make-out session of indefinite duration, where even after he's left her room he may want to go back to her in only a matter of hours. Not so much if they picture a pelvic jackhammering bound to end with him in a refractory period sooner rather than later, therefore wanting to GTFO of her room for the next several days / weeks / months, and return to his gf / wife.

The generality of the theory holds up so well that when we see departures from it, we can treat them as anomalies requiring special, perhaps case-specific, explanations. Fad, fluke, fashion. An imbalance of forces between the two sides that renders the decider unable to do what they prefer. And so on and so forth. We don't just assume that some micro-trend du jour proves, or disproves, some grand claim about human nature -- especially in light of the relevant history. To reiterate the original motivation here: people do not want to rent music, they want to own it.

June 27, 2020

Ads as mandatory patronage markers, not attempts to persuade consumers

I won't be watching or linking to YouTube on my PC for the time being, since they've rolled out new anti-ad-blocker software. It ruins the experience, and they must know that anyone who has gone to the trouble of installing an ad-blocker is not susceptible to the effects of advertising. They'll never click on them, never buy their products, and will downgrade the reputation of the brand in their mental filing system for being annoying and invasive.

Installing an ad-blocker is simply a convenient way to let the advertisers know not to waste their time, resources, and money showing their ads to us, and to re-direct those attempts to people who either don't care or actually like ads and consumerism enough to look at them.

From a strict efficiency standpoint, going out of your way to circumvent ad-blockers -- all the more costly because it's an ever-evolving arms race, requiring an ongoing investment of resources, not just a one-time sum -- is the worst possible way to show ads to potential targets. You wouldn't advertise HIV tests to anyone other than gay men, and you wouldn't advertise anything at all to the minority who insist on an ad-free experience.

So why do the corporations insist that ads of their products and services be shown by the "content" distributors? It's clearly not to persuade potential customers.

They were most honest about their role back in the New Deal era, when they would say something like, "Today's program is brought to you by the good folks at the Willowby Cereal Company," or how PBS would say, "This programming is made possible by generous grants from the Stickler Foundation".

It was not about persuasion, but patronage. And it was not targeted at potential customers of the patron's products and services, but actual audience members of the program. The people who provided the funding for the program wanted its audience to know it was them, not anyone else, who made it happen. Without their largesse, you wouldn't be enjoying your entertainment. So, be grateful and think well of them, since they didn't have to fund it. It was designed to improve their status or reputation as benefactors. The ad, then, was more of a dedication to honor the patron of the "content creator".

But unlike the respectable patrons of earlier times, the new ones sought to repeatedly interrupt the audience's experience to remind them who the benefactors were. Not like a single page at the front of a book, or a logo / credit at the start of a movie, or a name above the entrance to a building. Repeated hijacking of the cultural experience oversteps the role of the patron, and naturally forces the audience to take counter-measures. In the TV and radio era, you simply muted the audio, station-surfed, or recorded it to watch / hear later while fast-forwarding through the commercials.

It was bothersome, but necessary for maintaining your dignity as an audience member, and ensuring that the corporations were not rewarded for violating the norms of the patron-client-audience relationship.

The internet patrons are even worse than the TV patrons, and not for technical reasons, but rather because things keep getting worse over time regardless of medium. There was no technical impediment to TV advertisers using a simultaneous overlay method, rather than a serial interruption. The emergency broadcast test did so just fine. Rather, the internet ad people's resort to pop-up ads and overlay ads on videos -- in addition to ads running before, interrupting, or after a video, a la TV -- is just a further move in the same direction. If TV were still the most important medium, they would be running overlay ads on the "content".

For that matter, there was no technical impediment to internet patrons using these methods during the heyday of Web 2.0 in the late 2000s. YouTube itself did not have content-hijacking ads back then. Nor did MySpace's super-popular music player, or early-era Facebook's imitation of it. All these media players do is retrieve digital content from some database, by the file's ID -- why not program the player to first retrieve an ad by the exact same method, and then retrieve the desired file? They could have, but they did not.
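To make that concrete, here is a minimal sketch of the "ad first, then content" logic, written in TypeScript against a generic lookup-by-ID endpoint and an HTML5 audio element. It is purely hypothetical -- the endpoint path and function names are my own inventions, not YouTube's or MySpace's actual code -- but it shows how little extra work a Web 2.0-era player would have needed to hijack the content this way.

async function fetchMediaUrl(mediaId: string): Promise<string> {
  // Hypothetical backend: look up any piece of media by its ID, get back a file URL.
  const res = await fetch(`/api/media/${mediaId}`);
  const { url } = await res.json();
  return url;
}

function playUrl(player: HTMLAudioElement, url: string): Promise<void> {
  // Resolve once the file has played through to the end.
  return new Promise<void>((resolve) => {
    player.src = url;
    player.onended = () => resolve();
    player.play();
  });
}

async function playWithPrerollAd(player: HTMLAudioElement, contentId: string, adId: string) {
  const adUrl = await fetchMediaUrl(adId);          // retrieve an ad by the exact same method...
  await playUrl(player, adUrl);                     // ...interrupt the experience with it first,
  const contentUrl = await fetchMediaUrl(contentId);
  await playUrl(player, contentUrl);                // ...then retrieve and play what the user asked for.
}

// e.g., playWithPrerollAd(document.querySelector('audio')!, 'song-42', 'ad-7');

The point of the sketch is the obvious one: the capability was trivially there all along, so the restraint of the Web 2.0-era players was a choice, not a technical limitation.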

You might think the crucial difference between old and new patrons is that the old ones had loads of money, but did not amass it by making products or offering services for sale, whereas the new ones are for-profit businesses who want not only the dedication but also to hawk their own products to potential consumers.

But then look at all the PR these businesses are doing for the Black Lives Matter protests and related events. They're inserting their names and images into the public consciousness while the audience watches a spectacle in rapt attention. And yet, they don't try to hawk their products and services. It's just, "Capital One stands with those who demand racial justice" -- not, "If you're sick of police violence, we're offering a new credit card tailor-made for justice seekers." Or, "Starbucks supports the call for greater police accountability" -- not, "When you're gearing up for the latest anti-cop protest, make sure you fill up your energy level with a trip to Starbucks first".

The same goes for electoral endorsements. "Starbucks endorses Hillary Clinton," not "After you've saved America from Trump's fascism, at the voting booth, treat yourself with a Pumpkin Spice Latte".

Political and social actions are still treated as sacred by the business patrons -- not in a supernatural way, but in the sense that they are inviolable regarding the hawking of wares by patrons, who get some simple recognition and reputational boost, and that's it.

That means the patrons of contemporary culture -- much of it distributed over an internet connection -- view culture as an utterly profane domain of society. It's not sacrosanct from hawking wares, so shove all the ads into it that you feel like. Interrupt it, hijack it, overlay it, whatever. That is a real change from the old patrons, who treated culture with a certain level of sacredness.

This change is not because today's culture is so coarse, while the old culture was so refined. There was plenty of aesthetically questionable stuff made centuries ago, and there has been great stuff made in the past 50 years.

In fact, the question of quality is a red herring anyway. If the cultural domain is sacred, it doesn't matter whether any particular work is high or low, good or bad -- it's all supposed to be protected. That is clear regarding free speech, censorship, etc. But it extends to the matter of letting the audience know who the patron was -- high works should not be hijacked, interrupted, or overlaid with reminders of patronage, and neither should medium or low works. A reminder that takes less than 5 seconds to process -- a name on the first page of a book, on the facade of a building, etc. -- is all it should take, before hours of uninterrupted cultural experience.

Nobody in the YouTube audience would mind if the website were re-named to reflect its patrons, as long as they held the cultural experience sacred from ware-hawking, as they already do for social and political activities. An unobtrusive piece of text, perhaps below the title, that you were watching music videos from within "The David Geffen Wing of YouTube". Anything but interruptions of the "content" itself.

I doubt whether any good can come of internet-mediated culture, though, because it is increasingly dominated by Silicon Valley, where the local elites have the blindest faith in techno-libertarianism. MySpace was from SoCal, like the Hollywood studios, while early-era Facebook was from back East -- not Silicon Valley. Libertarians, as free market fundamentalists, are the least likely to hold some domain of society sacrosanct from ware-hawking, and that includes culture.

Whenever the next political realignment happens, it will shift the dominant coalition from the right (dominant since Reagan) to the left. Since the left coalition is controlled by the informational sectors of society -- including IT and the media-entertainment sector -- the realignment may only worsen the existing trends. The tech oligarchs have enough power as it is, let alone if their political vehicle were the dominant rather than opposition party.

Still, realignments don't happen by magic. The former opposition has to peel off large chunks of the supporters of the old dominant party, by delivering on big promises. Why else would you switch allegiances? Restoring culture to sacred, protected status would go a long way for voters who currently hold the media / entertainment / social media corporations in such low regard. Imagine a culture with no more ads, just brief dedications to the patrons.

Of course the tech companies would resist that realignment, but then they are not the senior member in the opposition party -- that would be the financial elites, headed by the central bank. They can tell the tech companies that YouTube et al. would all be broke now if it weren't for the trillions printed in quantitative easing and other bailouts from the central bank. So, if the central bank is to keep them afloat, no ad revenues for them, and no angering the general population.

If YouTube refuses, the central bank removes them from the QE program, and re-directs those newly minted dollars to some old-money WASP family from the Upper East Side -- who are probably part of the financial, rather than IT, elites themselves -- and they agree to accept the funds in order to patronize YouTube or some similar site, with no interruptions but with their name commemorated somewhere prominent.

Regardless of how, the finance elites have to find some way to discipline their coalition members who are bitterly despised by most of the population, especially the entertainment and media sectors, which now includes most of the IT companies in Silicon Valley too. Otherwise they will be in the impotent opposition begging the dominant Republicans for permission to go on a piss break for the rest of the nation's future.

That will be just like the Republican elites from the New Deal era, principally the manufacturing elites from the greater Northeast, kicking out the Bible-thumping evangelicals from positions of power and influence. The evangelicals alienated too much of the society, and over the course of the Reagan revolution, they were steadily demoted to being an ignored fringe element, or even a group bitterly denounced by the party's own leaders. George H.W. Bush did not win the liberal yuppies of Montgomery County, MD, by running on a Moral Majority platform.

Now the country awaits the senior members of the Democrat coalition making their juniors stop angering and alienating the majority of the population.

June 21, 2020

Naturalization of teen sexuality, including age gaps, during restless phase of excitement cycle

I was planning to write this last year when I watched The Crush from 1993, but figured I'd wait until this year when we were solidly out of the vulnerable phase of the excitement cycle, and had left behind its Me Too hysteria. Since entering the restless warm-up phase this year, people will be able to get the point without filtering it through the lens of an ongoing sexual moral panic.

To cover each restless phase in detail, I'll break the survey up into a series -- this introductory post, and separate posts on each of the restless phases (late 2000s, early '90s, late '70s, and early '60s). In each of those, I'll give a brief background on the vulnerable phase that preceded it, to show how attitudes changed, as well as a brief look at the manic phase that followed, to show how the attitudes persisted afterward (before reversing in the next vulnerable phase).

First, a reminder on the phases of feminism that change in tandem with the excitement cycle. Feminists are not the only group to weigh in on debates over sexuality, but they always do, so they're a useful indicator of the zeitgeist. (On the Right, there's typically an evangelical Christian counterpart of each feminist phase.)

During the vulnerable phase, they're in a refractory state, so they externalize their feelings of "attention = pain" into an emphasis on victimhood and trauma. Once they're out of that refractory state, during the restless warm-up phase, they don't need to portray all men as predators and all sex as rape. In fact, now that it's time to come out of their shells, they will ditch the victimhood Olympics and get flirtatious again. When that coming-out process has been completed, by the time the manic phase begins, they will resonate even less with victimhood and passivity. Their energy levels will be soaring on a sustained spike, making them feel active and invincible -- and therefore, carefree rather than hysterical.

The topics of teenage sexuality, and age differences involving a teenage partner, clarify an otherwise nebulous debate about "sexual predation," and naturally come to the forefront of the discussion. The anti-horny side can make an easier case about one partner overpowering the unwilling other partner when the former is more mature (mentally and physically), while the latter is still developing.

That is the debate within liberal morality, which focuses on matters of harm and fairness. But there is a related debate within conservative morality about the wholesomeness of such relationships, or of teenage sexuality in general -- regardless of whether any meaningful "predation" is taking place. Conservative morality includes matters of purity and taboo, violations of which are disgusting and ought to be prohibited.

While the anti-horny side may be zealous in both cases, it's not true that "left-wing sex police are just as puritanical as the right-wing sex police". Anti-horny liberals will focus narrowly on matters of harm, power, and consent -- unlike anti-horny conservatives who will focus on a gut reflex of "that just ain't right, you're just not allowed to feel things for / date / have sex with that type of person".

Regardless, anti-horniness rises and falls according to the excitement cycle, even if it comes in distinct liberal and conservative flavors. So we won't focus on the justification given for or against, but only whether the zeitgeist is tolerant of or opposed to age differences.

And this survey will be descriptive rather than normative. I might occasionally wade into the debates myself, but having already experienced several cycles' worth of these debates, I've concluded that they don't affect anything other than the short-term feelings of individuals and perhaps the cultural portrayals in the mass media. It's not a substantive debate, but the expression of society's current mood, which goes through five-year phases that cycle every 15 years.

So I won't weigh in on whether some state, whose age of consent is 16, ought to raise it to 17 or lower it to 15. Or whether the age of consent should be structured -- e.g. 16 for a partner who's however-close in age, but 18 if the partner is however-older. These are fake debates that never go anywhere, and never change anything real. They are the most boring epiphenomenal reflection of the underlying zeitgeist. I'll stick to the more relatable and moving examples from popular attitudes and pop culture.

I chose the word "naturalization" deliberately for the title of this post. The restless-phase zeitgeist is not so much about an overall celebration of teenage sexuality, or of age-gap relationships. It's merely the elimination of the taboo quality that the discussion used to have during the trauma porn atmosphere of the vulnerable phase. Some may be in favor and others opposed, but it will be a mundane and naturalistic discussion, not a zealous moralistic panic.

But when the topic is naturalized, it will receive some naturalistic and approving portrayals in pop culture -- however tempered, qualified, and ambivalent these approvals may be. And that will be a pronounced change from the vulnerable phase culture, when the topic was taboo and portrayals were limited to caricatured trauma porn.

To close, here's a hint from a recent Red Scare podcast of how the discussion is shaping up at this early stage of the current restless phase. You can already sense the boredom, lack of resonance, and overall restlessness to move beyond the Me Too hysteria where all sex is predatory. I've been dropping hints from real-life experiences since the end of last year that teenagers are not only coming out of their shells, but specifically making blatant passes at older guys, to show how the sentiment is changing from their end.

But it still is early in the current restless phase, so there isn't much to survey. The next post will focus on the late 2000s -- not only the horniest period in recent history overall, but also for age-gap relationships (both older-man and older-woman).

In the meantime, a teasing reminder of the complexities of such interactions. It's not as though a 16-year-old doesn't feel urges for a hot older guy, or isn't willing to deploy whatever weapons she has available to her -- both physical and emotional -- in order to gain his trust, wear him down mentally, and gradually increase the level of touch, until -- she hopes -- he gives in.

Adrian Forrester from The Crush. Who's groomin' who?


June 20, 2020

New York no longer cool or interesting enough to star in pop culture

Watching Gossip Girl for the first time, to re-familiarize myself with the late 2000s, I'm struck by how New York-centric it is. Every reference in the show is so hyper-trendy, localized entirely within the late 2000s, that it suggests the New York-iness is also something we haven't seen in a long time.

Below is my own brief history with the place, then a survey of pop culture specifically about the city, to see how its appeal has faded rapidly over the past 10 years, possibly for good.

I started going on group day-trips by car to New York, sometimes crashing at local friends' places, back in high school (late '90s). Our common activity was seeing They Might Be Giants, but we shared all sorts of other interests, musical and otherwise. Naturally our pilgrimage destination was St. Mark's -- back before it turned into Buddhist cafes, vegan cupcake bakeries, and other painfully uncool yuppie transplant crap. The old music stores had stuff you couldn't find anywhere else in the country -- you had to visit there during a trip, or miss out for good. Now, it's just an overpriced version of places you can visit in any old suburban strip center (catering to foodie strivers).

During my anti-globalization and anti-war days in college, we made the occasional trip there for marches and protests, but never long enough to see the city. Senior year ('02 and '03), I started taking solo day trips about every month or so. I was scoring "edgy" finds at Tokio 7 when today's art hoe transplants were still doing homework with their TI-83 calculator. And that still left plenty of time to people-watch on Broadway in SoHo, savor a Reuben from Katz's Deli, track down rare albums on St. Mark's, and get lost in the Met for hours, before barely making it in time for the last express bus of the night back to college.

After graduating, I made two overnight trips, one in 2004 to hand out my resume to ESL centers, and another in the summer of '06 just for fun. Same haunts as always, though now committing my once-in-a-lifetime episode of shoplifting (from an unnamed store). It was more for the thrill of it than to avoid paying -- a mini-wallet by Costume National that reminded me so much of a close girl friend from college, who I sent it to. (She loved it, and sent some cool stuff of her own, while we were writing each other letters during those first few years after college.)

And after that... nothing. It's been nearly 15 years since I've set foot in New York, and honestly I've never even thought about it. I actually did consider moving there after undergrad, too, before setting my sights on Barcelona. But I'm sure I would've also been perfectly happy spending the late 2000s in Da City. It still held a powerful romantic appeal for Gen X-ers and the budding Millennials. And even though it's set among Upper East Side socialites, Gossip Girl captures the fascination with New York life at that time.

* * *

So was I alone in letting that city fade out of awareness, after having been so keen on it earlier? Looks like not. Although New York occupied prime real estate in the pop culture of the '90s and 2000s, it all but vanished over the course of the last decade. (And of course, it was popular before the '90s, but that portrayal was ambivalent and marred by soaring crime rates.)

For reference, see these lists of TV shows set in New York City, and songs about the city.

During the 2010s, there was not one TV show set in the broader NY metro area, despite many before (the Dick Van Dyke Show, Bewitched, Who's the Boss?, Everybody Loves Raymond, the Sopranos, Gilmore Girls, etc.). There were only a handful of hit shows that drew from, and tried to contribute back to, New York's iconic status for its setting -- 2 Broke Girls, Girls, and Broad City. And even those mostly belong to the first half of the decade. Brooklyn Nine-Nine is the only popular one still running (begun in 2013), and it's mostly interior shots filmed in LA, thus not so New York-centric after all. Blue Bloods is set in Staten Island, not the sought-after real estate in Manhattan or Brooklyn.

Tellingly, the only hit show to have begun in the past 5 years is the Marvelous Mrs. Maisel -- set back in the 1950s, when the city was halfway affordable during the New Deal era. It stars a Millennial woman, catering to an audience of Millennials who are consciously pining for a bygone time when they could've afforded to live there. That is also a central theme of one of the others, right there in the title -- being broke in New York.

Also, the 2010s shows are uniformly female-oriented and leisure class-oriented. Savvy men have given up on the city by now, while stubborn women (and their boylike gay BFFs) are still trying to keep it hip, relevant, and happening (to little effect, evidently). As recently as the 2000s, there were themes of career optimism and a life that guys could enjoy too -- the Apprentice, How I Met Your Mother, 30 Rock, Mad Men, Castle, and of course Gossip Girl. And the various Law & Order series offered something for responsible, fatherly men to vicariously enjoy, not just media deals and coke-fueled parties.

The realm of music is no different. To filter the list of songs by some measure of broad resonance, I checked out only those with their own Wikipedia page. There are loads of them as recently as the 2000s, and going back even into the '70s when crime rates were soaring. One of them, "Nolita Fairytale" from 2007, was prominently featured on a first-season episode of Gossip Girl, because audiences just couldn't get enough references to New York at that time. But all of a sudden in the 2010s, they all but disappear. There are so few that we can list all 6 of them below:

"Marry the Night" by Lady Gaga (2011)

"212" by Azealia Banks (2011)

"Ho Hey" by the Lumineers (2012)

"Don't Leave Me" by Regina Spektor (2012)

"Welcome to New York" by Taylor Swift (2014)

"New York City" by Kylie Minogue (2019)

Again, notice not only how few there are over the course of 10 years, but that they are almost all from the first half of the decade. Only the Kylie Minogue song is from the second half, and she was 50 years old when it came out. Among the younger generations, the city has lost its resonance. Taylor Swift actually included a song about Cornelia Street on her album from last year, but it was not a single and did not become a cult classic either. It's part of the "Millennial leisure-class woman in New York" genre that had already died in the second half of the 2010s, along with the Girls TV show.

So far there are no entries in either the TV or song lists for this decade, and that was before Da City became the global epicenter of the coronavirus pandemic, further degrading whatever was left of its you-just-gotta-be-there appeal. Now it's the city that never stops coughing to death.

* * *

What accounts for the overnight extinguishing of New York dreams? It's not enough to blame it on the Great Recession, since that only lasted a little while, and the central bank then pumped trillions of dollars back into the finance sector, and its media and tech beneficiaries. New media outlets were sprouting up one after the other during the 2010s in New York. None of those people who were fascinated by New York in the 2000s were working-class stiffs; they were all professional-class strivers, who barely felt the brunt of the Great Recession. Downwardly mobile, perhaps, but not in dire straits.

Plus, a deep recession in the informational sectors of the economy should also have killed off the romance for New York during the 2000s, after the Dot-Com bubble burst. Not to mention the real fear of terrorism in the wake of 9/11. And yet neither of those major recent catastrophes fazed young people at all about moving to the city.

Rather, the difference is generational turnover. Recall this foundational post on the generational structure of status contests. Boomers competed over material wealth, and hoarded it all for themselves, locking out younger generations from the wealth contest. That left Gen X-ers with competing over lifestyle / hip activity points, which does require money, but not nearly as much as owning desirable real estate, multiple cars / boats, etc. Millennials have less wealth still, so they can't even compete over lifestyles. They compete over the construction and presentation of personas, struggling to max out their stats on social media platforms. All they need is a smartphone and wi-fi.

Facing these pressures on their status-striving, why would Millennials want to join other transplants in New York? The primary distinction of that city is its career opportunities, i.e. as a portal into the wealth contest. But Millennials will never be able to compete over wealth or income in any city, let alone where the cost-of-living is so high. So scratch that motivation.

How about lifestyles? There certainly are lifestyle activities that you can only do in New York -- or, only accrue major points by doing them in New York as opposed to some other city or suburb. Which cafe you lounge around at, which bar you hit up after work, etc. Still, Millennials don't have enough wealth to blow on these activities regularly, so they won't be competing in these contests either. Scratch that motivation.

That only leaves persona points. Sure, you can try to brand your persona as "New Yorker," "Brooklynite," "Meatpacking District party girl," etc. But again, where do you get the money to pay the rent to live in or frequent those places? You can't leverage your social media stats into cold hard cash with which to pay the rent for living in that neighborhood, or getting into that club. You can't pay your landlord or the club owner in likes, retweets, or followers.

A few might get a sponsor deal, but because that is informational, it scales up infinitely for free. To reach millions of eyeballs, the sponsor will pay one influencer a hefty amount, rather than pay a million influencers a moderate sum apiece. Influencing is not labor-intensive -- they don't go knocking on doors or standing around in stores. Ditto for crowd-funded endeavors -- one podcast will suck up thousands, tens of thousands, even millions of small donors, rather than a bunch of podcasts receiving moderate-sized audiences and donations apiece.

There is simply no way for Millennials, or late Gen X-ers for that matter, to live in New York while pursuing the all-important project of our neoliberal era -- striving for higher status. They can't compete for wealth, for lifestyles, or even for persona points, while paying the cost of living-and-striving.

They might as well take a stab at shooting a viral selfie in a 2nd-tier neighborhood in a 3rd-tier city, where they won't be homeless and starving. Meticulously compose ironic tweets for fellow lib-arts failsons -- in a flyover suburban subdivision. Nobody has to know. As long as your social media stats keep increasing, who cares what geographical background it's taking place in? For, it is not taking place in physical reality at all -- your status contests are entirely online, where zip code prestige is immaterial.

* * *

One final thought: just because New York (and similar cities) has lost its romantic appeal does not mean it will vanish altogether from the culture. It will simply become a niche elite obsession, by and for those who are stubbornly clinging to it, in order to justify their wasted time, money, and effort. Still, it will not be romantic -- it's too late for that charade to convince even the bitter clingers. It will turn toward the abject and pathetic existence of the wannabes, who will have no other option than prostitution of one kind or another, to avoid homelessness and starvation.

Oh, how dramatic and full of meaning to be coughing to death, after mass-texting the latest series of pussy pics to my Only Fans paypigs! I'll never leave the most dramatic and meaningful city in the world! (Unless, of course, some STD-riddled oil sheikh wants to host me on his yacht in Dubai...)

If you doubt this endpoint, just look at the last time we were in a status-striving Gilded Age -- did elite niche culture give up on downwardly mobile, decadent urban strivers? And focus on whom, exactly -- normies with lower persona points than the wannabes? Yeah right! No, it was glorification of consumptive degenerates who were committed to racking up persona points, long after it had become impossible to compete over wealth or leisure-class lifestyles. This time will be no different from the last fin-de-siecle zeitgeist.

You just won't know about it outside of the culture consumed by the 1%, unlike the pre-crisis culture where the popular fascination with top-tier cities could not have been more palpable.



June 17, 2020

Teenagers no longer phone addicts in public places: short-term pendulum swing, or long-term shift out of cocooning?

It didn't hit me until after I'd gotten back home, but during my walk around the park today I don't think I saw anyone staring down at a phone. Must've been one or two, but not the ubiquitous pattern that it used to be. The only place with lots of phones out was a baseball game where the parents were taking pictures or video of the game, not browsing an endless stream of "content" to distract themselves from what was going on around them.

What was really striking was the lack of phones among young people. Boomers not endlessly scrolling some retarded feed -- OK, the technology came too late for them to adopt it. But the teenage Zoomers who are internet / digital / social media / smartphone natives? Highly unexpected. And it wasn't just at the park -- it was the same with every small group of them walking down the sidewalks along the streets I drove on. Or hanging out in front of an ice cream parlor. Or really anywhere else.

They were talking to each other while sustaining eye contact (not while staring down at separate screens), observing and commenting on the goings-on around them, making eye contact with the random hot guy walking by, and all those other things that "kids used to do outside before smartphones and the internet".

How long ago was it that everyone was complaining about young people staring down at screens all the time? Well I did plenty of that, so I searched the blog and found out that it was around 2013 to 2015, which means it had already gotten started before then for me to have griped about it, maybe back to 2010. The worst I recall was seeing people with their laptops out at a park, and signs posted all around cheerfully letting people know that there was free wi-fi -- at a park.

Nor could you watch a movie with friends or family, without most of them staring down at a screen and often nodding off out of boredom. Way to strengthen social bonds through shared activities!

Even before smartphones, I remember young people constantly looking at and using their flip phones in the late 2000s -- at bars and dance clubs, coffee shops, the college dining hall, really anywhere that was supposed to be social, not to mention in other places like class.

The late 2010s are blurrier in my recollection, partly because everyone was so hunkered down during the vulnerable phase of the excitement cycle that there were few observations of the public to be made. I don't remember having this vivid of an impression, though, that phone addiction -- at least in public or social places -- had started to decline.

So what accounts for this shift?

One possibility is the phases of the 15-year excitement cycle. You'd think it should be inversely correlated with energy levels -- higher energy and invincibility, less need for retreating into the cyber-world -- but if anything it is positively correlated with them. Phone addiction was pretty bad during the last restless phase of the late 2000s, and was rampant during the manic phase of the early 2010s, before becoming less bad by the late 2010s. So maybe the phone addicts were directing their high energy levels into texting, scrolling, etc., and are doing so less now because their energy levels are a lot lower than they were 10 years ago.

I don't buy this one, though, because phone addiction should be dropping across the board since energy levels have plummeted across the board. And yet phone addiction does seem to still be a very real problem, just not with the people I'm observing -- crucially, the age groups. It seems like people in their late 20s and 30s are still heavily addicted to scrolling through their various retarded feeds and timelines, using it to supplement their SSRIs without having to fork over a co-payment to Big Pharma (only forking over their life-data to Big Tech).

That suggests another possibility -- generational turnover. Back in the late 2000s, it was Millennials who were constantly looking down at their phones as teenagers, and it was still them in the 2010s when they were aging into their 20s, and they're still the worst even as they approach or have already passed the big 3-0. Gen X-ers, especially the later ones, have had cell phones and smartphones throughout that entire time, yet they have never been as addicted to them as Millennials.

But under the view that technology or any kind of progress only ever moves in one direction, the Zoomers ought to be worse than the Millennials. I just don't see that, particularly among those born later into the 2000s. (My strict definition of Gen Z would begin at 2005 births, and Millennials at 1985.) If anything, Gen Z seems to resemble Gen X in a kind of swinging pendulum pattern. One generation is attention-craving and over-saturates that niche, so the next generation becomes attention-eschewing; then they over-saturate the new niche, and the pendulum swings back to attention-craving, and so on.

Side note: it's also uncanny to see how much the Zoomers are dressing like it's the early '90s Gen X heyday all over again. High-waisted pants, ripped jeans, baggy shirts, light / desaturated / pastel colors, center part in the hair, etc. Not at all what the Millennials looked like as youngsters 10-15 years ago -- low-rise, dark-blue, non-ripped skinny jeans, bold color on top, heavy side part in the hair (whether preppy guys or scene girls).

This seems plausible given the Zoomers' overall social media usage. They're not engaged with Facebook, Instagram is for older lifestyle hustlers, and nobody born after 2004 will ever GAF about the tard chat du jour on Twitter. Those are the platforms where attention-craving Millennials go for likes and followers.

TikTok and certain YouTube channels (like React) are more their thing, and that is just passive entertainment rather than putting your own thoughts, feelings, images, and videos out there. Maybe entering the chat of a video game streamer, or playing video games themselves. But not so much smartphone-based social media platforms.

That wouldn't mean they are unplugging from the internet, or even using it in non-parasocial ways. Maybe they'll just retreat into playing video games with strangers, which would be parasocial and very-online. But it would not leave the house with them, because screaming about how laggy your connection is isn't something anyone would do in public. And phones aren't the best platform for video games. So when they did go out in public, they'd be less tethered to their phones or the internet, although they'd bury themselves in online video games once they got back home.

The final possibility is that this is a broad signal of leaving their cocoons, and that the cocooning era that began roughly in 1990 is about to reverse and go back to being outgoing, like it was from roughly 1960 to 1990. The previous cocooning period was the Midcentury, from roughly 1930 to 1960, and the outgoing phase before that ran from roughly 1890 or 1900 to 1930.

This all parallels the trends in crime rates, as I detailed at length on this blog in the early 2010s (along with extensive discussion of the cultural output of outgoing vs. cocooning periods). Briefly, it's akin to a predator-prey cycle in ecology. When people are more outgoing and trusting, they present more opportunities for criminals to prey on, sending up the crime rate. When the crime rate gets high enough, it erodes trust and people begin cocooning to avoid omnipresent criminals, depriving the criminals of opportunities and sending the crime rate back down. When the crime rate reaches such lows, people see so few predators around that trust and outgoing-ness increase. That then presents more prey for criminals, and the crime rate rises again, completing the cycle.
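For anyone who wants the ecology analogy spelled out, the standard textbook predator-prey model (Lotka-Volterra) has the same feedback structure. This is only an illustration of the cyclical logic -- not a model fitted to crime data -- with x standing in for trusting, outgoing people and y for active criminals:

\[
\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y
\]

Prey multiply when predators are scarce (the \( \alpha x \) term) and get picked off in proportion to how often the two groups encounter each other (\( \beta x y \)); predators thrive on those same encounters (\( \delta x y \)) and die back when prey are scarce (\( \gamma y \)). The solutions don't settle into an equilibrium but keep cycling, which is the qualitative point behind the roughly 30-year waves of crime and cocooning.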

Since these phases last around 30 years in each direction, we're due for another reversal (2020 is 30 years after the last peak of crime rates circa 1990).

If this is what's going on, why are Zoomers leading the way, rather than every generation taking part at the same time and to the same degree? Because of formative experiences that differ over the generations. Late Boomers and all Gen X-ers grew up during the last crime wave, so they've imprinted on an environment marked by danger and predation. They have been helicopter parents, trying to prevent their kids from suffering through the same widespread crime that they themselves lived through.

But Millennials have only grown up during falling-crime times, so they are less prone to helicopter parenting. And Zoomers have grown up in even safer times than that -- they don't even remember the one-time, externally caused spectacle of 9/11, as Millennials do. So they won't feel, on a visceral level, what the point is of cocooning away in the home all day long, or why you should put your guard up in public when you do go out. The dangers of going out in public and letting your guard down had been receding for a decade or more before they were even born.

We couldn't see this shift before because the critical generation was only 5 or 10 years old, an age when their public appearances are controlled by their parents. Now that they're 15-20, they're more independent: they can go outside if they want, and ditch their smartphones in public if they want.

It's still too early to tell one way or another, but there does seem to be a generational change at least. The only question -- also too early to tell, by a few years -- is whether this is merely a pendulum swing that will revert to the Millennial-esque mode in the generation after the Zoomers, or the start of a 30-year period of outgoing and guard-down social behavior, paired with steadily rising crime rates.

June 15, 2020

Rhythm & exercise video games surge during restless and manic phases, crash during vulnerable phase, of excitement cycle

I haven't looked at video games within the framework of the 15-year cultural excitement cycle, since the last time I regularly played contemporary games was Super Nintendo, Sega Genesis, and Street Fighter / Mortal Kombat in the arcades. I do try to keep up somewhat with what's going on, out of curiosity, but even there I stopped paying attention around 2015. So, I generally don't have enough fine-grained knowledge to write about how the excitement cycle is reflected in that domain of pop culture.

There is one exception, though -- video games based on rhythm and exercising. Those are straightforward adaptations of real-life physical activities, so their popularity should mirror the popularity of the real activities.

Specifically, they should be least popular during a vulnerable phase, when people's energy levels are in a refractory state. They should catch on during the following restless warm-up phase, when energy levels are back to baseline and people want to do simple exercises to get back into the swing of things, especially in a social setting where they can mix it up with others rather than continue hiding under a pile of blankets. And they should last into the following manic phase, even if they won't be quite so popular there, since people have already gotten used to the simple-step exercises and now want something more physically demanding, if anything.
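(As a purely illustrative aside: the phase timing used throughout this post can be stated compactly, assuming 5-year phases anchored on the restless warm-up that began in 2005 -- restless 2005-'09, manic 2010-'14, vulnerable 2015-'19, restless again from 2020. The little helper below is hypothetical, just that periodization restated as code.)

    # Hypothetical helper: map a calendar year to its phase of the 15-year
    # excitement cycle, anchored on 2005 as the start of a restless warm-up phase.
    PHASES = ["restless warm-up", "manic", "vulnerable"]

    def excitement_phase(year):
        offset = (year - 2005) % 15      # position within the 15-year cycle
        return PHASES[offset // 5]       # each phase lasts 5 years

    # e.g. excitement_phase(1997) -> "manic", excitement_phase(2003) -> "vulnerable",
    #      excitement_phase(2008) -> "restless warm-up", excitement_phase(2017) -> "vulnerable"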

What do you know, that's exactly the pattern of popularity across several cycles. And unlike other genres, these two were both highly popular with females as well as males, and were often played in social groups, even in public places.

* * *

The rhythm game genre first hit it big during the manic phase of the late '90s with PaRappa the Rapper and more importantly Dance Dance Revolution, which involves moving your feet onto various positions of a pressure-sensitive dance mat according to the timed sequence of moves shown on the screen, as music blares in the background.

During the vulnerable phase of the early 2000s, the genre nearly vanished. Only two titles are even remotely familiar to a non-gamer like me -- Donkey Konga and Samba de Amigo, which use special percussion controllers. Neither was a mega-hit like Dance Dance Revolution, though. The DDR series itself saw only a couple releases during this five-year phase. Everyone was in too emo of a mood, and their energy levels were negative.

Then all of a sudden in 2005, when the restless warm-up phase began, a mass phenomenon of rhythm games took over the video game world, lasting through the final year of the phase in 2009. Most notable were the Guitar Hero and Rock Band series, which used all manner of special controllers shaped and played like real guitars, drums, and so on. The games test the player's rhythm by having them press various buttons, use the whammy bar, etc., according to the timed sequence on the screen, as the song plays.

Dance games also saw a revival with the Just Dance series in 2009, not to mention a flood of releases from the DDR series.

These instrumental games were so popular that you could find groups of people playing them together in public nightlife places like bars. They weren't just for teenagers playing with a group of friends inside their home.

As the manic phase began in 2010, the instrumental rhythm games were still popular, although less so than during their restless phase peak. The dance games became more popular, as the Dance Central series joined Just Dance in 2010 (plus more DDR games). Perhaps people had become comfortable with simpler rhythmic activities only involving the hands, like Guitar Hero, and wanted to move onto whole-body rhythms now.

However, when energy levels began crashing in 2015 with the arrival of the vulnerable phase, the rhythm genre nearly vanished again, as in the early 2000s. In 2015, the last major installments in both the Rock Band and Guitar Hero series were flops, and they haven't bothered with either series since.

Similarly, Just Dance stopped receiving numbered titles after 4 in 2012, and stopped receiving new system titles after the one for Wii U in 2014. The Nintendo Switch system came out in 2017, but did not get a dedicated "Just Dance Switch". Now they're just yearly updates with more topical songs. The Dance Central series did not last into the vulnerable phase at all -- its four games were released from 2010-'14, entirely within the manic phase. (In 2019, they did make another for a virtual reality system that no one owns.) And the DDR series only saw a couple releases, just like during the early 2000s.

Now that the restless warm-up phase has begun again in 2020, this genre is ripe for revival. TikTok has already shown the (re-)emergence of dance fever, and the rhythm genre hit its peak in mobile games during the last restless phase (the Tap Tap series of the late 2000s). As for home consoles, I have no idea whether kids born after 2004 (and who have no experience with the last wave of instrumental games) would go for the physical instrument approach. But why wouldn't they? And the older Millennials could fuel their late 2000s nostalgia by having mock instruments to play along with -- akin to the nostalgic fitness craze of "adult kickball" in the late 2000s among late 20-somethings.

* * *

Exercise games show the same pattern over time. The Power Pad for the original Nintendo (late '80s vulnerable phase), and the only game anyone played with it, World Class Track Meet, are not exercise games because they don't involve sustained activity. It was not a hit in any case, but I did know one friend who had it. It was basically 10 seconds of furious running in place on the pressure-sensitive mat, then the event was over; repeat a few more times, then turn the game off. Not much exercise. And there was no rhythm to the motions, so it was not a rhythm game either.

Dance is a kind of exercise, so Dance Dance Revolution also made the exercise genre popular during the late '90s manic phase.

Then nothing during the early 2000s vulnerable phase, when energy levels plummeted into a refractory state.

It wasn't until 2005 -- when else? -- that the genre came back to life, with EyeToy: Kinetic. The motion-sensitive camera used for the game had already been released in 2003, so why didn't they do the obvious and make a fitness game for it at launch? Because people didn't feel like exercising in '03 and '04, being mired in a refractory state. Suddenly in 2005, their energy levels were back to baseline, and they felt like moving around more.

By far the most popular examples of the genre, though, were from the Nintendo Wii, whose pack-in game, Wii Sports, had players using motion-sensitive controllers to simulate tennis and other sports. Wii Fit from 2007 included its own balance board to track the player's center of gravity, and became one of the best-selling games not bundled with a console. Its 2009 sequel, Wii Fit Plus, was also a mega-hit.

The genre lasted through the early 2010s manic phase, albeit to a lesser extent than before (Wii Fit U; Nike+ Kinect Training; Zombies, Run!). People had become used to simple exercises, and felt successfully awakened from their early 2000s hibernation.

By the time the next vulnerable phase struck from 2015-'19, the whole motion-sensitive mode of play was over. Energy levels crashed into a refractory state all over again, and people went back into hibernation for the first time in 15 years.

But as with the rhythm games, now that 2020 has seen the beginning of another restless warm-up phase, it's the perfect time for exercise games to make a comeback. There's novelty value for the younger kids, and nostalgia value for the older Millennials, in playing an exercise game with some device that is motion- or pressure-sensitive.

And it would fit in well with the quarantine atmosphere, where people are afraid of getting bed sores from being holed up indoors all day long for months on end. That would re-create the social setting of the original Wii games, where parents and children would play them at the same time in the home.

Who knows exactly what form the revivals of rhythm and exercise games will take, but the demand for them will be shooting through the roof now that people are restless again.

June 13, 2020

Even Disney darlings did dance-rock in the late 2000s

As a sign of just how extensive the late 2000s dance-rock phenomenon was, even singers who had first broken into fame as Disney actresses were making it. The style was not only coming from indie bands with art-school pretensions. Nor were the Disney stars merely jumping on the bandwagon -- they began and ended in parallel, as two manifestations of a single underlying trend, mixing rock with dance.

During the restless warm-up phase of the excitement cycle, it doesn't matter whether you're a Disney-watching teen or a dance club regular. Everybody feels like coming out of their vulnerable-phase cocoon and mixing it up with other people again. Having slumbered for so long, they need to perform simple exercises first, and a rhythm with an accented offbeat is perfect for that, marking the winding-up motion on the offbeat, before the main beat marks the delivery-of-force motion.

Most of the Disney-derived acts had a rock background, whether power pop, soft rock, country crossover, or pop punk. There was no rap, minimal R&B, and only a bit of club dance music. As such, they were not part of the backlash against UNH-tss techno music of the '90s, which was confined to the club music scene ("electropop," which replaced drums with synths to set the rhythm). They had no problem using a disco rhythm with accented offbeats, since they were using rock instrumentation that could not have been confused for UNH-tss drum machines.

To begin the survey, it's worth noting that one of the bands discussed before had a strong Disney connection. The two founders of Metro Station ("Shake It") had younger siblings who were acting in Hannah Montana at the time, and they themselves were only in their late teens. So while not Disney stars in their own right, they were still very close to that domain of cultural production.

First the songs, then some remarks. As before, not all have a percussion instrument dedicated solely to accenting the offbeat, as in a true disco rhythm. But most do, at least in parts -- and that was far more than could be heard in the electropop club music of the period.

* * *

"Beat of My Heart" by Hilary Duff (2005):



"Outta My Head (Ay Ya Ya)" by Ashlee Simpson (2007):



"Potential Breakup Song" by Aly & AJ (2007):



"See You Again (Remix)" by Miley Cyrus (2008):



"Falling Down" by Selena Gomez & the Scene (2009):



"Remember December" by Demi Lovato (2009):



* * *

The purest example is the earliest, "Beat of My Heart," with its irresistibly bouncy hi-hat offbeat. When the cymbals really ring out continuously for that long, rather than just sounding a quick discrete tap, it's as though they're carrying your limb along for a ride. Your leg feels picked up when the hi-hat begins, then guided through its full delivery motion during the rest of the offbeat interval, until it lands on the main beat. With quick taps, the hi-hat is more like a metronome that feels apart from your body, although helping it keep time. Combined with the shooting-star whole notes from the synthesizer in the first verse, this way of playing the hi-hat gives the song a soaring feeling. But it's not uniformly bubblegummy, turning dissonant during the bridges, which sound like they're from a mature brooding song by Franz Ferdinand. It's amazing that this came out in 2005, before the dance-rock trend had become firmly established.

Ashlee Simpson was not technically a Disney kid, but still was famous for acting on a teen-audience TV show. "Outta My Head" does not accent every offbeat, but it does for those before the backbeat on 2 and 4. And it's not a hi-hat followed by a snare, as usual, but a strange pair of crashing sounds -- like a giant maraca with chains inside it. As with standard percussion, though, the offbeat is higher-pitched and quieter, and the main beat is lower-pitched and louder. Familiar pattern, yet exotic timbre. Way better than anything else she did.

Electropop is more the style of "Potential Breakup Song," with its heavy reliance on octave-alternating synth notes to set the basic rhythm. But there is a hi-hat playing the main and offbeats for most of the verse. The first part of the chorus does not have offbeat percussion ("You're not living..."), but the castanets do this in the second part ("This is the..."). There's a double-hit before 2, and a single hit before 4, setting up the backbeat only, rather than all main beats. Another interesting choice of timbres, mixing castanets with a mainly electro/synth sound. A different song from the same album, "Like Whoa", does use a hi-hat to accent the offbeat, and the instrumentation is mainly rock, but it's too low-key to be a dance club song -- more of a foot-tapping rock song.

In "See You Again," the verses after the first have a hi-hat playing both main and offbeats, while the chorus uses shakers to accent the offbeat. The original recording from 2007 did not have as much dance-y percussion, which seems to be the main change made for this remix. During bouts of dance fever, the goal is to make it more danceable, not less.

In the verses of "Falling Down," the offbeat is mainly accented by a rhythm guitar, although there is a cowbell on the offbeat before 3 (and also the one before 4, for the second verse only, for some reason). Tambourines play during the main and offbeats of the chorus. The way that the percussion builds in danceability from verse to chorus, they should have put a clear hi-hat offbeat at the end of the chorus to make it climax ("When you're falling down, you're falling down").

"Remember December" is hard rock during the chorus, similar to Paramore of the same time. But the verses tap the hi-hat on the main and offbeats, inviting you to move your feet around, unlike other songs from the heyday of scene music. Also worth noting the surf guitar riff toward the end, recalling a signature sound of an earlier restless phase, the early '60s.

Honorable mention to "Lovesick" by Emily Osment (2010), which does have a hi-hat to accent the offbeat during the chorus, but is not rock enough to fall under the dance-rock trend. It's more pure club music, just being an anomaly for having a disco beat when the rest of its peers avoided it. And there's "Make Me Wanna Die" by the Pretty Reckless (2009), fronted by Gossip Girl actress Taylor Momsen. It flirts with an accented offbeat, but does not get groovy enough to mix dance in with the rock. A real missed opportunity -- they could've made another intriguing fusion of dance with post-grunge, a la "Paralyzer" by Finger Eleven.

June 12, 2020

Summer night anticipation music

Summer 2008, somewhere in the Mountain West.

Cruising around after dinner with the A/C off, caressed by the crepuscular warmth that's wafting through the open windows.

I can't even tell what day it is -- I'm out getting warmed up before nightclubbing later on, but I'm going dancing four nights a week, so it could be any of those. Every evening feels like the weekend during summertime anyway.

The university is deserted, and I've got the four-lane road out of campus all to myself. Farther away, signs of life start to emerge. Housemates shooting the breeze on the porch of a bungalow. A group of teenagers laughing while dangling their legs over the bleachers, across from their film-famous high school. Lamps being turned on inside the Storybook style houses.

I don't want to over-stimulate my social sense before showing up to the dance club, so I avoid the main drag nearby and settle into a small strip center instead. The sparse crowd at Barnes & Noble was under-stimulating, so I walk over to the only other place that isn't a restaurant, a used video game store.

Not a soul there either -- except for a curvaceous girl behind the counter, who locks on with her large green eyes, and goes on to babble about video games through a coquettish smile. At one point she makes a conspicuous aside about how some game earlier in the decade came out "when I was just in middle school," letting me know she's still under 20, and studying my face to see if I'm intimidated that she's nearly 10 years younger than I am. (Of course I'm not.)

Having worked myself up enough for tonight's opening course of socializing, I drive back playing the same CD as I had on the way over -- Lust Lust Lust, the new album by the Raveonettes. One song in particular -- "You Want the Candy" -- makes me play it on a loop, amplifying the mood of anticipating the wild night that is still to come.

The melody is just stimulating enough without going over the top, while the droning layers of noise give you that floating-through-space feeling of lacking responsibility. The surf guitar timbre and the high-pitched pining vocals transport you back to southern California during the carefree early '60s, on a night overflowing with possibilities.

Overtly about drug use, it is equally enabling for those whose addictions are driving around with no destination in mind, flirting with whoever strikes their fancy, and moving their bodies to the rhythm with the rest of a crowded dance club.



June 11, 2020

Girls born in early '90s are the most social-seeking and touchy-feely (pop culture examples across all phases of excitement cycle)

I've mentioned before that girls born in the first half of the 1990s are the wild child type. Their first impression of the world at birth, and their secondary social birth around age 15, were both shaped by the restless warm-up phase of the 15-year cultural excitement cycle. The zeitgeist is freewheeling, anything-goes, come out of your shells, question everything, hold nothing taboo. It's not surprising that it produces the wild child.

Eagerness, instigating others to join you in mischief, enabling their daredevil behavior -- all traits that are adapted to prodding people out of their shells from the earlier vulnerable phase, which is the primary task of the restless phase. They've been slumbering for so long that they need to be noisily and playfully shaken awake and even pulled out of bed, to join everyone else in the day's adventures.

I've been going over the late 2000s culture a lot, and something else struck me about the girls who turned 15 then -- something that has stayed with them ever since, in a true generational fashion (not just a phase they went through). Their attachment style is much more on the anxious or insecure side than the late '80s and late '90s cohorts, on either side of them.

They are easily infatuated, fall in love quickly and hard, and have trouble getting over someone. They're hyperactive, attention-seeking, love-bombing, and at the extreme, needy and clingy. They're very touchy-feely, naturally. They're liable to go over the top in femininity, girliness, and cutesiness, both to initially attract someone's attention, and to keep them from leaving once they've become close. When the social mood tends toward isolation, they will feel that as a stinging rejection of their craving for connection, rather than a welcome reprieve into coziness. They find little value in "me time".

It's not about fear of abandonment -- that puts too much on parent-child relationships, which don't have much formative effect. It's about social dynamics among your peer group, especially when you're being born as a clique-joining creature during adolescence.

When this process is taking place during an anything-goes atmosphere, it may make teenagers worry about their relationships not lasting through the topsy-turvy turbulence. Or perhaps it's the sense of urgency that marks the zeitgeist, once the vulnerable phase is over, when everyone is supposed to come out of their refractory state and get back to connecting with each other again. Those are the formative forces when you're turning 15 during the restless phase of the excitement cycle.

Before looking into the broader and enduring consequences of such a formative experience, let's look at some typical examples from pop culture. There are two examples each from the three phases of the excitement cycle -- the restless phase of the late 2000s, when they came into their own as late teenagers, the manic phase of the early 2010s when they were in their early 20s, and the vulnerable phase of the late 2010s when they were in their late 20s.

These enduring traits show that they are a true generational stamp, not just some phase they went through. They also show that this cohort is more connection-seeking than the ones on either side of it, since they're the most likely to still be crushing on others and looking for contact and validation even during the vulnerable phase. Most other people then are in a refractory state and would only feel social contact as painful over-stimulation.

Boxxy (2008-'09; born '92). A half-ironic persona of Catie Wayne that went viral on YouTube and various forums.



Miley Cyrus, "7 Things" (2008; born '92). The most anxiously-attached wild-child of that whole early '90s Disney cohort (Selena Gomez, Demi Lovato, et al.).



Overly Attached Girlfriend (2012; born '91). An ironic persona of Laina Morris that went viral on YouTube and became a widespread meme format.



Miley Cyrus, "Wrecking Ball" (2013). I include a second example of hers to show it wasn't just a fleeting phase she was going through. But I could've included something by Charli XCX (born '92), like "Lock You Up" from the same year.



Ava Max, "Sweet but Psycho" (2018; born '94). Such a welcome burst of clingy stalker feminine chaos, at a time when the overall zeitgeist was so avoidant, isolating, and numbing.



Tessa Violet, "Crush" (2018; born '90). Although half-disguised by ironic self-awareness, the pure sincerity of the longing-and-pining vibes cannot be concealed.



Aside from the obvious caricature of the Overly Attached Girlfriend, none of these feels creepy or off-putting. They're endearing and charming -- just like the girls intended. That includes the Ava Max persona -- that irresistible, heady, intoxicating spell of psycho-pussy. So much attention and energy being directed all-out onto you. Maybe it's the ego boost of knowing that a girl is driving herself crazy at the thought of not having you. Or maybe you need to be an anxiously attached person yourself, to enjoy a girl who's equally touchy-feely as you are. (The worst mismatch is a contact-seeking guy and an avoidant girl.)

I've had a weak spot for the early '90s girls ever since the late 2000s, when they were teenagers relentlessly flirting with, and often trying to scandalize, their hot 25-year-old tutor. Or as college girls circling around him at '80s night in the early 2010s. Everyone separated from everyone else during the late 2010s, but it's reassuring to see in hindsight that the early '90s girls were still trying to keep the social flame burning.

And I'll bet they're the most eager to break out of the quarantine, since they had been expecting the Roaring Twenties to finally re-establish the social bonds that had frayed during the vulnerable phase.

Moving outside of my own perspective as an early '80s manic-phase birth, I can see how this type would not be so endearing to -- indeed, would absolutely turn off -- a guy who was born during, and then turned 15 during, a vulnerable phase, whose formative influence was a social refractory period. That would be guys born in the late '80s, early 2000s, early '70s, and late '50s. I expect them to have more avoidant attachment styles, and an anxiously attached girl would strike them as a homing missile to take evasive maneuvers against.

Future posts may look at other cohorts born during a restless phase -- late '70s, early '60s, late '40s -- to see how similar they are to the early '90s cohort. And of course, in just a couple years there will be another crop of Boxxys and Mileys, born in the late 2000s and turning 15 during the current restless phase. Me Too, Stay at Home, Billie Eilish, sad boy hip-hop, etc. will seem like ancient history, before their time, and not formative influences.

I seriously doubt they'll be able to top the early '90s cohort, because the late 2000s were so anything-goes, and the early 2020s don't appear to be quite so freewheeling of a restless phase. But we'll just have to wait and see.

Related post: Manic Pixie Dream Girls are born during the manic phase. They have a different personality and role from the wild-child type. They're more like nurses or guardian angels during the restless phase, when men need to be coaxed out of their shells, and then pursue their own needs during the manic phase once that nursing job is done. They're free spirits, more socially independent. Wild-child girls tend toward co-dependence, looking for a partner in crime rather than someone to nurse back to health.

June 9, 2020

Ditch social media and start a blog if you're not parasitic on a news cycle (Angela Nagle, Alison Balsam, Heather Habsburg, Shamshi Adad)

I'm planning a history of social media platforms to see what factors are related to using them parasocially as opposed to not. Nothing mind-blowing, but sorely needed.

For now, though, I'll stick with the overall analysis and the "What is to be done?" section, leaving aside the development of the various platforms over time. Specifically, I'm calling for the abandonment of parasocial media -- Twitter especially -- and a return to blogs. And at the end I'm tagging a handful of people, off the top of my head, whose efforts would benefit most from this RETVRN TO TRADITION.

To begin with, parasocial media are inextricably incorporated into the broader media ecosystem, especially Twitter. When you log on, you are joining one great big talk-radio food fight that is owned, controlled, and managed by the media elite. The topics of discussion are not set by you -- they are set by the elites. You can react however you please, but you will be reacting to topics of their choosing. This could be the senior elite who choose what to cover on cable news, or it could be laundered through a "what's trending" algorithm that is prominently displayed on the platform.

Parasocial media reduce you to being a "talking head" -- or a "reacting avi," as it were. Such media only serve the interests of status-strivers who are wannabe pundits. If that's your goal, parasocial media are right up your alley. If you're not an aspiring cyber-pundit, then they are not for you.

You may think you're not reacting to the mass media news cycle, and are part of a niche that discusses its own pet interests. But most of that stuff is reactions to the news cycle, filtered through some niche lens. What are the left-Cath takes on the riots, for example? Or it's standalone stand-up comedy bits (see below), as filtered through that lens. A joke for insiders only about some 18th-century philosopher.

Fundamentally, these media prevent structured thought, and only promote discrete chunks of emotional energy -- hence the popularity of terms like "reaction," "take," etc. They're reflexive, reactive, emotive, and basic -- not in the sense of "pedestrian," but meaning not built upon or elaborated.

That is driven by the user base -- emotional cripples who require external injections of emotional energy, which they cannot get from their real lives. Some may be fanboys craving uppers -- white pills -- while others may be doomers who prefer to drown in downers -- black pills. Again, it's no accident that the popular terms are from addictive psychoactive substances. If you do not supply what is being demanded by the users, you will either go nowhere or get punished and hounded off the platform.

Structuring your thoughts, observations, experiences, and, yes, feelings into something meaningful like a message, an argument, a story, or an analysis, will be rejected by the parasocial media users. Digesting an argument, story, etc. requires understanding each of the pieces and how they relate to each other. Normal brains have no trouble handling this level of comprehension, which is one level above "getting a particular point," because you have to also get how the points fit together into an arrangement or pattern.

But emotionally crippled brains are so starved for emo injections that they can't deal with higher-order cognitive processes. It's akin to Maslow's hierarchy of needs. They cannot follow a line of argument, plot of a story (except for a bunch of unrelated events happening in sequence), or anything else. And even if they could manage this on occasion, they won't be able to perform it regularly and consistently.

"Wow, can't believe I managed to read an entire article!" [goes back to scrolling takes on Twitter for five hours straight]

It's not the character limit of Twitter that is to blame, either. Reddit allows multiple paragraphs per comment, but they are still just rapid-fire takes aimed at amassing upvotes and avoiding downvotes, as part of a cyber food fight, on a topic of discussion derived from the mass media news cycle -- whether a news cycle for politics, entertainment, or whatever else. It rarely promotes a structured, thought-over message -- and if so, that's only the original post that could just as well have been a blog entry, while the majority of stuff is comments on the post, i.e. a food fight of reactions and takes.

Podcasts are another "long-form" medium that are nevertheless devoid of structure. Except for lectures or other scripted forms, they are recorded on-the-fly -- no way to arrange things properly, develop thoughts, let thoughts ripen, and so on. They are mostly shooting-the-bull sessions (again, mainly on topics derived from some news cycle), whose purpose is to make users feel socially included in the conversational circle. They are not meant for delivering arguments or telling stories.

The telltale sign that parasocial media promotes reactive takes over structured messages is the upvote / like / fave. "Did this satisfy your emotional craving?" Clicking "like" is akin to a lab rat pressing a bar to receive another pellet, sending the signal to the content-supplier to keep on giving what they've been giving. Withholding your "likes" is refusing to pay your supplier because their content wasn't the real strong-feeling stuff.

Parasocial media intensify this atomized structurelessness even further by slicing up the entire message into discrete takes -- each one of which is put under the like-meter to gauge how hot the take was. It's worse than a generic thumbs-up for an entire movie, or book recommendation. Every single micro-remark is rated for potency by the junkie user base. They wouldn't do that if they were interested in your overall, big-picture view.

It's as though you made a meal, and the consumer is trying to throw out all but the most taste bud-inflaming ingredients. There it is -- the ingredient with the most likes, not the other ingredients with fewer likes. Just pump that simple sugar straight into their desperate pathetic veins.

That's assuming you were trying to structure your thoughts and feelings to begin with, e.g. in a thread on Twitter. Soon you'll learn to stop trying threads and structure, and reduce yourself to standalone riffs and takes, after which the audience either laughs or does not. Aside from the talk radio atmosphere, parasocial media increasingly resembles a stand-up comedy club whose topics are all topical news-cycle fodder. It seems like a coalition-building cohesion exercise, but it's less of a pep rally and more of a group therapy session. It does not lead to any meaningful victories -- rather, success is avoiding thoughts of suicide for another night. It is numbing, narcotic, enervating -- not motivational, joyful, or satiating.

Sure, there are a handful of exceptions where someone writes a structured message in a text editor, then screenshots that message into an image, and uploads the image into a tweet, for digestion by normal brains, rather than a continual IV-drip of white-pill junk or black-pill junk. Nassim Taleb, for example. But that is more like composing a blog entry and linking to it on a social media platform -- it's not really tweeting per se.

And in fact, Taleb was already famous outside of Twitter / before Twitter. You are not. Readers find Taleb's structured messages because they already knew who he was, followed him on that basis, and saw the screenshots in his feed. How will anyone find your screenshots of structured messages? Twitter is a verbal platform -- you can't search images with text-recognition software. Maybe write a brief headline -- but then people could only find keywords from your headline, not the actual body of text. Taleb's screenshots come with some expectation about content -- they're going to be a mathematical proof, or logical argument, etc. Your screenshot is coming to users from an unknown source -- as far as they're aware, it's just a pointless wall-of-text that's too small to read. Why bother inspecting it?

Just start a blog instead. If you want to retain a bare presence on Twitter to link to your blog, OK. But why else would you continue to be on there? To cling to an unfulfilling role as a pseudo-pundit in a neverending talk radio show, with millions of spazzes crammed into the same conference call? Respect yourself.

* * *

Let's end with some concrete examples of people who should revive the blog form. This is partly a suggestion to them, if they somehow come across this (it's not urgent enough for me to message them directly), but more to use them as examples that others may resemble, including their devoted followers.

First, Angela Nagle is not on Twitter, but is part of the commentariat in the media ecosystem. She's already logged off, and is halfway there. She's observant, insightful, empathetic, curious, sincere, and uncloistered (unlike many aspiring / former / actual academics). She already writes long articles and books on political or economic topics, but there's a lot more going on than just that. It'd be interesting to see what strikes her curiosity enough to write a blog entry about.

Next is a soul too generous for Twitter, Alison Balsam (AKA @foolinthelotus when she's not deactivated). She's come the closest to trying to use social media for thought-structuring and storytelling, while also doing standalone riffs. But the two cannot co-exist: it's either ironic lib-arts riffs that go viral and attract a following of take junkies, or sincere stories from her childhood, amusing encounters with the critter world, the state of the culture (overall, not following some news cycle), and frank but relatable confessions from the present (not emo trauma porn). She would never have to apologize for sincere-posting on a blog -- it's understood, and appreciated.

Those two were born in the first half of the 1980s, what I'd call the tail-end of Gen X. We are more introspective and less exhibitionistic than Boomers or Millennials, so I think we make the best bloggers. Millennials are generationally unable to amass great material wealth, and mostly unable to afford the lifestyle contests that Gen X-ers compete in, so they resort to persona-construction for their status contests. (See these two posts here and here.)

That puts immense pressure on Millennials to amass currency in the form of clicks, likes, retweets, followers, and, for the lucky few, donations via Patreon or Venmo. They generate this currency and status through their persona construction, which is disseminated over social media. I don't know that they can resist the urge to keep investing effort in their persona / brand maintenance and do something like blogging that doesn't generate likes and faves. On a blog, at most you see how many views a specific post got, or your domain overall. But even those stats are not public -- there's no way to display your status to your rivals. Blogging is a hobby or an ongoing project -- not a hustle.

Still, there's a critical mass of Millennials who are growing tired of social media toxicity that could revive the heyday of blogs. I'll just mention two, both of whom show enough other-orientation through their interest in the past to rise above typical Millennial self-absorption (they can have a little, as a treat).

Heather Habsburg (@HeatherHabsburg on Twitter, when she's not locked or deactivated). Aside from anti-woke left takes on topics from the news cycle, she's also got the seeds for an oral/social history of the cultural period from roughly 2005 to 2019, which is only just concluding and thus not a fully formed narrative. I think she would lean more toward evoking the zeitgeist through personal stories than describing it through clinical examination. And she's illuminating on lesbian patterns of thoughts and feelings, again more through showing or expressing it directly than through describing it analytically. Even if you're not the target audience, you come away understanding a lot. Pleasantly and refreshingly feminine mindset and expressive style.

Marina (@Shamshi_Adad on Twitter, when not deactivated). I only recently found her through comments on Heather's posts, but what little I've read is interesting enough to wish she kept a blog. Pre-Axial Age history especially. But also Long Island life and ethnography -- I'm so sick to death of being aware of Da City, which is full of transplants with no collective histories to tell. Or whatever else strikes her fancy -- lots of things are fascinating when you're in your early 20s and have an inquisitive mind.

Those are just a few whose interests overlap with my own -- even if I couldn't care less about your subject matter, you should still be discussing it on a blog rather than in takes on Twitter.

I keep saying "when not deactivated" because the non-retarded minority on Twitter are increasingly locking or deactivating their accounts on a regular basis, to avoid the gay slap fights that the retarded majority want to keep dragging them into. At that point, you might as well start a blog instead and just moderate comments to keep out the emotionally addled take junkies jonesing for their dopamine fix.

The parasocial and parapolitical phase of 2015-'19 is over. There's no point in participating in it anymore, and that's all that remains for most social media people now. You know you hate it, so starve it rather than feed it. Your friends will follow you anyway.