Showing posts with label Technology. Show all posts

April 30, 2025

Gawr Gura memorial song: "In the Real World" by the Little Vir-maid

The virtual shark-girl streamer who took the world by storm officially graduates today. I have a whole backlog of tribute songs I'll be posting here. This "in memoriam" song is set to the tune of "Part of Your World" from The Little Mermaid (lyrics, music).

As I said with Mumei's memorial song, Millennials' and especially Zoomers' native habitat is online, so IRL is this strange exotic territory that they're alternately fascinated and frightened by. Growing up, maturing, leaving the nest -- these all have to do with finding their place in meatspace, and navigating relations with the perplexing creatures called "other people" (as opposed to "other accounts").

In Goob's case, she got pulled out into IRL without intending to. Fandom taboos aside, it's pretty clear that she became a mommy -- like the time she came back and, while casually chatting with Ame, asked out of nowhere if Ame had ever lactated, totally matter-of-factly, as if to compare notes with her own experience.

Details like that are important, not as gossip about e-celebs, but to make it clear that she has a perfectly respectable and noble reason for having largely left behind her turboposting memelord career for the past couple years. And to emphasize that IRL still has a powerful attractive pull, yes even on terminally online, algo-poisoned Zoomer brains.

And that's what this memorial song is about -- her feeling restless after living and doing so much online, and wanting to escape out into a normie IRL existence (notwithstanding the occasional visit / reunion). For the veterans of irony-poisoned toxic content wars, IRL normie life is not "settling" or "retiring" -- it's liberating and rejuvenating! ^_^

(Atypical stress patterns: CARE-ee-oh-KEY, ee-MOTES, meat-STAN, tar-ZAN. And "nendie" is short for Nendoroid. Also, do Millennials and Zoomers realize that "cut the cord" is an allusion to cutting the umbilical cord? That's where the phrase comes from, as applied to devices we've become dependent on -- if it were literal, you wouldn't cut that kind of cord, you'd simply unplug it.)

* * *


Look at these subs, all at tier 3
Breaking the 'net during karaoke
Wouldn't you think I'm the girl
The girl who sets trendy things?

Custom emotes, membership gold
How many fan-arts can one hard drive hold?
Lurking around /here/ you'd think
"She's the trending thing"

I've got ad rev and anime nendies
Several spots 'top the meme leaderboard
A million followers? I've got twenty
But fresh air, can't be streamed -- cut the cord

I wanna be, in the greenest yard
I wanna breathe, all the flowers they're planting
Grilling their ribs with that -- what do ya call it?
Oh, mesquite

Clicking your keys, you don't bond too hard
Hands are required for shaking, planting
Climbing your way through a -- what's that word again?
Tree

Out where they talk
And call you by "hon"
Out where they're face to face, one on one
Shooting the breeze
Wish I could be
In the real world

Trade all my clips, to trade some quips
Not just spam "poggers"
Pay 'em top rate, to elongate
My attention span
Bet in Meatstan, Jane finds Tarzan
Bet they don't shadowban mom bloggers
Online women, sick of simpin'
Won't trust the plan

I'm ready to grow where the green grass grows
Ask 'em my questions and get some answers
What are tires and why do they -- what's the word?
Turn
New routes to learn
It'd be such a buff
Forevermore live outdoors, off the cuff

Off the PC
Climb out the screen
To the real world

April 29, 2025

Memorial song, "IRL-mei" by Moomlan

Now that Hololive's owl-girl Mumei has left the nest, here's one last tribute song to memorialize her, in case she's still lurking (she does like to watch, y'know...).

Set to the tune of "Reflection" from Mulan (lyrics, music). Her fandom uses "-mei" to refer to various personas of hers, like "lolimei" for when she talks about her school days. So IRL-mei is who she is in real life.

I really had a think about this: which way should it be framed? Is her online persona the real, primary one, and her IRL persona a disguise? Or the other way around?

For Gen X-ers and earlier, our IRL selves are the real thing, and we adopt masks or disguises online.

For Millennials and Zoomers, though, they live and grew up so online that the online self is their primary, unmediated self -- strange as it may seem, given that it is technologically mediated. But it's unmediated in the sense of not being disguised -- not very much, anyway. And their IRL persona is the more heavily guarded, disguised, not-so-recognizable version of their true online self.

Moom, in her role as a paranoid schizo conspiracy theory Disney princess, always kept her online persona heavily guarded from her family and friends. And although she shared lots of personal details with her audience, she still kept her IRL life at some distance. Leading two lives, or trying to live in two worlds at the same time.

But I think her online persona was/is the real one -- whether as Nanashi Mumei from Hololive-English Council, or in her earlier online existence(s), she used her cyber-persona to confide in people, vent, open up, express herself, and in general be her true self. Her IRL persona, as she shared many times with us, was mostly a blurry cloud to those around her, a ghost in a black hoodie (or something like that), as one of her schoolmates described it to her.

So I wrote this from the perspective of her online persona being the deep-down true one, and her IRL persona being a secondary, shadowy projection of it.

Recently she mentioned that she's going to open up to her family about what her vtuber persona and experience were, to some degree anyway. That's the story of character growth and maturity for raised-online Millennials and Zoomers -- being able to discuss your username, avatar, posting history, content archive, and so on. That's the *real* you, and you don't just share it with any ol' group of people from IRL!

It took getting such a fascinated welcoming reception from online audiences, to convince Moom that she really is talented, lovable, and... interesting! She would never want us to call her "cool". ^_^

We're glad we could play a role in giving you the confidence necessary to Just Be Yourself (TM) with those close to you IRL, you sweet schizo songbird, you. ^_^

* * *


Look at me
I could never last out in normie life
Or sail normie waters
Can it be
I was meant to spark fan-art?
Now I see
If I talked to them through my live2D
Their view of me would press restart

Who is that owl on screen
Singing proud in worldwide streams?

Why is IRL-mei someone I won't post?

Somehow I cannot priv
The girl who lives in my archive

When will IRL-mei share
Who I am online?

When will IRL-mei share
Who I am online?

March 30, 2025

Further failures of AI slop "art", wannabe Ghibli edition

Gonna pull the comment thread from the previous post into a standalone post, to get a new ball rolling, and start some more aesthetics posting in the comments, putting the 50-year civil cohesion cycle on the back burner for a few days (although I have plenty more to say already).

* * *


Well, if no one else is gonna spell out why the Ghibli-fied AI slop doesn't actually look like Ghibli, I guess I will. I already wrote a more expansive post about AI slop in general. This'll just be a couple comments since it's more focused and won't be wide-ranging.

I'll skip the typical midwit crap that no one cares about, but that generates all the buzz in the media -- no one cares about copyright, it's fake.

Employment for artists does not depend on how they perform vs. AI, it's solely a matter of the patrons' willingness to give up their money for something great, or cool. If they're unwilling, the artists don't get hired -- whether this is rationalized as "no one does good art anymore" or "AI does an equal or better job than human artists" or "artists are Democrats and I'm a Republican".

So AI is not going to eliminate jobs for artists that would have been there, if not for the AI. "AI" is just an excuse for slashing jobs that were going to be slashed no matter what.

The American Empire is collapsing, so there's less wealth to spread around, and the elites are greedier than ever. *That* is why there are hardly any jobs for artists, compared to the good ol' days.

At the heart of the matter is "does the Ghibli-fied AI slop even resemble Studio Ghibli works?" And the answer is -- no. It's just a re-skin of an image, usually a digital photo but perhaps another piece of AI slop.

And the aspects of the image that it re-skins are the most superficial -- mainly the facial features on human faces, giving them the standard in-house proportions, lines, and shapes, and as a result the expressions, of Ghibli characters.

It also re-skins the linework into a more illustrated look, and blocks in the color as in an illustration, with minor use of sculptural shading.

However, just cover up the facial expressions, and ask how Ghibli it looks -- it doesn't, it looks like any ol' Photoshop filter that makes a photo look like an illustration instead. E.g. the ubiquitous Photoshop rotoscoping of the 2000s, which detects the outlines of major shapes and gives them dark bold outlines, and then you can fill in the interior with whatever color block you want.
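
The rotoscope-style filter described above boils down to two operations. Here's a toy grayscale sketch of them, with made-up thresholds -- real filters (Photoshop's Poster Edges and the like) are just more careful versions of this:

```python
# Toy sketch of the two operations a rotoscope-style filter performs:
# (1) detect outlines where neighboring pixels differ sharply, and draw
# them as dark bold lines; (2) flatten everything else into posterized
# color blocks. Grayscale-only; the thresholds are made up.

def rotoscope(img, edge_thresh=40, levels=4):
    h, w = len(img), len(img[0])
    step = 256 // levels
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # crude gradient: compare with right and bottom neighbors
            gx = abs(img[y][min(x + 1, w - 1)] - img[y][x])
            gy = abs(img[min(y + 1, h - 1)][x] - img[y][x])
            if gx + gy > edge_thresh:
                out[y][x] = 0  # bold dark outline
            else:
                # posterize: snap the pixel to one of `levels` flat tones
                out[y][x] = (img[y][x] // step) * step + step // 2
    return out

# two flat regions (dark left half, light right half) with a sharp boundary
img = [[30] * 4 + [220] * 4 for _ in range(4)]
res = rotoscope(img)
# the boundary column becomes a black outline; each half becomes one flat tone
```

Note what's missing: nothing in there chooses colors, lighting, or composition -- it can only trace and flatten what the input already contains, which is the whole point being made here.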

Someone went further to make an entire movie out of digitally rotoscoped film footage (i.e., the alterations were done by computer programs, not by a hand moving a pencil or pen over tracing paper on top of a lightbox with a film still being projected up through it).

That was the 2006 movie adaptation of Philip K. Dick's iconic novel A Scanner Darkly -- it was such a snooze that I literally fell asleep in the theater. I didn't go out to the movies much after the '90s, when they all started sucking. But I did venture out for that one, and I wrote it off as just boring.

After reading the novel some years later, it really struck me how terribly they butchered it in the movie, and the visuals were a key part of that. There was nothing in the novel to suggest visualizing it as taking place in a '90s virtual reality aesthetic. It looks so stupid.

So, the Ghibli AI slop is just a reheated Photoshop rotoscope filter. Depending on which illustration style you're telling it to emulate -- Ghibli, 1940s Disney, or whatever else -- it renders its rotoscoped trace-over lines in the intended line art style. And then it fills in the color blocks in the same style, with or without sculptural shading depending on the intended style.

It really is mind-blowing how technologically retarded, and aesthetically blind, everyone has become by now. It's just a Photoshop filter, belonging to an existing class (rotoscoping), that requires a full input image to operate on, then spits out the output. This slop is literally 20-25 years old, not cutting-edge at all. I don't mean that Studio Ghibli's signature style is old, which it is, I mean the tech used to "make my image look like Ghibli" is old.

It doesn't qualify as AI either -- no more than "Photoshop" counts as an AI image-generator. When AI generates images from verbal prompts alone -- that's where the real slop comes in, and I already covered that in the standalone post from a few months ago. When it's just transforming an existing fully rendered image file, it doesn't even count as "generating" the output -- it's just an alteration or re-skin or transformation, a la Photoshop.

Putting aside the datedness rather than cutting-edgy-ness of the tech being used, how good is it at emulating a certain coherent style, e.g. "Studio Ghibli" or whatever else you prompt it to emulate?

Not good at all. As mentioned, 95% of the dum-dums' "gee wowzers!" reaction is due to the human facial expressions alone, which do not count as an entire aesthetic or style.

Damningly, the AI gets the Ghibli *animal* expressions completely backwards. I image-searched "Ghibli AI cat" to see representative examples, and the cats all look very naturalistic in line, shape, proportion, and expression -- with some basic line art and color-blocking to make them look like drawings rather than photos.

But Ghibli never renders animals that way -- their signature, distinctive in-house style is to make animals look caricatured, from mundane ones like Kiki's black cat to fantastical ones like Totoro. Their animals always look unusual, exaggerated, even surrealistic, compared to the human beings from the exact same movie, who look much more naturalistic -- just with a little line art and color-blocking. But the people are rarely caricatured visually.

As I said in the previous standalone post, AI slop is biased toward photorealism rather than stylization. Even when you specifically tell it to emulate an illustrated / animated style, and where the animals have a distinctly stylized and caricatured look, it can't help but portray them naturalistically, by illustration standards, rather than the caricatures that are truly and already present in the training set data.

So, even if you were as lenient as possible, "OK, let's just grade it on how Ghibli-esque the faces or bodies look," it fails. It does well with people's faces, although Ghibli doesn't have very distinctive human body shapes (unlike, say, The Simpsons, South Park, Peanuts, Garfield, etc.), so the fact that the AI slop matches the original on body shapes is no proof of its intelligence or accuracy.

But it fails completely for animals -- and in order to achieve a Studio Ghibli aesthetic, how the hell can you ignore animals? They're central to every single one of their works -- sometimes they're the main characters, like in Pom Poko! It's like with Disney, an imaginary world filled with animals who have more personality than ordinary persons.

The line art and color blocking and minor sculptural shading are the remaining 5% of the "gee wowzers!" reaction. It does all right, but that's cuz Ghibli doesn't have very distinctive line art and color blocking -- that's just a generic illustrated or animated look, not specifically Ghibli.

The programmers would get more credit if they tackled a more distinctive target, like Disney's Aladdin, which has very specific line art, itself derived / inspired by the illustrator Al Hirschfeld. *That* movie, hand-drawn from 1992, is impressive -- matching the line art to the original inspiration's style, and doing it throughout an animated movie rather than still illustrations.

So far we've only tackled aspects of (Florentine) disegno, not (Venetian) colorito. And as any art appreciator knows, disegno is basic or irrelevant, and colorito is where all the artful liveliness... well, comes alive!

The reheated Photoshop rotoscoping filter does fill in the interior of outlines with color blocks, but which colors does it choose? And which color combinations? And in what lighting conditions -- evenly lit bright, evenly lit dark, evenly lit hazy twilight, chiaroscuro?

It makes no decisions on these central facets of the image's aesthetic. It blindly copies them over from the input image. It flattens a range of colors into a color block, and likewise for lighting variations getting flattened into a "shaded color region," like animation.

But the range of colors it's choosing from, and brightness or darkness conditions it's choosing from, is what is already present in the image.

If a person is wearing a brown shirt in a photo, there are in fact a zillion different shades of brown present at the pixel level. The filter chooses one shade from within that range of browns, and expands it throughout the entire region of the image.

But the filter didn't choose the person to be wearing a brown shirt, rather than a red, blue, yellow, or purple shirt.
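
That flattening step can be sketched in a few lines (the pixel values are made up, for illustration). The output hue can only come from the input -- the filter never decides the shirt should be brown in the first place:

```python
# Minimal sketch of the color-flattening step: collapse the many nearby
# shades already present in a region into one representative shade.
# The representative shade is drawn from the input's own range -- the
# filter makes no choice about what color the region "should" be.

def flatten_region(pixels):
    """Replace every pixel in a region with the region's average color."""
    n = len(pixels)
    avg = tuple(sum(p[c] for p in pixels) // n for c in range(3))
    return [avg] * n

# a "brown shirt" region: several nearby shades of brown from the photo
browns = [(139, 94, 60), (128, 80, 48), (150, 100, 70), (135, 90, 55)]
flat = flatten_region(browns)
# every output pixel is now the same single brown, chosen from within the
# input's range -- a red or blue shirt was never on the table
```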

Ditto for the lighting conditions -- copied over, and simplified, from the input image. Not over-writing them or second-guessing them, like making a relatively bright region dark, or making a high-contrast image into an evenly lit one, or whatever.

Therefore, the distinctive Ghibli-ness of the output image is entirely dependent upon the input image already possessing the distinctive colors, color combinations, and lighting conditions, of a Ghibli image. Whether deliberately or coincidentally -- but given that this filter showed up after the photos were taken, we can assume any resemblance of the input image to Ghibli is purely coincidental.

That is, because the input photos were NOT made to look Ghibli-esque to begin with, regarding colors and lighting, the output of the Ghibli filter will look no more Ghibli-esque. It adds no value, passively copying over the original choices, simplifying them somewhat to look illustrated rather than photographed.

No wonder none of those Ghibli AI slop images look like they were taken from a Ghibli movie -- where are the rich blue skies, the verdant green grass or foliage, the pale buttery creamy yellows to contrast against the saturated blues and greens, billowy white clouds, and all the other fixtures of a characteristic Ghibli image?

Where are the brightly lit exteriors and landscapes? Where are the chiaroscuro interiors, or outdoor interiors like a clearing in a forest? Where is the connection to the ukiyo-e woodblock prints, which all iconic Japanese art afterward derives from? Something as basic as the background of Super Mario Bros looks more Ghibli-esque and ukiyo-e derived, regarding color and light, than the latest dud of a Photoshop filter that purports to be oh-so-much smarter and cutting edge. It looks dumb and dated.

Then there's composition, or the arrangement of the separate objects in relation to one another to yield a single coherent scene. Since Japanese animation is heavily influenced by photography, regarding composition, this implies things like "camera placement," "camera angle," and so on.

As with colors and lighting, the reheated Photoshop rotoscoping filter does not make any decisions about camera placement -- height off the ground, angle in any direction, proximity to subjects, blurry vs. sharp focus, and so on. Just blindly carried over from the input image.

Therefore, any resemblance to Ghibli images is coincidental, and due to the creator of the input image, not to the AI programmer.

And of course, very damn few of those photo-snappers were going for a Ghibli look, meaning they largely do NOT look Ghibli-esque. If Ghibli had totally naturalistic camera placement, angle, etc., then perhaps a fair share of ordinary candid photos would resemble it.

But they go for more stylized camera placement, like very high or very low angles (especially in those iconic landscapes, a low-angle camera somewhat close-up, showing the people or animals appearing to tower right up into those rich blue skies and billowy white clouds).

Most ordinary photo-snappers don't opt for off-center compositions, cropping, or really consider composition at all. That's why none of them looks like a still from an anime, where such concerns are central to every scene.

They look exactly like a typical candid photo shot by someone with no aesthetic concern while pressing the button -- cuz that wasn't the point, it was just to record a memory or event in visual form, not to be artistic, let alone to emulate a certain aesthetic like Ghibli or whoever else.

The fact that line art and color blocking is slapped on top of these totally ordinary compositions, ordinary colors, and ordinary lighting, does not change the fact that the original images -- and therefore, the superficially re-skinned outputs -- do not look like anime, of any studio's style (Ghibli or otherwise).

Final meta-observation, about the state of commentary or criticism in both art and technology. I see no evidence that anyone commenting on these topics majored in art history, or is self-taught in it.

Maybe some of the practicing artists, who all uniformly hate AI slop -- but then the dum-dums just write that off as professional jealousy against their computer program job market rivals, rather than taking their opinion more seriously since they have demonstrated some level of "having a good eye" through their art.

Otherwise the terms I used would be standard in tHe DiScOUrSe about this AI slop. Again, perhaps the practicing artists have, but I don't think so. They just say, "Wow, this looks like shit". Fair enough -- they're artists, not commentators or critics. But anyone else should have a basic toolkit of terms, and the visual and perceptual skills needed to analyze images, along with practice from studying art history.

As I said in the previous post, though, all the AI slop cheerleaders are wordcels, not visual people. Forget artist vs. critic -- the more important difference is wordcel vs. shape-rotator or color-perceiver.

Again, their choice of words and their arguments are never about the visual nature of what they're bla-bla-bla-ing about. It's too vibe-y or meta-, like does this represent the human spirit or not?

Who cares about what you think represents the human spirit? -- just tell me what you're looking at. You can't build an argument about art without first knowing what it is you're seeing. And these wordcels can't tell you what's staring them right in the face. They're just not visual-brained. They can analyze narratives and dialog and word choice, but they can't talk about visual art at all. It's beyond their ken.

Nor have they written a computer program of their own -- OK, that's forgivable, like not being a practicing artist. Do you at least know what programs do, have you used them before? Maybe you could be a decent critic of the tech, despite not being a practicing computer coder. How many hours did you spend Photoshopping digital photos during the 2000s?

These dumbos can't even recognize a Photoshop rotoscoping filter when it's staring them in the face -- and the output of that filter was ubiquitous, not a niche thing. How about the program itself -- Photoshop?

Their only awareness of that seminal piece of tech is in their verbal wordcel meme-world, where "haters will say it's photoshopped" was verbally altered into "haters will say it's AI". They view that as a mere verbal riff, updating an older and semi-outdated joke -- or so they think. What if this use of so-called AI is functionally identical to a Photoshop filter? Well then, there's no need to update the joke.

In fact, the datedness of the tech's functionality needs to be called out, and the pretense of it being cutting-edge / the future must be cut down to size. It's not progressing and cutting-edge when it's 20-25 years old -- eons, in computer tech lifespans.

As I said before, these AI slop-slurpers are just gadget-diddlers, they don't know any math or computer science. Jesus, they don't even remember what Photoshop already did 25 years ago! And they're not visual people either. They are the last people to ask about the matter of "AI art".

They just really get a dopamine rush from playing around with gadgets and devices, and the AI prompt is just another gadget for them to diddle and feel dopamine hits from.

Some of them are paid to hawk this slop, some are just really obsessive about their favorite gadgets and shill them for free.

Either way, it's a sign of our collapsing culture that the legacy and Millennial media outlets won't track down, let alone pay, someone who can do what is necessary to comment on these matters.

But then, that's why you keep returning to these ruins of the blogosphere, to ask the cliff-dwelling sage what he thinks about all this crap. ^_^

"This'll just be a couple comments since it's more focused and won't be wide-ranging."

Famous not-so-last words that I never, ever stick to... but you've probably picked up on this quirk of mine by now. I just can't help it, and I'm like that with in-person presentations too, not just online / in writing.

But it's worth it, you wouldn't want some crisp, terse, just-the-facts bullet-point slideshow, if you're trekking up the Cliffs of Wisdom. You can get that from any ol' talker. My meandering is always coherent, on a zoomed-out-enough perspective, not pointless. ^_^

All for now.

November 21, 2024

AI slop is stylistically schizo and contradictory, human art is coherent and unified in style

A recent post from Scott Alexander on AI slop vs. human art asked respondents to guess whether certain images were made by AI or a person. If AI could fool enough respondents, was it not real art, a la the Turing test for judging whether a machine was intelligent?

Well, they did not fool respondents -- the average and median score was 60% correct, compared to the 50% they'd get by flipping a coin. We just had an election -- 60 to 40 is not a close election. This contradicts the title of section 1, which claims people had a hard time identifying AI art, just cuz the score wasn't near 100% correct. Having a hard time would mean they did no better than a coin-flip.
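
To put a rough number on how far 60% is from a coin flip, here's a quick check under the chance null. The 50-guesses-per-respondent figure is an assumption (the survey's images are numbered up to 50):

```python
# Rough sanity check on "60 to 40 is not a close election", under the
# coin-flip null hypothesis. Assuming, hypothetically, 50 guesses per
# respondent: how often would pure guessing score 60% or better?
from math import comb

n, k = 50, 30  # 60% of 50 guesses
p_single = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
# roughly 1-in-10 for a lone guesser -- but the survey's *average* across
# many respondents was 60%, and the chance of the mean landing that far
# from 50% shrinks rapidly as the number of respondents grows.
```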

And he admits that he put a massive thumb on the scale by screening out AI images that had the telltale signs of being AI -- which is to say, the telltale signs that are already known about: disfigured hands and fingers, garbled text, "wrestling" poses where there's a lot of interaction between two bodies, and the entire style of the recognizable DALL-E model. This experiment revealed, to me at least, other telltale signs -- but more on that later. Even with this thumb on the scale, people were still not fooled.

Of course, when it comes to computer models, the question is not whether a computer program can do something, but how much complexity it costs, and how good the output is. If it takes a 137-degree polynomial to draw a curve through 7 data points, that's over-fitting the data. How many prompts, with what degree of specificity, does an AI generator require before it gives sufficiently passable results -- plural, as in reliably replicated, not just a fluke success?
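
The over-fitting point can be made concrete: 7 points already pin down a unique degree-6 polynomial, so a 137-degree fit is 131 degrees of pure wasted complexity. And even the exact-fit interpolant misbehaves between the points (the data below are made up for illustration):

```python
# 7 data points determine a unique degree-6 polynomial (Lagrange form
# below), so any higher-degree fit is wasted complexity soaking up noise.
# And even the exact interpolant oscillates wildly between the points.

def lagrange(xs, ys, x):
    """Evaluate the unique degree-(n-1) interpolant through n points at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = list(range(7))
ys = [0, 1, 0, 1, 0, 1, 0]  # made-up alternating data
# hits every data point exactly...
assert all(abs(lagrange(xs, ys, xi) - yi) < 1e-9 for xi, yi in zip(xs, ys))
# ...yet between the first two points it overshoots the data's whole range
assert lagrange(xs, ys, 0.5) > 1.5  # the data never exceed 1
```

Fitting the data perfectly while swinging around wildly in between is exactly the "less smart, more complex" failure mode the analogy is pointing at.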

The more complexity that needs to be built into the model by these prompts, the less smart and talented it is. The real comparison is, how many prompts or constraints, and with what degree of specificity, would you have to tell a human artist before they gave sufficiently passable results? Not many at all! The computer is a massive downgrade, if you're telling someone or something else what you have in mind.

The real fascination with AI slop is that the turnaround time for results is relatively fast, compared to the labor-intensive work of human hands. And so even though the results are slop, they're at least 20% real-ish, so you're OK with that trade-off -- crappy quality, but fast results for AI, instead of high quality results that take much longer for a person.

So then AI is not superior or equal to a person, it's a different point along a trade-off continuum, and nothing to gawk at as though it were a higher form of intelligence or existence than our own.

* * *


My main interest, however, is in further analyzing what gives away AI art, beyond the already well-known signs like mangled hands, garbled text, and interactive bodies that turn into Mr. Potatohead abominations.

Those are specific to the subject matter -- what is being portrayed. But scrolling through the images -- and I was not fooled by more than a few (more on why they can fool, later) -- I discovered a more fundamental and stylistic giveaway of AI art, which gets to its very nature, or perhaps lack of a nature, compared to human nature.

Namely, AI -- being a program without a mind or spirit of its own -- can easily be of two minds, even in contradictory ways (not just divergent), at a stylistic level. Not what subject matter is portrayed, but the manner in which it is portrayed. Human beings, possessing a single mind of their own, are of one mind about the manner in which they portray the subject matter.

Consider image 4, which I instantly felt was AI (and it is). There is a clear main subject, close to the viewer, and it's a person. I don't think it matters if the subject is a non-human animal, plant, inanimate object like a boulder, or human artifact like a chair or door -- something that is the focus of attention, in the foreground, near the viewer. Then there's a background environment in which the subject is embedded -- not a portrait in a vacuum.

The subject being a person means it takes the form of a portrait, while the environment takes the form of a landscape.

Yet the styles of these two forms are different and contradictory. The landscape is Impressionist, although who cares exactly what period it's mimicking -- the point is, the level of detail is low-resolution, blurry, with blobs and patches and planes of color more than crisply delineated and complex shapes. This applies not only in the distance, where things are naturally more blurry, but right up in the foreground -- look at the flowers directly around the girl, their stalks look like single thick brushstrokes, and the petals are thick daubs of color. Low-detail, blurry.

Then all of a sudden, the girl in the portrait section is rendered in fairly high detail, in focus rather than blurred. It's not 100% photorealistic, but it's far more in that direction than the highly stylized rendering of detail for the landscape section. You can see multiple folds on the fabric of her clothes, with light / shading for sculpting purposes -- which is NOT used on the grass, flowers, dirt, trees, etc. in the landscape. You can make out individual wisps of hair on her head, each tiny curving line inside her ear (with shading-for-sculpting again), and so on.

This detailed focus gets more blurry and Impressionist as you look toward the bottom of her dress and shoes, and I notice in the other images that the trigger for photorealism seems to be a human face or other exposed parts of the human anatomy. So even just her dress -- which is a single garment, not a separate top and bottom -- looks schizo stylistically, with a more photorealist upper region and a blurry Impressionist bottom region, further from the trigger of exposed human anatomy.

The machine doesn't understand that a single self-contained work of art is supposed to be coherent and unified in style or presentation. It has clearly been trained on photorealistic portraits and Impressionist landscapes, one not-so-stylized and the other highly stylized. And so when asked to combine a portrait within a landscape, it figures why not combine the best of both worlds? -- a high-detail portrait in a landscape that is blurry immediately surrounding her, not to mention farther away as well.

This is not just shallow focus from photography or cinematography -- at the exact same distance from the "camera," there are simultaneously a sharp-focus object and blurry objects. That's not physically or technologically possible -- and could only be done by deliberate choice of the artist, in some warped form of artistic license.

But artists never use that license, cuz it violates the fundamental requirement to present the subject matter in a coherent unified style -- all blurry and Impressionist, or all sharp-focus and photorealistic, but not some of one and some of the other in the same work.

To give a pity point to the machine, it at least does the sensible contradiction instead of the wacko contradiction -- it renders subjects in sharp detail (as though we're giving them our attention), while leaving environments in blurry detail (as though they're in our peripheral vision, not as important), rather than an Impressionist portrait set within a photorealistic landscape (akin to animated figures superimposed on a photographed real-world environment, like Who Framed Roger Rabbit?).

This schizo clash of styles within a single work is how I identified most of the other AI images.

Photorealist portrait in blurry landscape also told me the following were AI: 7, 10 (again, not a portrait of a person, but with a clear subject taking up much space), 13 (cartoon head, realistic water), 16 (the background being just a fairly uniform color plane), 21 (the environment's flowers are blurrier than the decorative flowers on her clothes, despite both being close to the camera), 23 (background looks like an Abstract Expressionist painting, and even within the mother's clothing, the colored pieces are blurrier than the white pieces), 27, 33, 40, 46, 49 (the wacko contradiction, where the close-up buildings are blurry while the distant water is in sharper focus)...

And the most insane is 26, whose subject looks like he was photographed under pristine studio conditions -- while the landscape outside the window is a highly stylized Venetian-type portrayal. Is it supposed to be a painting within the artwork, hanging on the wall of this room? To me it looked like a landscape shown through a window, that ol' trick. There's what could be a decorative frame just below it, but not running up the left side of the landscape... so it's a bit schizo in its subject matter, but also in the style, with totally opposite styles for the landscape and the portrait.

Related, there are some whose subject matter is a bunch of abstract geometric shapes, with no 3D depth cues, no lighting variation, etc. -- and then a single human face or body, with multiple features (eye outline, iris, pupil, lips, individual teeth, etc.), sometimes with shading-for-sculpting. The dum-dum AI doesn't understand that a single work has to be entirely abstract or entirely representational. This gave away 6, 17, 24, and 50.

I could tell that 19 was by a person cuz although there are geometric shapes and a stylized human head, the geometric shapes are not separate abstract objects from the representational head -- they're used to form the lines around the head and its features, or to fill up volume within these features, suggest texture of the features, etc. They are building blocks to render a representational object -- not a separate array of abstract shapes, plus a representational head in their midst.

The Impressionist landscapes with no dominating subject are less obviously AI, cuz the contradictory rendering of subject and background cannot happen. Still, their subject matter or compositions look more like photographs, which were then passed through a blurry / stylized / Impressionist filter. The point-of-view, angle, perspective, cropping objects at the frame's edges, etc. Very photographic in composition, if not photorealistic in detail. And painters or illustrators rarely do this -- they create more of a staged array of figures or natural elements if there's no dominant subject.

This gave away 11, 20, 31, and 43. I could tell 22 was by a person cuz there's a semi-prominent human and plant subject, and they're both rendered Impressionistically along with the landscape. However, 38 and 45 do not look like photographs in composition, and have the same approach to detail throughout. 38 is a little wacky in its subject matter, with fairly crisply intact ruins amidst a sprawling pasture, and maybe the level of detail on that building is a bit too much compared to the landscape and figures, but it's not as obvious, and the figures are pretty blurry.

44 was the only one that really got me, glad to know it got everyone else too. Kinda photographic in composition, but could easily be by a painter as well. Everything rendered in blurry brushstroke blobs, nothing is contradicting that with sharp focus. The presence of multiple people is not triggering the high-detail tendency for portraits. And the arrangement of them looks somewhat staged for dramatic effect, not a typical photograph. Very consistent and coherent stylistically.

Well, one getting through is just a fluke, as far as I'm concerned. By random chance the algo didn't do the many wrong things it is tempted to do. And if you could somehow spell out what is different about this one, to try to replicate it, it'd need so much more complexity in its instructions, that it wouldn't be worth it -- over-fitting the data.

I also missed the most commonly misidentified human picture -- 25. It has that wacko subject matter that makes you think it's AI. And the insane level of detail on the front of the ship (but far less on the bottom), along with the blurry / misty right and left sides of the landscape (including the smaller ships on the right), shows the contradictory styles typical of AI.

I wonder if this one was purposely made to resemble AI. If so, that still proves the larger point -- humans are better at imitating AI slop, than computers are at imitating human art. We are superior to them, so we can understand them and imitate them better than vice versa. Their output is a subset of ours, so it's interpolation and valid when we imitate them. Our output is a superset of theirs, so it would be extrapolation and invalid for them to imitate us.

The only human one I was fairly convinced was AI, was 30 -- there's such insane photorealistic detail on her dress, far less detail shown on her face, almost none on the walls of the room, and fairly low-detail on the scene outside the window. I don't think this painter from the turn of the 19th C was trying to imitate AI -- she was just obsessed with painting the details of a dress, and the rest of the composition was an afterthought. Not a very coherent portrait or mini-landscape through the window.

* * *


So, the main points remain. People are much better at identifying AI from human art than just coin-flipping -- even when the really egregious examples are removed.
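As an aside, "better than coin-flipping" is easy to pin down formally. A minimal sketch with entirely hypothetical numbers (the actual survey counts aren't reproduced here): the one-sided binomial p-value for calling k images correctly out of n when chance accuracy is 50%.

```python
from math import comb

def binom_p_value(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p) -- one-sided upper-tail p-value
    against the null hypothesis of pure coin-flipping."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 38 of 50 images classified correctly.
print(binom_p_value(38, 50))  # a tiny p-value -- far beyond chance
```

With numbers like these, the chance-guessing explanation is dead on arrival, which is the point: even stripped of the egregious examples, people reliably smell the stylistic incoherence.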

And crucially, AI models do not have a single mind of their own, like people do, so they frequently violate the fundamental rule to maintain coherence of style within a single work. It's so fundamental that most of us probably didn't even consider it necessary to spell it out explicitly -- like, what other approach would you take, clashing and disjointed styles? Computers are too analytical and slicing-up and zooming-in, not holistic and gestalt-oriented enough to appreciate what coherence, unity, and harmony among parts are.

Presumably they would do the same with a verbal medium -- parts of it would be verse with a strict meter and rhyme scheme, while other parts were dull drab prose. Or where entire paragraphs are dull drab terse prose, then others are highly ornate and full of figures of speech with sentence diagrams that look like someone smashed your windshield with a tire iron. You'd wonder whether the person had a schizo episode while writing a single chapter / story.

But verbal media are more serial, not as all-at-once parallel in processing. So harmony among elements isn't quite as salient of a property of speech as it is of images. IDK what AI story-slop reads like, but at least on the visual side, its overly analytical schizo nature really comes through, and accounts for why we refuse to count so much of it as decent or good art. It didn't even fulfill one of the most basic requirements -- stylistic coherence!

And again, I don't care how many trillions of parameters they add to these models to make them less ridiculously off-putting. That's over-fitting the data. And it's certainly a worse model to choose than "give prompts to a human artist" -- way less explicit detail needed there, cuz so much is already built-in to human nature, as well as during their training.

But something like stylistic coherence is too obvious and universal and unspoken to be picked up during training. It's part of innate human nature, and machines will never possess that, without ever more risible degrees of complexity-explosion. Sad!

November 7, 2024

Unstolen election mega-thread

Just re-posting two initial comments here for now to get the ball rolling, will add to it in the comments as usual.

* * *


Why didn't Dems steal it this time? Well, Dems were promising to steal it -- the state election boards in battleground states, the media, and Obama himself on the campaign trail.

Why didn't they this time? Perhaps the election steal of 2020 was part of the broader civic breakdown of 2014-2020 -- most of which was marked by political violence, hostile rhetoric, etc. Stealing an election is not physical violence, or even heated rhetoric, but it is hyper-competitive, antagonistic, anti-social, etc.

It was also part of the broader hostile crusade by woketards, like censoring and deplatforming everyone during the 2014-2020 abyss. That's also hostile, anti-social, war-like, etc., but not physically violent.

This is part of the Peter Turchin 50-year cycle in civic breakdown, whose last peak was the late '60s and early '70s, then the late 1910s and early '20s, late 1860s and early '70s, a missing explosion circa the late 1810s and early '20s (which was instead the Era of Good Feelings), and another burst around the Revolutionary War of circa 1770.

It's a kind of energy that builds up, and then dissipates, over a cycle lasting 50 years, or 25 years in either direction.
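The arithmetic of that cycle is simple enough to spell out. A toy sketch using the dates above -- the start year, horizon, and function names are just illustrative parameters, not a model:

```python
PERIOD = 50  # years between peaks of civic breakdown, per the Turchin-style cycle

def peaks(start=1770, end=2080):
    """Predicted peak years of politicized zeal: one every PERIOD years."""
    return list(range(start, end, PERIOD))

def minima(start=1770, end=2080):
    """Predicted troughs: the midpoints between consecutive peaks."""
    return [p + PERIOD // 2 for p in peaks(start, end - PERIOD)]

print(peaks())   # 1770, 1820, 1870, 1920, 1970, 2020, 2070
print(minima())  # 1795, 1845, 1895, 1945, 1995, 2045
```

Note the schedule also generates the "missing" circa-1820 peak (the Era of Good Feelings instead), the non-partisan mid-1990s trough, and the projected 2045 minimum discussed below -- the cycle is a timetable, not a guarantee of equal amplitude each time.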

By 2024, it was already clear that the violent symptoms of this pattern had abated -- BLM and Antifa did not burn down half the country in '24, there were no roving executions of cops caught on camera like in the mid-late 2010s, Democrats didn't roam around assassinating Trump supporters for no reason and getting off with no bail, etc. Although there were 2 assassination attempts on Trump himself -- the violence hasn't gone to 0, but it's only 5% of what it was during the 2014-2020 abyss.

Libtards didn't even hold marches when the Supreme Court over-turned their sacred cow of Roe v. Wade in '22. There will be no pussy hat marches when Trump is re-inaugurated.

Twitter allowed itself to be bought out and taken over by Musk, which would not have been allowed in 2014-2020, and they submitted to the new orders about no more crazy censorship and ban waves.

So, the failure or unwillingness of Dems to carry out the steal this time must be part of that general dissipation of politicized zeal from its 2014-2020 peak (abyss). There will be no Russiagate, #MeToo, Resistance, etc. bullshit like there was during Trump's first term, during the peak of politicized zealotry.

I thought since stealing an election wasn't violent or confrontational, they'd still do it -- especially since that's what they were promising for the past few months, right up through most of election night, with Philadelphia halting their vote count early in the evening, waiting for the rest of the state to return their numbers, anticipating a steal. Who am I to second-guess the same message, from the same top-level figures, that was followed up on by a successful insane steal in the very last election?

The energy level declining across all dimensions -- violence, censorship, stealing elections -- is also bipartisan. There was WAY less zeal on the Trump side this cycle, compared to 2015-'16, and even 2020. No one is sincerely posting God-Emperor memes anymore, no one is champing at the bit to lay the first bricks in that Big Beeyooteeful Wall, which never got built the last time. And there's just been far less trolling and teabagging this time than in 2016, and certainly 2020 when it got stolen, preventing the teabagging.

Politicized zeal overall in American society has fallen off of its 2014-2020 explosive peak, and will reach a minimum circa 2045, which will be as non-partisan as the mid-1990s were 50 years earlier. Then the next explosion will happen in the 2060s and early '70s, and the cycle will keep on repeating...

* * *


Also a quick dunk on tech determinist dum-dums, who blamed / credited the explosive zeitgeist of 2014-2020 on newfangled tech (social media, smartphones, "meme magic," online in general).

Well, Americans are even more online than they were in 2016, yet the zealotry has fallen off a cliff after 2020, and will continue plummeting toward a minimum in 2045 -- all while Americans continue to be as online, or even more online, than they were in the 2014-2020 period.

That's the cross-temporal proof. Then there's the cross-sectional proof -- Japanese people have become more and more online since they first adopted the internet. Yet they have experienced no such explosion of politicized zealotry -- whether leading to violence, censorship, heated rhetoric, stolen elections, or whatever else.

All technologies are mere tools, indifferent to how they're used, and impotent to shape, channel, or nudge human societal systems or individual behavior. Rather, the dynamics of society and individual psychology lead to some people using some tech for some purpose in some state of affairs, and some others to use some other tech (or even the same tech) for some other purpose when they're in some other state of affairs.

Americans didn't need social media or the internet or online anonymity to carry out an equally explosive bout of zealotry in the late 1960s and early '70s, or the late 1910s and early '20s, or the Civil War or the Revolution -- or the civic breakdown of the 60s AD during the Roman Empire, when most people weren't even literate, let alone employing any communicative medium other than speech sounds coming out of the mouth.

When the cycle enters a crazy zealous phase, they use whatever means / media they have at their disposal, and when the cycle leaves the crazy zealous phase, they either use different media that have no stain of the zealous-associated media, or they use the same ol' media for a different purpose.

Technologies are utterly indifferent to how they're used, and they have no deterministic or even probabilistic influence on human behavior stemming inherently from themselves, at any scale (person, group, society, etc.).

January 23, 2024

Wide-ranging thread on shoot 'em up video games, vidya in general, and Japanese vs. American aesthetics

Might as well put a new post marker here, since the comments section for the last is getting a bit long. I'll be adding post-length-comments to this post, to make an ongoing thread.

The basic topic is shoot 'em up video games, inspired by watching Fuwamoco play a 2000s Touhou "bullet hell" game the other night. It is rare for non-Japanese people to play video games, rather than simulators, so I take notice and appreciate it every time it happens! But then, they're turbo-weebs, and you can't integrate yourself into Japanese culture without playing video games (created by the Japanese, with an illustrated, not photorealistic, style).

Below is the first "post in the comments" that kicked it off, which I'm putting here to get the ball rolling. More to follow in this post's comment section...

* * *


Frogger was the original "bullet hell" game -- not even appropriate to call the genre a "shooter" or "shoot 'em up" etc.

*You* are the one getting shot at, like crazy, and you don't shoot back -- you can only navigate your way through the moving geometric minefield of bullets, much like the frog navigates his way through the geometric formations of moving hazards, i.e. the vehicles that make up the several lanes of traffic moving in opposite directions, the alligator teeth in the river section, etc.

In "bullet hell" games, you shooting the enemies is only 5% of the gameplay, and it's like shooting fish in a barrel, after the difficult other 95% of gameplay has been performed -- i.e., dodging the bullet waves.

Frogger is only missing that 5%, but it would be trivial to program it in -- right before you land on the safe space at the end, you have to lash out your tongue to hit a dragonfly that's sitting in the way of the lilypad you're trying to land on.
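To make concrete just how trivial that addition would be, here's a toy sketch of the hypothetical mechanic -- no such feature exists in the real Frogger, and the class and method names are invented for illustration:

```python
# Hypothetical "bullet hell"-ifying of Frogger: the lilypad can only be
# claimed after a tongue-lash clears the dragonfly squatting on it.

class Lilypad:
    def __init__(self, has_dragonfly=True):
        self.has_dragonfly = has_dragonfly  # the 5% offensive obstacle
        self.claimed = False

    def tongue_lash(self):
        """The one offensive move bolted onto an otherwise defensive game."""
        self.has_dragonfly = False

    def land(self):
        """Landing on an occupied pad fails, just like hitting any other hazard."""
        if self.has_dragonfly:
            return False
        self.claimed = True
        return True

pad = Lilypad()
print(pad.land())   # → False: blocked, the dragonfly is still there
pad.tongue_lash()
print(pad.land())   # → True: now the pad can be claimed
```

The other 95% of the gameplay -- dodging the moving geometric minefield -- stays exactly as it is, which is the parallel being drawn.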

Surprisingly, no one has drawn this clear parallel before. However, the wiki on Frogger says that it was created explicitly to tap into the female demographic, as opposed to the highly popular shooter genre which girls were not very into (e.g., Space Invaders, Galaga, etc.). And they succeeded.

This may explain why "bullet hell" games are at least semi-common among female streamers -- Fuwamoco just played Touhou: Mountain of Faith, and Marine is a huge Touhou player and fan. They're more about fine-scale motion, not large-scale swerving and zigging / zagging, slow speed, not racing all around the screen, defensive rather than offensive, hide-and-seek rather than being aggressive and chasing down the enemies.

They still take a lot of spatial skill, so they're not very common among female players -- but if she does have spatial skill, this defensive and cautious style of playing is better suited to her personality, as opposed to an offensive and risky style that characterizes "shoot 'em ups" proper, which are for guys with spatial skill.

Then there are the bona fide "gamer girls" (not just empty branding) like Korone, who take on Salamander (Life Force in America), which is not only a shoot 'em up, but one of the hardest ones ever made! Much respect. ^_^

And yet even "bullet hell" games have lots of male fans -- it's part of the broader trend in video games towards taking away your offensive abilities, and making you passively hide-and-seek from an all-powerful enemy. Same time-frame as the survival horror genre, which largely robbed you of weapons and ammo (mid-'90s through IDK), and then took them away altogether (from IDK through the 2010s and '20s).

A Euro-LARP-ing pseud would use a fake & gay term like "slave morality," i.e. glamorizing the behavior of slaves. Gamer nerds call it "masocore", a more straightforward term. They're not slaves, they're just downers or masochists or hide-and-seekers, rather than aggressive, offensive, and active. It's a reflection of the broader end of our imperial expansion (and ditto for Japan's failed imperial ambitions), and with it, the end of the heroic age of our culture (and those in our orbit, like Japan).

June 29, 2023

Ancient aliens: America's divine intervention genesis myth about civilization and life itself

Having looked at the distinctly American genesis myth of our prehistory -- inhabiting the same land as dinosaurs and missing links, threatened by a volcanic rather than a diluvian apocalypse -- let's look at the other distinctly American genesis myth about our even deeper history. How did life itself ever come to be on Earth? It's actually the same myth regarding the birth of terrestrial civilizations, at a far later stage of our species' history -- being seeded by aliens!

In contrast to the creation myths of most cultures throughout the world and over time, ours does not dwell on the creation of the Earth itself, the stars, sky, oceans, and so on and so forth. You can believe in the Abrahamic universe-creation myth of the Old World, the Big Bang, or whatever else. Those inanimate things are taken for granted. What we really want to know is, how did life begin and get to where we human beings are today? And for us compared to other animals, how did civilized societies begin and get to where they are today?

The myth is not interested in evolution as much as the initial birth from apparent nothingness. Notice that the "cavemen and dinosaurs" myth doesn't say where primates came from -- they're just there, in medias res of their drama. And the myth about the origins of life itself doesn't concern itself with any particular species that is present far later on, human or otherwise. Evolution is boring, while creation from nothing is interesting.

This is another stark contrast with the Old World creation myths, where human beings are created in their more-or-less current form (e.g., Adam and Eve). Sometime in the distant past, a creation of some kind occurred -- whether it was creating life where there was none before, or primates where there were only non-primate animals before, or hominids where there were only apes before, or human-like cavemen where there were only missing links before.

Somehow -- it doesn't matter how -- that initial creation led to us here today. We did evolve from earlier forms, but how that happened is irrelevant. How far back does the creation process go? And who if anyone was in charge of the initial creation?

Notice that this creation myth accommodates the 19th-century debates on the evolution of human beings. Not being an Old World culture, we never felt very threatened by the idea that homo sapiens evolved from earlier primate forms, rather than being created as we are now, back in the Garden of Eden, according to Abrahamic myth which took root in Europe during the Middle Ages via Christianization.

We have never had a national church, de jure or de facto (although during the mid-20th C., the United Methodist Church came the closest). Nor, therefore, any hierarchy of national church officials who could enculturate Americans in the Genesis creation myth. And no, contrary to clever-sillies, nothing is a "church" outside of Christianity. Academia is not a church, and the two most popular creation myths held by the general public -- Genesis for Christians, ancient aliens for non-Christians -- have taken deep root *in spite of* constant pressure by the hierarchical officials in the schooling sector to kill them off.

Nor is civic philosophy and dogma a "religion", let alone a "church". Church refers to a Christian institution, in contrast to mosques for Muslims, temples for Buddhists, etc. And all stripes of American civic philosophy and dogma are entirely silent about creation -- of the Earth, of life, of homo sapiens, etc. There's no primeval narrative of how things began, let alone one bringing supernatural or at least more-than-human actors and supervisors into the cast of characters.

And so, because we're not committed to where contemporary human beings came from, we can avoid the whole controversy arising from Darwin, who only says how things evolve once life-forms have existed, not whether or not there is a first created form of life and how that came into being. That controversy vexed all Old World religions, but not ours -- we're so new, we could just build in an agnostic stance regarding evolution at the beginning!

The Mormons -- America's global religion -- are also famously equivocating on evolution, with high officials officially saying don't ask, don't tell, it doesn't matter. What matters is the creation of life, the creation of god-like beings, the creation of civilizations in the New World, the appearance of Jesus in the New World, and so on and so forth. Don't worry about whether or how today's human beings descended from earlier primates.

Our creation myth also avoided the controversy about the Big Bang vs. static universe from the early 20th C., right as our myth was starting to take shape. Ours is not about cosmogenesis, unlike many other major religions and folk cultures, including Christianity. We could already sense that controversy as it was developing, so we built in an agnosticism about it from the outset. Only focus on the creation of life, humans, civilizations -- not the universe itself, stars, planets, and all that other inanimate and non-societal stuff.

* * *


The ancient aliens myth only began -- when else? -- during the 1890s, after our integrative civil war was wrapped up, and our ethnogenesis could get going for real, as in the lifespan of every empire. And where else could it have been born but out West? -- Flagstaff, Arizona, to be exact. Although hailing from a Boston Brahmin family, Percival Lowell used his wealth to build a world-class observatory in Arizona, where viewing conditions would be superior to those back East -- but also because it would be more Romantically American to explore the next frontier of outer space, from our defining meta-ethnic frontier out West (against the Indians and later Mexicans).

Although later famous as the site that discovered the ninth planet Pluto, whose existence was predicted by Lowell, it was initially dedicated to the study of Mars -- specifically, what Lowell thought to be its canals. The overview of his vision of Mars can be skimmed in the Conclusion section of his book Mars (1895).

The canal structures suggested that not only was there water on Mars, there was life, it was intelligent, and it was advanced enough technologically, and organized in a socially complex way, as to complete irrigation projects.

If anything, he thought they were more advanced than anything on Earth -- inventing and using technology far beyond our own, and rising above petty partisan politics, to undertake such a planetwide project. He says that human beings are not even the highest of the mammals, putting us in our lower place relative to Martians. And he says Martians and their civilizations are far older than ours, Mars being an older and dying planet. These elements of the narrative are all necessary for the next step, where they intervene in Earthly matters.

He does explicitly state that life on Mars will likely have evolved into different forms from life on Earth, owing to the different environments they're adapting to. But that doesn't contradict a belief that they could have visited us in the past, seeded our civilizations, or even seeded life itself on Earth. It only requires them to have a somewhat different superficial form, and that we were not made entirely in their own image -- rather, at the abstract level of "life-form" or "intelligent life-form" or "civilizational being".

Although Lowell didn't go that far in his non-fiction work, a contemporary of his -- also a popularizing astronomer -- did in an early work of science-fiction, Garrett Serviss' novel Edison's Conquest of Mars (1898). Here, Martians are hostile to Earth, engaged in a War of the Worlds kind of battle with it. During one of their missions to capture slaves from Earth, 9000 years ago, they built the Great Pyramids and the Great Sphinx of Egypt (the Sphinx being made in the image of their leader).

While the Earth-battling Martians hardly resemble the benevolent steward / supervisor gods of later versions of the myth, this is still the beginning of the myth of ancient aliens directly intervening in the course of events on Earth, seeding a major civilization.

And true to our Europe-obscuring identity, Serviss located the ancient alien intervention in Egypt, not even an Indo-European culture like the Greeks, Romans, Celts, etc. That would have been too much of a Euro-LARP, so if it has to be set in the Old World, it must be within the Saharo-Arabian sphere (Egypt, Israel, Mesopotamia, etc.). This was decades before the Egyptian craze of the 1920s -- it's simply the most obvious solution to "Old World civilizational ancestor of America that is not related to Europe". The only others would be from the Far East, and that's too much of a stretch of the imagination, compared to the Fertile Crescent.

If you're an American, and want to learn a dead language to study our civilizational ancestors in the Old World, you want to learn hieroglyphics, cuneiform, or maybe Biblical Hebrew / Aramaic -- not Greek and Latin (back-East Euro-LARP). I'm sure the Saharo-Arabians find this imagined heritage of ours comical -- "you Faranji people come from Europe!" But we are American, and Americans are fundamentally not European, so no, we do not come from Europe. Where else could we have derived from in the civilized Old World? -- China? C'mon, the Fertile Crescent is far more believable than China...

* * *


After the European empires, aside from Russia, bit the dust after WWI, and became occupied by America after WWII, the American myth of ancient aliens began to take root in Europe as well. This process reached maturity by the late '60s, when Erich von Daeniken wrote Chariots of the Gods? It was soon made into a feature-length documentary movie, whose English dub you can watch on YouTube here.

This is far and away the best audio-visual telling of the narrative, with amazing photography, ethnographic portraits, voiceover, and conveying the sublime nature of the archaeological record. It's superior to the more plodding, meandering, and less artistic renditions associated with Rod Serling from the same time period (In Search of Ancient Astronauts, In Search of Ancient Mysteries, and The Outer Space Connection, all available on YouTube as well, but you can stick to the last one, which incorporates the first two).

I think von Daeniken being Swiss was important, since he was not part of a collapsed empire, and was not subject to the hangover effect that had wiped out native cultural innovation in the collapsed Euro empires. Similar to Le Corbusier in architecture, who was a footnote to the American pioneer Frank Lloyd Wright of many decades earlier, yet still more original and influential than the Bauhaus people from Germany and Austria (like Mies van der Rohe and Marcel Breuer).

You can tell how well the Europeans had incorporated the American framework by their avoidance of their own European ancestors. The focus is on ancient Egypt, Israel, Mesopotamia, and New World cultures like the Maya, Tiwanaku, Easter Islanders, and so on. Nothing about China, nothing about Greece or Rome. The book, but not the movie, does include Stonehenge among its examples. Indeed, in the movie there's only a single passing mention of any Indo-European culture -- purported descriptions of ancient astronauts in the Ramayana of the Indo-Aryans.

From the ancient aliens narrative, you'd hardly know that there were people and civilizations in Europe during ancient and Medieval times! But that's unsurprising given its American origin.

Some local adaptations did work in their own history, such as the British movie Quatermass and the Pit (1967), in which contemporary people discover a Martian spaceship in the London Underground from millions of years ago, along with skeletons of primate ancestors just as old, the preserved remains of the insectoid Martians, and the revelation of Martian intervention in the evolution of the hominid lineage on Earth. That could be totally American, but the story also uses this Martian spaceship's effects to explain historical accounts of the devil, spectral phenomena, and other witchy goings-on -- within England, during the Medieval and Early Modern periods.

* * *


How about further back, to the creation of life itself on Earth? This view, strangely titled "directed panspermia", goes back to an American and Soviet collaboration (as in many other areas of 20th-C. culture, the only two empires left standing coincided, both sharing outsider status vis-a-vis the Early Modern Euro empires that defined high culture up until then). Namely, the astronomers Carl Sagan and Iosif Shklovsky, whose 1966 book Intelligent Life in the Universe raised the possibility that extraterrestrial life-forms could have purposefully delivered life to Earth.

Where *those* life-forms are supposed to come from, who knows? And who cares? The genesis myth is only meant to account for the ancestry of us, the story-tellers, and perhaps our fellow animals. Just as we are not interested in cosmogenesis, we aren't interested in whether the alien race that seeded life on Earth was itself seeded by a third alien race, and if there was a prime mover alien race, and so on and so forth.

Likewise, American culture is not really concerned with the other direction of panspermia, whereby we would seed life on other planets. That is about our future, whereas this concept is really to account for our distant past.

For my money, the best telling of this myth is the 1993 episode of Star Trek: The Next Generation, "The Chase" (from the amazing season 6). It's not just a high-concept "what if?" story, but brings to life the excitement of high-stakes archaeological fieldwork, collecting clues, solving puzzles, and trying to stay one step ahead of your competitors in the race to the finish. This version is about the spread of humanoid life, not life in general, but that is to keep the focus on the ultimate subject of narrative interest -- us, not plants or viruses or whatever. If aliens could seed humanoid life, certainly they could send mold spores to other planets as well.

* * *


Redditards, Wiki-brains, and other midwits love to deride the ancient aliens creation myth -- creation of life itself, of humanoids, or of civilization -- as a "pseudoscientific hypothesis" or "conspiracy theory," terms that they never use for Adam & Eve, Noah / the Flood, the World Tree, Persephone and the harvesting cycle, and so on. By now, so many Americans believe the ancient aliens story, or are at least open to the possibility, that it cannot be a hypothesis -- common people don't know what a hypothesis is, how to test it, how to analyze results, weigh in on counter-arguments, etc. It's a story that you believe or don't, and science has nothing to do with it.

None of the most popular entries in the genre present the concepts in the manner of a scientific method, experiment, etc. On the surface level, they're trying to make sense of seemingly unbelievable phenomena, while on a deeper level they're trying to connect us with our distant ancestors through narrative, myth, and storytelling. And as such, there's little that "science" can do to push or pull anyone.

Very few people have "beliefs," let alone a system of beliefs. It's not about belief, in the sense of a theory. It's about whether the story gives meaning to that person, not as an individual, but as part of something larger than themselves -- connecting them to their distant ancestors, the chain of transmission up to the present, and the universe beyond our own world. It's more about emotional and social and cultural satisfaction, which nerdy arguments, "data", etc. cannot move one way or the other.

Exactly like Adam & Eve, Noah and the Flood, and other such myths from the Old World. It's just that, as with most clueless back-East academics and media-ites, they deny that America is a different culture from anything in the Old World. But just cuz we're a young civilization, doesn't mean we aren't distinctive, and these various origin myths -- Cavemen and Dinosaurs and Volcanos, Ancient Aliens, and the Book of Mormon -- are all a testament to that. They're as American as burgers and blocky buildings.

The rAtiOnAL SkEPtiCs who think they're smart or insightful for trying to deboonk origin stories involving aliens, are the same who labor fruitlessly to convince Americans that cavemen and dinosaurs never lived at the same time (somebody's never watched the Flintstones), that there was not a worldwide flood that destroyed all life except for Noah's Ark, etc.

The haters' arguments require no math, problem-solving, pattern recognition, specialized knowledge, breadth of knowledge, or anything like that. Any idiot can make them -- and plenty of total numbskulls and ignoramuses do.

What they are is autistic, not able to empathize with normal human beings, who have a deep need for the social / cultural / emotional satisfaction of belonging to something beyond their individual personal private self, across both time and space. Autists have a broken social lobe in their brain, and being incapable of empathy, they project their broken social lobe onto everyone else as well.

"Why would anyone want to feel connected to others across space and time? Nah, they must be making scientific-method claims subject to experimental testing..."

There's a heavy overlap between know-nothing rational skeptics and libertarians, both highly autistic and clueless. Libertarian morality is only about "avoiding harm and fraud", excluding matters of purity, sanctity, and taboo (Jonathan Haidt, The Righteous Mind). So when they see a sacred narrative, they don't mind pissing all over it -- not as a vindication for their side of a debate, since there is no debate. They're cluelessly assuming the other side is involved in scientific claim-making, rather than cultural bonding through narrative and myth.

This is why no one regards them as smartypants or intellectuals, who happen to use their big brains for sacrilegious purposes -- they're just clueless midwits or dum-dums. It takes no IQ to piss on something sacred, it's entirely a matter of attitude.

And like typical self-centered semi-children, they pat themselves on the back for how clever they are, when it's only a matter of their attitude, not brainpower or knowledge, which are middling and spoonfed from some online midwit clearinghouse / group chat like Reddit, Wikipedia, etc.

Normal-brained Americans will keep alive the stories of "When dinosaurs towered over cavemen," "When Martians visited ancient Egypt," and the like.

June 7, 2023

Disney World's Brutalist and primitive futurist origins

Although discussion of Brutalist architecture in America, where it was born, focuses only on its more elevated settings -- civic buildings, libraries, universities, research labs, and so on -- it was just as widespread a style in suburban office buildings and malls. Before getting there, though, let's take a quick look at another mass-market, working and middle-class, all-American, consumer-driven setting, to establish how popular and populist it was -- not at all an elitist style reserved for ivory tower eggheads.

Disney World itself was founded on Brutalism in 1971, in the form of the Contemporary Resort, which was offered along with the Polynesian Village Resort in order to hit both the primitive and futuristic themes that define American cultural identity. Notice the continuation of the Midcentury tiki / Googie theme of Polynesia in particular to stand in for "New World primitive" as opposed to various Old World primitive environments.

And yet, even the Contemporary has a pyramid-esque shape -- albeit stepped only side-to-side, not also front-to-back like the later Luxor in Vegas -- to evoke New World ancient civilizations like the Maya. This continued another enduring theme in American culture, using the Maya instead of Rome or Athens to represent the RETVRN to ancient times. The gigantic mosaic inside the Contemporary also depicts New World native cultures, to reinforce the combined theme of "ancient and futuristic, entirely within the New World".

These were the only two places to stay, and set the tone for the entire amusement park. For extensive picture galleries, along with verbal histories that you can skip if you just want the overall impression, see here and here for what it was like during its New Deal utopian heyday (and here for how it has evolved since then). Then there's this old promo, which showcases both resorts until the 4-minute mark, and this old home movie from the same time.

There are shots of the exterior, interior atrium, leisure spaces, the Midcentury Modern rooms, and the Top of the World Lounge -- we'd usually associate being on top of the world with an unstable equilibrium, a delicate balance, not a place for a carefree lounge. But this was the Midcentury American utopia, so nothing sounded more natural than lounging around at the summit of existence. Just like the SkyCity restaurant, calmly revolving at 500 feet up the Space Needle tower in Seattle, built less than a decade earlier.

Much of the finer details of the original Contemporary atmosphere have been steadily adulterated during the neoliberal era, but we cannot judge Brutalism for what it was corrupted into later -- only by what it was.

If you never got to experience such a place during the good ol' days, including those that kept going even during the neoliberal era, nothing can prepare you for it. The warm color palette, the plush carpeting, the simple-not-busy geometric lines and arrangements of elements, the dark cozy intimate lighting, the lush vegetation and water elements, not to mention the futuristic atmosphere -- nothing could make us feel so welcome, so integrated, so much a part of a singular utopian American culture.

Notwithstanding the mixture of primitive and futuristic within the Contemporary, and within the park as a whole once you include the Polynesian, it was the monorail transportation system that decisively tilted the balance in favor of the futuristic and Brutalist theme. It had concrete supports, sleek cars with streamlined profiles, and dark tinted glass windows, with simple bands of warm colors on the shell to make this futuristic mode of transport feel lively and exciting rather than cold and utilitarian.

Integrating the monorail system so that it traveled right into the main concourse / atrium of the resort only heightened the futuristic feel -- who ever saw a train pull right up to the base of their residence, so they don't have to hike, hop a cab, or drive to the station? It was not merely a matter of convenience and efficiency -- it proclaimed that this is a utopia, where there are no trade-offs from a single rail system having to service a wide network of residential areas. Everybody was staying at the Contemporary compound, so there was no need to build a station between it and dozens of other neighborhoods, towns, and cities. The resort was so removed from competing residential sites that the public transit could almost pull right up to your front door!

Nobody among the blinkered Bauhaus blackpillers could've dreamed up such a visionary utopian thing.

In fact, the Contemporary was designed by prolific architect Welton Becket, who was at the time participating in the Brutalist movement (Xerox Tower and the Gulf Life Tower, just a few years earlier). It was only natural for Disney World's inspiring foundational resort to be built at monumental scale, out of concrete, shaped as though it were a single large sculpture, casting an imposing and sublime presence from the outside, while filling the interior with a warm, lush, sophisticated, and dynamic atmosphere.

This was standard practice for Brutalism, and all complaints about how cold and alienating it is come from people who have never explored the interior of these buildings that are austere fortresses on the outside, but soothing and even sultry social happening-spaces on the inside. Perhaps they are not quite so seductive nowadays, after decades of neglect and outright desecration, but then it's your responsibility to see what it was actually like when it was created.

Haters of Brutalism never show the "before" pictures or the interior pictures, because that would blow up their arguments for why these structures must be demolished and replaced with fishbowl flex-spaces instead (barf-o-rama). That's why I linked to those other sites with extensive galleries -- to set the record incontrovertibly straight.

Steadily over the course of the neoliberal era, Disney World has headed toward making every attraction, resort, etc., a branding opportunity for pop culture figures. But Disneyland and Disney World, when they were under Walt Disney's New Deal vision, hardly included Disney characters, or any characters from outside the park, except as an afterthought.

These parks were built to celebrate America's past, present, and future as a unique and special civilization and culture, and the rides and resorts reflected that purpose. Be sure to watch the entire promo video linked earlier, "The Magic of Walt Disney World" from the early '70s, to see what all it encompassed -- and what it did not include even remotely.

There is nothing more all-American than Disney World, and the fact that a Brutalist style was chosen for its foundational resort reflects the sense of marvel and wonder that Americans felt in the presence of buildings in that style. It was not an unwanted oppressive style foisted on them by PhD's -- it was a style that resonated with their desire for a monumental expression of the utopian zeitgeist of the Midcentury, as the American Empire had reached its all-time peak, or perhaps plateau.

And they did not have to travel to Ivy League campuses to enjoy it -- it was built for them in their own neighborhoods, and at affordable mass-market tourist destinations. There was nothing stuffy or elitist about it -- it was enshrined at literal Disney World!

May 30, 2023

Exposed concrete: the American architectural style's defining material, from Frank Lloyd Wright through Brutalism and beyond

I had no idea how backwards the history of architecture & design from the 20th C. and after has been, until I began researching American ethnogenesis and its cultural reflections. This has led me to an Americanist defense of Brutalism, which will be an ongoing series.

The standard cluelessness from back-East academics (and their media-ite confreres), who are trapped in the least American region of the country, is that there is no such thing as a distinctive American culture, and that we inherited or imported everything from the Old World, primarily the Early Modern empires of Western Europe -- including in their degenerate collapsing stages, such as Cubist paintings and Bauhaus architecture and design.

The reality is that American pioneers beat the stultified Europeans to the punch, usually by several decades, and that Americans developed the superior standard of that form, whereas the Europeans could only manage an inferior copy of it, or didn't adopt it at all.

That's not a knock against European culture -- they just had their ethnogenetic heyday centuries before we did, so they already developed their own impressive standard forms. And as we see now, as the American Empire enters its degenerate collapsing stage of life, we too will become stultified non-creators having to either preserve / revive our previous foundational styles, or try to imitate others around the world if they are dynamic.

However, there are no other ascendant empires in the near future, undergoing an intense ethnogenesis, so there is in fact no one else for us to copy, as the Europeans finally managed to do with Midcentury Modern design (imported from America during the Pax Americana). So that leaves Americans with the task of preserving, reviving, canonizing, and celebrating what we have already made, and to limit any degenerate and warped extensions of it during our collapsing-empire stage of life.

* * *


One major example of the backwards thinking about 20th-C. architecture & design is the nature of Brutalism, which the received cluelessness of back-East cerebrals holds to be European. They may bicker over whether its parent is Swiss (Le Corbusier) or British (the Smithsons), but it's definitely -- and distinctly -- European, in their view. And they place the birth in the post-WWII 1950s time period.

They never overtly argue against an American origin, and not for cynical reasons -- like, they would have to give up their silly initial views -- but because "American culture" is simply a non-force in their model of historical dynamics. Because America has no culture of its own, it could not have influenced anyone else, let alone the Europeans, whose combined forces exceed everything else out there. So why even bother exploring that hypothesis?

As far as the time period of its birth, they might allow an earlier "influential" stage -- as long as it were European, e.g. Bauhaus practitioner Mies van der Rohe in the late 1920s (Barcelona Pavilion). They would never entertain the possibility of an American influence in that decade, let alone earlier -- earlier, in fact, than any other European contemporary in a Modern style.

But just cuz back-East ignoramuses wear these ideological blinders, doesn't mean we have to. We owe no allegiance to a sector of society whose raison d'etre is supposedly "figuring things out," yet who not only come up with the wrong answer, but sanctify it into unarguable dogma. Nor do we owe cultural deference to anyone from back East, the black hole of culture in America. They simply do not get American culture, and perhaps have never been exposed to it in their lives, outside of movie portrayals -- or a visit to Disney World, but that's the topic of another post on primitive futurism in American design, and Brutalism specifically.

* * *


While the exact criteria for Brutalism may vary somewhat, most people from any background agree on the central role played by the materials used -- and in particular, concrete, especially if it is exposed, i.e. not clad behind marble, ceramic tiles, brickwork, stucco, heavy coats of paint, or other materials that would disguise what the building is mainly made out of. It also cannot be assembled in such a way as to suggest it's not concrete -- e.g., if concrete is poured into individual blocks the size of traditional stone blocks, and those blocks are laid as in traditional masonry. That would be concrete imitating or disguising itself as masonry.

That is what this post will focus on, not other aspects of the style -- but those are distinctly American in origin as well, which contrast with European traditions, and which were pioneered in America long before they caught on among the avant-garde in Europe who were trying to rebel against their own centuries-old traditions (which we were not encumbered by in America, being a young nation undergoing ethnogenesis). For example, the blocky assemblage of masses, the rectilinear nature of lines, the relative sparseness of superficial ornamentation, the rough-hewn nature of shaping mass rather than delicate finesse -- these all go back to Chicago in the 1890s, not Berlin in the 1920s.

And so it is with the use of exposed concrete as not simply a utilitarian building material, which could be hidden by other ornamental materials, but as a surface-level one contributing to the aesthetic value on its own.

We'll start our exploration by exploding two related myths from the clueless back-Easterners -- from both the fanboys and the haters of the style. One, that Brutalism was an elitist style that only college graduates appreciated, or that was confined to their everyday territory. And two, more importantly, that it was pioneered by Europeans in the 1950s.

If you went to any park anywhere in America over Memorial Day weekend, you likely saw one of these, a drinking fountain made of concrete with its aggregate exposed, and whose metal parts are given a gleaming chrome finish, making it a textbook example of primitive futurism, something that looks like it's partly from the Stone Age and partly from the Industrial or Space Age:


It does not resemble European drinking fountains at all. They use metal (stone if fancy), and work it into fine-level shapes. The American style requires a more blocky, pure simple geometric volume, and the avoidance of European materials -- because we are not European, and had to create a new material for our new culture in our new empire.

Technically, the Romans created concrete 2000 years before we did -- and they did leave it exposed as an architectural / aesthetic element, and they even used it in a lattice of repeated simple geometric shapes (the coffered ceiling in the dome of the Pantheon, which the vaults of the Brutalist DC Metro stations perfectly resemble). But they did not expose the aggregate -- theirs looks like fairly smooth concrete, while ours has all those small pebbles adorning its surface.

Concrete is somewhat like masonry, where a large number of solid stones are held together by a connective network of binding material (cement for concrete, mortar for masonry). The first main difference is the scale of the stones -- pebbles you can pinch between your fingertips, vs. stones hefty enough that you can only hold one in your hand.

And the assembly is totally different -- masonry lays down the stones (with or without mortar) in a planned, calculated, deliberate fashion. They don't have to be of uniform size and laid in a simple pattern (like rows of uniform height), but their placement is deliberate as each stone is set into the whole assembly. Whether you're looking at a brickwork facade of a house, or the impenetrable walls of Machu Picchu, you can tell that the arrangement of individual stones into the whole was decided by human actors the whole way through.

The placement of individual stones within concrete is the opposite -- not even a single one was deliberately placed where it is, after deliberating about the others around it in the existing whole and where future ones would be placed after it. Rather, the stones are mixed up like balls in a hopper during the mixing process, and as the whole composite mass is poured (or sprayed or whatever else), the arrangement of stones does its own thing before settling into its hardened final state. Workers are not intervening to move this stone here, that stone over there, before the whole thing hardens. They wind up wherever they wind up.

And so, although the whole thing was made from human civilized technology, it has the look and feel and impression of a natural rock like sedimentary conglomerate. It doesn't look artificial because it is not artificial -- we introduced natural randomness during the mixing process, and did not intervene during the pouring and hardening process. It's somehow natural and the output of human technology at the same time -- maybe geological husbandry, like animal husbandry, not designing animals in a laboratory or factory.

At any rate, when the aggregate (the small gravel stones) in concrete is exposed, it looks like a Stone Age material, not an Industrial Age material -- not even a Metal Age material. It looks just as prehistoric in age, natural in formation, and organic in shape and texture, as traditional rocky materials like marble, granite, etc. But it's actually new, created by America -- not even the Romans exposed the aggregate like we do. We needed an ancient material to establish our primeval connection to this land, so we invented one that did just the job!

* * *


These days, you can't go to any public space in America without seeing at least one example of exposed aggregate concrete -- drinking fountain, trash can, cigarette ash receptacle, wall / column support, bench, sidewalk / pavers, curb, etc.

And you *won't* find those things in Europe or anywhere else in Ye Olde Worlde. Theirs are made out of metal or stone.

This material is not only distinctly American, it is ubiquitous in America. We take it for granted that any random strip center in any ol' American suburb will have a trash can made from this material, and that the drinking fountains in the same suburb will be made from it as well. No material is more all-American than exposed aggregate concrete.

This also shows how populist and popular the material is -- it is not restricted to elite university environments, appreciated only by eggheads, or expensive to use. It's very affordable, suitable for mass use.

In fact, as I mentioned earlier, the desecration of the American architectural traditions and standards, especially the anti-Brutalist iconoclasm, has been a crusade led by the professional class for the professional class, in blue states and blue cities, by government bureaucrats and academics and pharma research labs, and by women rather than men. It's every conceivable demographic that lives in order to carry out the will of the neoliberal Democrat party.

The only wrinkle is the meta-ethnic frontier one -- West Coast Democrats are far more conservationist of American culture than East Coast Democrats (Boston / Massachusetts being ground zero for Brutalist demolition). They're closer to the historical, defining frontier against the Indians, while the back-Easterners were never shaped into Americans by that frontier, so why would they want to preserve its cultural output? They're pseudo-European, and they want their culture to be that way, and stay that way.

* * *


However, we can't say that these ubiquitous concrete drinking fountains owe their existence to Brutalism -- that was just one stage within American architectural ethnogenesis. It goes farther back -- back to Frank Lloyd Wright himself! It's amazing, I don't plan to discover his foundational influence in everything I look into (like the swivel chair and cantilever chairs generally), but he really was America's first-mover genius. American architecture & design is just footnotes to Frank Lloyd Wright -- and that includes all areas absorbed into our empire over the 20th C., like Europe and Japan.

The work in question is the Horse Show Fountain from -- where else? -- Chicago, dating back to -- when else? -- 1909. Not Berlin, not London -- and not New York, for that matter. Not 1919 or 1929 or 1939 or 1949. Both the original and the current replica (made in 1969) are made from reinforced concrete, which is not clad behind any other material. It's a drinking fountain, for people and originally horses too.

Although the Wiki article claims that only the current replica has the grainy exposed aggregate surface (ubiquitous by the '60s), a gallery of images of the original, both photographic and illustrated, makes it look about as aggregate-y as the later replica. Maybe in some areas more than others, like around the edge of the basin, where there are square indentations, but still, it doesn't look radically different or perfectly smooth.

And in fact, Wright used the exposed aggregate finish in the same year of 1909 in the same city of Chicago, for the Unity Temple. So, hardly a stretch of the imagination to believe the original fountain had some exposed aggregate as well.

Before getting to the Unity Temple, though, we have to consider earlier structures built elsewhere in America and Europe that claim to be the "first concrete / reinforced concrete buildings".

In 1853 in Saint-Denis, France, François Coignet built a reinforced concrete house -- but the concrete was not exposed as an architectural element. It looks like it was covered by plaster (now peeling off in sheets), which was then painted. Because it did not take the material in a bold new direction, it spawned no imitators or movement within France. If you wanted painted plaster on the facade, you didn't have to use concrete underneath it -- any material from the French tradition would do.

Then in 1873, using a process designed by Coignet, the Coignet Stone Company Building in Brooklyn, New York, used concrete blocks without any cladding. However, because they were cast into blocks meant to resemble the cut stones of traditional masonry, and then either laid into place in arrangements also taken from traditional masonry -- or poured into molds meant to mimic that arrangement -- the concrete doesn't really show itself. A viewer who didn't know beforehand would probably think it was any ol' stone building. This is apart from the overall style being a Euro-LARP-ing style rather than a new American style. The raw material itself, and its assembly into the whole -- whatever the style -- does not look new or different from European stonework.

The William E. Ward House, from the same time period and metro area, has the same problems qualifying as a "first" in an ethnogenetic sense. It is made of reinforced concrete that is not clad behind another material, but the material has either been cut and laid into place, or poured into molds, so as to resemble the processes of traditional masonry. On the lower two stories, the corners where walls meet have simulated quoins -- the most glaring example of trying to disguise its concrete nature as traditional masonry. Again, this is apart from the matter of the overall style being a Euro LARP.

The Highland Cottage from the same time and place has the same problems, and then some. Aside from simulating traditional masonry, the concrete is faced in stucco. Unlike the Ward House, this one is not reinforced concrete. The Coignet Stone Company Building has a reinforced basement, but not above that level. Wright's fountain and Unity Temple are reinforced. However, I don't think reinforcement is central to the development of a new American style and material vocabulary. It's not visible, and is only relevant on the utilitarian level -- allowing greater-scale structures to be built.

Aside from being in the wrong place for American ethnogenesis (back East), these three New York buildings are also from the wrong time -- still mired in the integrative civil war phase of imperial growth, which included the Reconstruction era. It wasn't until the 1890s that the winner of the civil war -- the industrial Midwest -- could hit the ground running with its creation and dissemination of a new national culture, after internal divisions had been sewn up. This would spread westward along with the meta-ethnic frontier, although places back East ended up adopting it to some extent as well. But it wouldn't last as long back East since they have always been reluctant participants in American culture.

In the right place at the right time -- Chicago in the first decade of the 1900s -- Wright built the Unity Temple. It was not only a new overall architectural style -- American Block Symphony, not Gothic, Baroque, etc. -- it used a new material, concrete with the aggregate exposed. The volumes do not resemble traditional blocks from masonry, are not laid into place in masonry-like arrangements, and do not simulate or mimic them via the molds into which the concrete is poured. Just monolithic slabs of concrete, of varying size, with more or less ornamentation built into the mold's shape. Not hidden behind anything else.

In addition to not hiding the concrete, and not mimicking masonry, the exposure of the aggregate within the concrete is a milestone in the history of American architecture. Now the material looked more like granite or marble or some other Stone Age material with patterns and textures within it -- not requiring their addition through mosaic techniques. It no longer looked so smooth and uniform and monolithic.

The particular technique used to expose the aggregate doesn't matter for the final state, but in this case the workers used wire brushes to gently grind away the outermost layer of the cement binder, like using a fork to flake away the outermost layer of a fruitcake to expose the individual globs of fruit suspended in the flour-y binder.

Like the Horse Show Fountain, the original Unity Temple showed signs of wear by circa 1970, and it was restored (not replicated) with an exposed aggregate finish (and then another major restoration in the 2010s, still using the exposed aggregate finish). But the original back in the 1900s had an exposed aggregate finish as well, as shown by contemporaneous pictures and Wright's own words (likening the appearance to granite). This makes me believe the original Horse Show Fountain also had a similar degree of exposed aggregate finish as its later replica.

* * *


By the time of a 1986 article from Concrete Construction Magazine, "Unity Temple: the Cube That Made Concrete History," the neoliberal backlash against the Progressive and New Deal eras had begun, as well as its cultural expression in the perversion, slandering, or outright demolition of America's distinctive culture. The central target for neoliberals was Brutalism -- too American instead of whatever Olde Worlde LARP / pastiche they preferred, too populist instead of elitist (affordable concrete vs. expensive masonry), too ubiquitous instead of confined to the bi-coastal top zip codes.

In that context, the authors cannot use the term Brutalism or refer directly to the 1960s and '70s as the extension of the history begun by the Unity Temple. The reader is left to fill in the blanks, but that's what they're getting at -- American Block Symphony styles, using exposed aggregate concrete, trace back to Frank Lloyd Wright, in Chicago, at the turn of the century.

They also do not overtly state what this means for other boneheaded theories -- like the myth that Brutalism as a camp, or the use of unhidden concrete, or blocky assemblages of volumes, grew out of Europe somewhere between the '20s and the '50s. Nope -- it's as American as apple pie, from the Midwest (and later, further out West), from the turn of the 20th century, from the American architectural Plato himself, Frank Lloyd Wright.

Europeans were simply a non-entity in endogenous cultural creation after their 18th and 19th-century plateau. They descended into chaos in the early 20th C, along with their empires collapsing in WWI, limping through the interwar period before the remaining fragments were then scooped up by the American Empire -- both politically and culturally. If they wanted to join the American camp, they were more than welcome, and by the Midcentury Modern moment, they were all aboard Team America.

Blaming Bauhaus for anything outside of Europe in the interwar period is just a cope -- and if you're American, a cope to hide your thinly veiled anti-American attitude toward our culture. "Wah, I identify as an 18th-C. Euro aristocrat / ancient Roman villa-owner" -- too bad you're just some American suburban-raised schlub from the 20th and 21st centuries. You're no more of a Baroque aristocrat than a man is a woman. Remember, if you're outside of Europe:

>ywn be European

And here in America, we have nothing to apologize for or feel embarrassed about. I do feel sorry for some parts of Europe, in Britain and Germany mainly, where "Brutalism" was de facto Bauhaus eking out another few decades of comatose existence, while wearing a concrete disguise in order to blend in with the new American style that was anything but Bauhaus-y.

But charmingly Stone Age-meets-futuristic chrome drinking fountains, adorning parks all over America and providing a public good? Sublimely primitivist yet futuristic buildings that connect us with the primeval grounded past, while somehow simultaneously enticing us through a portal to the optimistic utopian future? No, that is to our *credit* as Americans, with our own cool badass culture. There is no "blame" to go around in the first place.

If you hate on Brutalism, you hate on the entire American tradition, from Frank Lloyd Wright to public parks to our ultimate architectural activity-place -- the malls. Oh yeah, I'm just getting started on this crusade to vindicate Brutalism. All you faggy mall-haters better pack up and leave now. But just as a preview: both malls and Brutalism proper were derided and demolished during the same time period, by the same camp of people, with the same complaints, whereas the appreciation / celebration / nostalgia came from a similar group of people (opposed to the first camp).

* * *


To conclude this exploration into the origins of exposed concrete as America's defining building material, let's take a whirlwind tour through some major milestones along the way, between the Unity Temple and Brutalism in the '60s and '70s. To not stray too far from the main topic, and because he really was the one who organized everything into its major channels, we'll stick with good ol' Frank Lloyd Wright.

In the 1920s, he put in a stint in Los Angeles, where he built several houses using concrete blocks that were cast on site, but not in a recognizable Euro / Roman / Olde Worlde form. Rather, their rectilinear geometric impressions were inspired by Mayan temples and other New World civilizations.

The blocks were then arranged into place like usual masonry, in horizontal courses or stacked into columns, all contributing to the synthesis of Mayan step-pyramids and his own American Block Symphony styles. But they were clearly made from concrete, not stone that had been cut and carved, and not bricks. The designs are intricate and repeated across a large number of blocks -- clearly telling us that they were all cast from a single, intricately shaped mold, not carved by hand each time. The latter would've taken so much labor that it could only have been done by a legion of slaves for a monument to an imperial ruler -- not a house for a typical affluent American household.

You can watch a documentary on this episode of his career for free on YouTube. These buildings are the Storer House, the Millard House, the Samuel Freeman House, and most famously the Ennis House.

These blocks were later reincarnated, still in California but spreading elsewhere, in the decorative breeze blocks of Midcentury architecture. See here for an overview of the breeze block phenomenon -- one of the most identifiably American decorative elements, something unseen in Europe, but one that is everywhere out West (and somewhat back East), down to the most lowly apartment buildings, not restricted to elite circles. As you can see from the close-ups here, already in the '20s Wright used versions of his blocks that were perforated to allow light and wind to pass through, in addition to the totally solid versions.

In the 1930s, his Fallingwater house used massive horizontal cantilevered slabs of concrete, which, although they have a slight sandy pigment, are still recognizable as concrete -- not clad in stucco, not employing or mimicking masonry, etc. The entire building is not made from concrete, but these slabs are its defining features.

Finally, and most important to establish the link to Brutalism, is the Guggenheim Museum, which was planned & revised during the late '40s and early '50s, and was built between '56 and '59. It is made from concrete that was poured -- or rather, sprayed from a gun -- in place, not cast into individual blocks used for masonry. It is not clad in any other material, nor was it hidden under heavy paint (although it did receive a light beige coat at first, which was later changed to white).

In fact, the paint is thin enough that you can still see with the naked eye the woodgrain impressions left by the boards that acted as the boundary or container ("formwork"), onto which the concrete was sprayed from the inside. See this post for the details. At first Wright wanted a smoother surface, but the head of construction argued that it was not only impossible, but that the impressions showed off the material better -- it's not stone, it's not going to look like stone.

Leaving the impressions of the formwork became a staple of Brutalism, and as far as I can tell, it all started (as always) with Frank Lloyd Wright, well into his senior career. Indeed, when first built the Unity Temple showed a kind of horizontal banding left by the various stages ("lifts") in which the concrete was poured from lower to upper heights (for the photo, see p.3 of the Concrete Construction Magazine article linked earlier).

Small-scale impressions of woodgrain, up to seams between successive lifts in the pouring process, are just like the natural imperfections in animal skins or quarried stone, or courses of masonry that are not perfectly level all the way across. They give the concrete a primitive Stone Age feel, rather than that of a lab-perfected ultra-modern material with no variation or seams of any kind.

So, Brutalism's "openness" about its construction process traces back to Wright, in the first decade of the 1900s -- not to Mies van der Rohe, who used no concrete at all in the Barcelona Pavilion several decades later, nor to any other Bauhaus-adjacent boogeyman / hero (depending on whether the clueless academic is a hater or lover of Bauhaus).

And not only did Wright pioneer the openness of the concrete construction process in the Guggenheim Museum, he also made the building a large-scale sculpture out of a few pure geometric volumes, and they're arranged into an asymmetric grouping to make for some movement of attention and off-kilter dynamism -- without warping the fabric of space, using distorted points-of-view, or fragmentation of the components, as would happen during the neoliberal era, most notably by Frank Gehry in another Guggenheim Museum (the one in Bilbao).

These defining traits of Brutalism were all there in the late '50s in America, but not in the '50s apartment blocks by Corbusier or the Smithsons, which are utilitarian Bauhaus boxes that use concrete instead of some other material. BFD -- it's still Bauhaus, not the style pioneered in America and later called Brutalism.

* * *


To circle back to where we started: exposed aggregate concrete didn't just become a staple of those ubiquitous drinking fountains, trash cans, benches, columns for shopping center covered walkways, etc. Exposed aggregate running in vertical corduroy bands was a staple in Paul Rudolph's buildings, e.g. the Yale Art & Architecture Building from the early '60s and the Boston Government Service Center from the early '70s. Much of the facade of the Xerox Tower (by Brutalist superstar Welton Becket, from the late '60s) is exposed aggregate.

There is no such thing as the "good Brutalism" that was for a popular audience, and had the charming familiar exposed aggregate, vs. the "bad Brutalism" that was for elites and had clinically smooth texture and perfectionistically uniform color. The latter-day American Stone Age material, with aggregate exposed, adorns so many of the structures that the clueless haters never bother to look at, and just assume that because it's concrete, it looks like dried cement.

Nope, it has lots of texture, pattern, and color from all the various stones revealing their faces. They may not come in neon or jewel tones, but there's plenty of earthy yellows, reds, oranges, browns, blacks / grays, sometimes shading into blue tones. And that's the type of "color" that the haters have in mind anyway -- a brick facade, marble, etc. If it counts as colorful for standard red brick and marble, it counts for exposed aggregate concrete.

Why don't they know what these buildings look like? Because they've never experienced them. If they've been up close to one IRL, their senses are too weak to perceive what is right in front of their faces. But mainly they are into hating on Brutalism as one part of their Olde Worlde LARP, and because Brutalism is distinctly American, that's a ripe target. It doesn't matter if its facades are as colorful as brick and marble facades -- just tell a lie that it's uniform gray, and don't bother to look closely at pictures to tell for yourself, and trust that everyone else in the LARP will do likewise.

The "why no color?" complaint is really rich, given that another complaint from the clueless haters is that Brutalism ignored the desires and wills of those who actually utilized the buildings, and only pleased the distant cultural elite who viewed them through photographs in slick magazines.

Actually, it's the haters who only look at these buildings in far-away-shot photos over the internet! Any close-up photo would show the texture, color, variety of stones, etc. But they image searched the building, got a zillion copies of the same shitty stock photo shot from a million miles away, and that's all they need -- close-up shots might contradict their preconceived hate, so please, anything but close-ups! And definitely no IRL visits to see it unmediated -- it would contradict your beliefs, and put you so physically close to a contaminating heretical substance -- Americanism! Why, all that American stuff might just melt away years of effort to cultivate your Olde Worlde LARP -- can't risk the exposure!

But as I said before, most Americans don't hate Brutalism, concrete, or its exposed aggregate form. We take it for granted -- the physical stuff itself, as well as its creation of a primitive-futurist environment that we as Americans find irresistible. That mood feels comfy and familiar to us, because it's so deeply ingrained into our culture by this point.

It's only managerial-professional-class Euro LARP-ers who get incensed over these defining traits of our culture, for obvious reasons of status insecurity when they belong to a non-European culture. Sadly for our heritage, though, they do wield disproportionate decision-making influence, so they can and already have begun a campaign of anti-American desecration and demolition, particularly on the East Coast.

That is as good a predictor as any for the boundaries of the future states of the post-collapse American Empire. Where they're demolishing the distinctive architecture of our nation / empire, they're clearly seceding. Where they're neither fighting to demolish it, nor pro-actively guarding it, is a border region. Where they're conserving it long in advance, will be part of the core of the new American state, post-empire.

Concretely, as it were, that means the whole back East region will secede, with central-southern Florida being a wild card that could become a somewhat reduced nation of Florida unto itself, or a non-contiguous piece of America, while the north of Florida joins the secession. The Midwest will mostly stay, although Ohio could be a wild card that would join the secession. Not surprisingly, Florida and Ohio are both the two constant swing states in presidential elections.

Obviously California will stay and become the new political core (it's already been the main cultural core for most of our ethnogenetic growth period, after Reconstruction). But other parts of the Southwest will stay, too, for the same reasons -- Vegas (AKA Nevada), Arizona, all of Mormonland, Texas, all of it.

In fact, Mormonland provides the most intense counter-signal to the back-East demolishers of American Block Symphony buildings. Mormons have standardized Block Symphony as the style for their temples, the most important building type for them (not the weekly meeting houses, but the ones where weddings, initiations, and so on, take place).

Mormon elites did eliminate the Midcentury / Space Age (not Brutalist) design of the Ogden and Provo temples (in 2014 and the early 2020s), but they replaced them with Block Symphony designs from the American Modern period and geographic origin. Not glass-and-steel fishbowl flexspace abominations like the East Coasters have done post-demolition, nor an Olde Worlde LARP that the trad haters of Brutalism would want (but would never actually get, and would settle for getting cucked by a glass-and-steel Silicon Valley kindergarten instead, because they hate the New Deal politics and culture even more).

The last group in the world to make the contradictory concept of "Greco-Roman" architecture their standard would be the Mormons, whereas it would be the go-to for many East Coasters. That tells you all you need to know about who is gonna make it into the post-imperial-collapse nation of America, and who will be inhabiting small breakaway states riven by mutual mistrust, bitterness, and sinking deeper into the cultural black hole that they've always been.