This article by Stan Liebowitz (free download) takes a look at the phenomenon of co-authorship in academic journal articles, concluding that there is too much of it. Why?
Authors are rewarded for the number of items on the publication list of their C.V., and each item is scarcely devalued when it wasn't sole-authored. When you write an article with one partner, strict prorating would give you 50% of the credit. In practice, each co-author gets about 70-90% of the rewards, esteem, and credit from their department.
So, what's to lose? You can pump out more papers, and you get something close to full rewards in most cases. You may not even have to contribute too much to each one -- if you're really lucky, your name is tacked on spuriously to the author list.
Liebowitz notes that prorating authorship credit would solve much of the problem. If you're thinking of co-writing an article with 9 others, and you expect to get only 1/10 of the reward from your employer that a sole author would get, you start to find better uses for your time than maximizing the accretion of lines on your C.V.
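To put some rough numbers on that incentive, here's a back-of-the-envelope sketch (the figures are just illustrative -- the 80% is my stand-in for the 70-90% range above, not anything taken from Liebowitz's paper) of how much C.V. credit one unit of effort buys, depending on how heavily a department discounts co-authored papers:

```python
# Illustrative back-of-the-envelope numbers, not Liebowitz's data:
# how much C.V. credit does one unit of effort buy, depending on how
# departments discount co-authored papers?

def credit_per_effort(n_authors, credit_share, effort_share=None):
    """Credit earned per unit of effort put into one paper.

    n_authors    -- number of co-authors on the paper
    credit_share -- fraction of a sole-authored paper's credit each co-author receives
    effort_share -- fraction of the work this author does (default: an equal split)
    """
    if effort_share is None:
        effort_share = 1.0 / n_authors
    return credit_share / effort_share

print(credit_per_effort(1, 1.0))    # solo paper: 1.0 -- the baseline
print(credit_per_effort(2, 0.5))    # two authors, strict prorating: still 1.0, no distortion
print(credit_per_effort(2, 0.8))    # two authors at ~80% credit each: 1.6 per unit of effort
print(credit_per_effort(10, 0.5))   # ten authors who each pocket half a paper's credit: 5.0
```

Strict prorating keeps the payoff per unit of effort flat no matter how many names are on the paper; anything more generous than 1/n makes piling onto co-authored papers the better deal.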
He has also measured rates of co-authorship in the top-ranked economics journals going back to the 1940s. The rate is low and fairly constant during the '40s and '50s; some increase is already apparent by the early '70s, and it climbs steadily from then on.
This fits the phenomenon into the status-striving and inequality cycle. The initial push toward the over-production of elites began circa 1970, as Peter Turchin has estimated by looking at the growth in law school enrollments. Intensified status-striving appears to have been a chain reaction more than a unified wave, beginning with more aspiring elites seeking credentials, and then spreading out toward the lower tiers of the social pyramid (e.g., the higher education bubble that began about 10 years later, circa 1980).
Fitting it into this broader cycle also supports the interpretation of excessive co-authorship as something that is individually beneficial yet socially corrosive. Liebowitz provides a simple model to show how assigning each co-author more credit than their fair share leads to a level of co-authorship beyond the optimum. The department ends up churning out too many co-authored articles, amounting to a lower total contribution to knowledge than if those authors had focused more on their own work (with some co-authorship too).
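I'm not reproducing Liebowitz's actual model here, but the logic is easy to sketch with a toy calculation of my own (all the numbers are assumptions for illustration): give an author a fixed effort budget to split between solo papers and slots on two-author papers, pay each co-author 80% of a paper's credit, and assume joint papers lose a little of their knowledge output to coordination overhead. The credit-maximizing choice and the knowledge-maximizing choice then come apart:

```python
# Toy illustration of the over-credit problem -- my own assumed numbers,
# not Liebowitz's actual model. An author splits a fixed effort budget
# between solo papers and slots on two-author papers.

EFFORT_BUDGET = 6.0    # units of research effort per period
CREDIT_SHARE = 0.8     # credit each co-author receives (the 70-90% range above)
N_AUTHORS = 2          # authors per joint paper
OVERHEAD = 0.15        # assumed coordination loss in a joint paper's knowledge output

def outcomes(solo_effort):
    """Return (C.V. credit, contribution to knowledge) for a given split."""
    joint_effort = EFFORT_BUDGET - solo_effort
    solo_papers = solo_effort                 # 1 unit of effort -> 1 solo paper
    joint_papers = joint_effort * N_AUTHORS   # 1/n of a unit buys a slot on one joint paper
    credit = solo_papers * 1.0 + joint_papers * CREDIT_SHARE
    # count only this author's share of each joint paper's (overhead-discounted) output
    knowledge = solo_papers * 1.0 + joint_papers * (1.0 - OVERHEAD) / N_AUTHORS
    return credit, knowledge

for solo in (0.0, 3.0, 6.0):
    credit, knowledge = outcomes(solo)
    print(f"solo effort {solo}: credit {credit:.1f}, knowledge {knowledge:.2f}")

# With these numbers, credit peaks when all effort goes into joint papers
# (credit 9.6, knowledge 5.10), while the contribution to knowledge peaks
# when the author works alone (credit 6.0, knowledge 6.00).
```

Set CREDIT_SHARE to 0.5 (strict prorating) and the credit payoff flattens out at 6.0 no matter how the effort is split, so the inflated incentive to pile onto joint papers disappears -- which is the point of the prorating fix.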
Before the shift in the social norms toward dog-eat-dog, the prevailing norm was making-do and reining-it-in. Hence the low (but non-zero) levels of co-authorship during the '40s and '50s.
Liebowitz focuses on economics because that's where he feels most comfortable sifting through the old journal articles. But he cites a growing literature on co-authorship that details its broad practice across disciplines.
Most of that literature assumes that the growth in co-authorship is due to increasing specialization, greater complexity of subject matter or mathematical techniques, or something similar that now requires researchers to form teams and write articles as co-authors. Liebowitz instead draws the natural conclusion from the theory of self-interest: the growth in co-authorship is a form of rising careerism (although he doesn't delve too much into the possible causes for this).
I've heard these kinds of explanations informally -- that because of the fancy-schmancy statistical toolkits out there now, researchers need to write articles jointly with someone who knows what they're doing. Y'know, instead of learning how to do it yourself, or asking for clarification or help when you don't know how -- but not abdicating responsibility and outsourcing the quantitative stuff to someone else entirely.
In fact, those who study co-authorship among legal scholars, such as this article, have reached the opposite conclusion (and no less self-assuredly) -- the growth is due to a greater empirical focus, whereby you need area experts who know the local terrain and can interpret the local language for you. Not that the abstract theory, formal models, mathematics, etc., are becoming bewilderingly complex.
Two firm conclusions about the same phenomenon that directly contradict each other mean that something else is going on. Unless, that is, we ignore the distinction between abstract and empirical complexity, and lump them under a single thing called "complexity," which makes co-authorship more necessary.
The idea that things are just so much more complex than they used to be is nothing more than a self-serving rationalization. Like, sure, back in the old days when Isaac Newton invented calculus and classical mechanics, when Maxwell unified electricity and magnetism, and when Einstein wrote his four Annus Mirabilis papers, it was still possible to write up your ideas on your own. Add on Darwin, Mendel, Fisher, Wright, Hamilton, Trivers, and others from biology. And whoever else you prefer from your own discipline.
But inventing all of modern mathematics and science was just baby stuff. I mean, who hasn't independently invented calculus before dozing off on the couch at night? And quantum physics is one of those things that most of us think up while we're staring at the refrigerator waiting for our tea kettle to boil in the morning. And I've lost count by now of how many times I've integrated Darwinian biometrics and Mendelian inheritance while being put on hold by the power company.
The bogosity of these arguments comes through more clearly when we look at the actual output in the supposedly more brain-bending era of the past 30 or so years. If research these days requires three Einsteins rather than just one, shouldn't the results be even more mind-blowing than the original work was back in its day? Yet how many Earth-shattering theories or empirical patterns have come out in the past 30-odd years that are several times more profound and awe-inspiring than any of those listed above by single authors?
From what I've been exposed to, the closest thing to a Big Deal is the recent body of work on human evolution that has used human genome sequencing. It throws light on our murky early origins, in particular the genetic influence on Homo sapiens from related species like the Neanderthals (and brings to light a species that we didn't even know existed before, the Denisovans). It also shows us how different groups have evolved in recent times, say since the dawn of agriculture, adapting to their local conditions and new ways of life (starchy diets, centralized states).
At the same time, I wouldn't rank this sub-sub-sub-field up there with Fisher's formalization of natural selection and adaptation (oh yeah, plus inventing modern statistics), or Hamilton's theory of kin selection and inclusive fitness.
And I can't include this work in a larger category called "human genome research," because outside of the evolutionary results, looking at genomes hasn't taught us anything, even though it promised to locate the genes responsible for cancer, schizophrenia, homosexuality, and other fitness-depressing traits that would have been weeded out a long time ago. It also promised to deliver gene therapy that would help the many people suffering from these conditions. All that work hasn't amounted to diddly squat.
It is hard to escape the conclusion that researchers are too concerned with bloating their publication list in the grab for greater status, sacrificing the focus and originality that come from serving as captain of their own ship. If you can put in a partial effort and still get close to full rewards, why give it your all?
Soon the norm becomes design by committee, pass the buck, cover your ass, and plausible deniability. Fewer folks follow the norm of stewardship, of tending their flock of ideas until they're good and ready to be sheared or slaughtered.