
From the Scholarly Kitchen

How Traditional Publishing Works

September 17, 2018  |  By Joseph Esposito

Originally published in The Scholarly Kitchen on September 17, 2018

With so much current talk about new business models in publishing, and a series of announcements that may make it appear that we are headed toward a fulfillment of Open Access (OA) millenarianism, whether of the Gold or Platinum variety, it seems useful to describe how traditional publishing works, if only for reasons of nostalgia. I want to provide a high-level description, without particular regard to specific practitioners or even to specific formats (e.g., books and journals). Most of the comments here apply as well to other media forms such as movies and music, as there are common elements to all businesses that invest in content.


Which brings us to the matter of definitions: What is traditional publishing anyway? Publishing is the business of investing in and marketing content that is largely text-based. There are exceptions to this. A volume of photographs from Aperture can hardly be called text-based, and scientific articles found in journals may have video and animations embedded in them. But anchoring a discussion of publishing in text has the sanction of history and common usage. There is a difference between what we expect from The Lancet and Steven Spielberg, even if both use video cameras at least part of the time.

The word “traditional” is problematic, though, as in current parlance it is linked to the leviathan of the established order, and who would want to say anything good about that? Nowadays, at least in the world of scholarly communications, traditional publishing is called “toll-access publishing,” a terrible, derogatory term, which defines traditional publishing along the single axis of access rather than, say, editorial strategy or performance. It is true, though, that traditional publishing is publishing that you pay for, as when you step into a bookstore and come out with a copy of Homo Deus or The Complete Essays of Montaigne, or when a librarian purchases (some would say “leases”) a subscription to Nature. In the traditional model, authors and publishers do a lot of work up front and then wait with trepidation to see whether that effort will find a willing customer. In traditional publishing, publishers take risks — they invest in content — and hope for a payoff.

Now let’s imagine that you are responsible for that publishing operation. Your job is to invest in content and get a return on that investment. You go to the office each day with an elevated pulse because, contrary to popular myth, there is no sure thing in publishing. An investment can result in a total loss, and the people supporting your organization, whether it is a not-for-profit or a commercial firm, will not be pleased. So you look for ways to make that investment work. You quickly realize that the marketplace does not have an infinite amount of money to spend on publications, so some publications will succeed while others fail. It is clear that in order to command the attention of the marketplace, you must offer superior products. This is the principal point of competition among publishers, editor vs. editor, and that competition is brutal. Yes, price is always a factor, but no two publications are identical (because of the monopoly nature of copyright), so customers will flock to those of higher quality, at least as a particular customer or market segment understands quality. For journals, for example, quality may mean papers reporting groundbreaking studies. A K-12 publisher may measure quality by the appropriateness of content for a particular age level. A publisher of romance or mystery novels may measure quality by the clever adaptation of a tried-and-true narrative formula. Quality, in other words, is not an absolute value but a reflection of the interests of the customer base. Toni Morrison is not “better” than Harry Potter; only better targeted and more appropriate for a particular — and paying — audience.

Thus the defining property of traditional publishing is editorial selection. That is what publishing is about. Editorial selection is a tough game, however, and publishers seek ways to minimize their risks. The most important risk-management strategy is portfolio publishing, which all publishers use whether they acknowledge it or not. Since it’s hard to know ahead of time which books or articles will succeed — and if you wait until the signs are clear, the competition will have gotten to the author first — publishers have to make bets in the absence of complete information. They hedge these bets by making investments in a larger number of authors and properties than are likely to succeed, with the hope that enough will indeed succeed to more than pay for those that do not. It’s often said in trade publishing that only 20% of the books make money, but they pay for everything else. (This, by the way, is exactly how venture capital works: a dozen investments, two winners, and a write-off for the rest.)
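The hedge described above can be made concrete with some back-of-the-envelope arithmetic. All figures in the sketch below are invented for illustration and drawn from no real publisher's accounts; only the 20% hit rate echoes the trade-publishing rule of thumb.

```python
# Back-of-the-envelope sketch of portfolio publishing.
# Every number here is hypothetical; only the 20% hit rate
# comes from the "20% of the books make money" rule of thumb.
n_titles = 50            # bets placed in one season
cost_per_title = 40_000  # advance + production + marketing
hit_rate = 0.20          # share of titles that make money
hit_revenue = 400_000    # what a successful title brings in
miss_revenue = 10_000    # a flop still sells a few copies

expected_per_title = hit_rate * hit_revenue + (1 - hit_rate) * miss_revenue
list_cost = n_titles * cost_per_title
expected_margin = n_titles * expected_per_title - list_cost

print(expected_per_title)  # 88000.0 expected revenue per title
print(expected_margin)     # 2400000.0 expected margin on the whole list
```

The point of the arithmetic is that no single title needs to be a sure thing: the expected value of the whole list is positive even though four bets in five lose money on their own.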

Economically speaking, a journal is the portfolio strategy in practice. No one knows which article will be the breakout piece, but editors place a range of bets. Some articles go uncited, some are cited a few times, and some are superstars. The metric that captures this is the much-maligned and profoundly misunderstood Journal Impact Factor, which measures the value of the portfolio. One consequence of the portfolio strategy is that some of the value associated with individual works migrates to the brand. Thus we look more kindly, for example, at a book from, say, Oxford University Press or an article in The New England Journal of Medicine. It is no coincidence that the OA movement seeks to demolish publishing brands; that demolition will take the portfolio strategy along with it.
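A rough sketch shows why the Impact Factor is a portfolio measure: the JIF for a year Y is the number of citations received in Y by items the journal published in Y-1 and Y-2, divided by the number of citable items from those two years. The citation counts below are invented to show how a skewed distribution drives the average.

```python
def impact_factor(total_citations: int, citable_items: int) -> float:
    """Journal Impact Factor for year Y: citations received in Y
    by items published in Y-1 and Y-2, divided by the number of
    citable items published in those two years."""
    return total_citations / citable_items

# Hypothetical citation counts for a small journal's previous
# two years of articles -- most barely cited, one superstar.
citations_per_article = [0, 0, 1, 2, 3, 5, 8, 120]

jif = impact_factor(sum(citations_per_article), len(citations_per_article))
print(jif)  # 17.375 -- driven almost entirely by one article
```

Remove the single superstar and the same journal's figure falls below 3, which is why the JIF describes the portfolio rather than any individual article in it.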

Another strategy, ineffective in books but crucial for journals, is downstream market dominance. Since book publishers sell their wares through a number of channels (EBSCO, ProQuest, Amazon, Ingram, etc.), they cannot dominate the point of sale as they would like to — and in any event, the Amazon monopsony is too powerful to withstand. In journals, however, downstream market dominance takes the form of the Big Deal, whose purpose is to deflect customers’ attention away from editorial decisions, enabling the largest publishers to stuff their packages with products of lesser merit. Whatever one’s view of the Big Deal (people tend to like it when they control one, dislike it when they don’t), it is best viewed as a hedge in the competition for editorial superiority.

These various hedges against open and hostile editorial competition bring inefficiencies to the market (to the benefit of successful publishers), but they don’t wipe out the basic market mechanism, which is that publishers seek editorial products that will attract finite market dollars. That market mechanism works in subtle ways. Each product is a series of negotiations — between publisher and publisher in the competitive scheme of things, between author and publisher, between publisher and distributor. Each party influences the other, which in turn influences the first party. The series of interactions is a winnowing process, where market demand makes the ultimate decision as to what is good and what is not. What is good is what gets sold. I fear that there is a lack of curiosity about this market mechanism, which results in simplistic and often wrongheaded views of what publishing is, often characterizing it as a vehicle to serve the greedy. Better to think of it as a complex process to navigate through finite resources in pursuit of editorial superiority.

Though publishers use platforms of various kinds, publishing is not itself a platform. In effect, publishing sits atop the high stack of infrastructure and services, which makes it susceptible to disruption when a seismic event occurs down below in the stack (e.g., the invention of PDF, the global reach of the Web). When software developers insist that publishing should look more like software, they don’t hit the center of the target, as publishers use software only insofar as it serves their primary business, which is to invest in and sell content and to hedge their bets by removing some of the risk from the content business.

The OA world is the inverse. While there is nothing to stop OA services from having high-quality editorial materials, editorial quality is not at the structural center of what OA sets out to do. For traditional publishers editorial quality is the means to effect a sale, whereas for an OA service, that quality is epiphenomenal. OA is built upon the assumption of abundance, not of finite resources; over time the forms of OA and traditional publications will pull further and further apart. When an individual or librarian, working with finite resources, notes that they cannot acquire all desirable publications, that’s not a bug: it’s a feature.


Joseph Esposito


Joe Esposito is Senior Partner at Clarke & Esposito, where he specializes in strategy in the areas of digital media, publishing, and education technology. Joe has previously served as the CEO of three companies: Encyclopaedia Britannica, Tribal Voice, and SRI Consulting.
