Special Issue

Issue 37 · September/October 2021

Join the conversation on Twitter @Briefer_Yet and LinkedIn.

Did a colleague forward The Brief to you? Sign up via the button below to get your very own copy.

Journal Benchmarking

C&E is pleased to have released our 2021 benchmarking report on Editorial Metrics in Biomedicine. The report has been sent to participants of our biomedical cohort. If you edit or publish a biomedical journal and did not participate, but would like to in the future, please let us know and we’ll be happy to include your titles in our 2022 study.
 
We are presently collecting data related to our 2021 Physical Sciences cohort (including physics, chemistry, engineering, mathematics, and materials science) and are just beginning recruitment for the next 2021 study cohort: Life Sciences. If you edit or publish a journal (either published independently or via a publishing services agreement with a commercial partner) in the life sciences and wish to participate or learn more, please let us know (benchmarking@ce-strategy.com).

Upcoming Events

David Crotty will be teaching in the upcoming CSE Short Course for Journal Editors, as well as speaking on a panel at the UKSG November Conference — Open Scholarship 2021: the good, the bad, and the ugly (no comment on which of these he’ll be representing). Laura Ricci will be presenting at the Charleston Conference Pre-Conference Session, “Opening The Book”.

Special Issue

1

Paolo Crosetto has written a widely discussed post, provocatively titled “Is MDPI a Predatory Publisher?” We take no view on this question but did read with interest Crosetto’s analysis of the rapidly growing Swiss open access publisher. MDPI is known as the publisher of a portfolio of well-regarded journal titles. They are lauded for their rapid publication times and excellent customer service — authors generally report good experiences with MDPI, largely because the publisher is quick and responsive, and also because they have a 50% acceptance rate (good news is typically received much better than bad news). Part of the reason MDPI is viewed as responsive is that they are supported by an army of staff. According to their 2020 Annual Report, MDPI added 1,700 new positions at the company last year (you read that right: seventeen hundred!). 
 
MDPI are also known as prolific email spammers, clogging researcher inboxes with offers to submit a paper, often on a topic unrelated to the researcher’s work. They were once infamously on Beall’s List and remain the focus of criticism and controversy over their editorial practices, discussed in both Crosetto’s post and its comments. 
 
Perhaps the most notable thing about MDPI, however, is their growth rate. According to Crosetto, the publisher grew from an output of 36,000 articles in 2017 to a staggering 167,000 in 2020 (which, given a 50% acceptance rate, means they reviewed some 334,000 papers). Their revenue is estimated to have grown from $14 million in 2015 to $191 million in 2020. For some context, Hindawi, often viewed as a comparable benchmark to MDPI, published approximately 25,000 papers and generated approximately $40 million in revenue in 2020, prior to its acquisition by Wiley.
 
But the truly extraordinary aspect of this growth is that nearly all of it appears to be related to publication in special issues. According to Crosetto, among the 74 journals published by MDPI with impact factors, there were 388 special issues published in 2013. By 2020 this number had grown to 6,756, and there are 39,587 special issues with a closing date in 2021. That works out to more than 500 special issues per journal, which is just a completely bonkers number. Even if a large number of those special issues are abandoned or never move forward, the trajectory for special issues in 2021 appears larger than in 2020, and 6,756 is itself already a bonkers number. Heck, 388 (MDPI’s 2013 number) is an aggressive number for special issues given 74 titles — that still works out to over 5 special issues per journal. 
 
For context, most journals publish maybe a couple special issues per year. They are called special issues because they are special. Atypical. Outside the norm. If you publish more special issues of a journal than “regular” issues, the special issues are no longer special. As Crosetto notes:

Traditional journals have a fixed number of issues per year — say, 12 — and then a low to very low number of special issues that can cover a particular topic, celebrate the work of an important scholar, or collect papers from conferences. MDPI journals tend to follow the same model, only that the number of special issues has increased in time, to the point of equaling, surpassing, and finally dwarfing the number of normal issues.

Crosetto questions whether the editorial process for special issues is the same as for regular issues, with peer review overseen by guest editors rather than the regular journal editors. Volker Beckmann, an editor involved in MDPI special issues, weighs in on this in the comments section, stating that MDPI manages conflicts of interest and runs an editorial process for special issues comparable to that of regular issues (though it still sounds like a different process). 
 
The question of editorial oversight of special issues has become an acute issue for many publishers. Readers of The Brief may recall the recent case of the Journal of Nanoparticle Research (published by Springer Nature), where a special issue was taken over by a “rogue editorial network” of individuals spoofing the identities of well-known researchers. While this was a truly bizarre situation, it was made possible only by a lack of editorial oversight of special issues at the journal. A similar lack of oversight occurred at the journal Microprocessors and Microsystems (published by Elsevier). Following an investigation, Elsevier has added expressions of concern to more than 400 papers that appeared in special issues of the journal. The papers contain “tortured phrases” that appear to be the output of translation software that may have been used to rewrite (plagiarize) existing papers. Elsevier reports that in this case a configuration error in the manuscript review system prevented the journal’s Editor-in-Chief from approving the papers.
 
There have not, to our knowledge, been reports of “tortured phrases” or rogue editors associated with MDPI special issues. But with that many special issues, really, who would even know? The sheer quantity and explosive growth raise many questions about quality control, even if outright scams can be avoided. They also raise broader questions about whether such growth is sustainable. The special issue strategy works because the journals the special issues are attached to have good reputations — and decent impact factors. If the reputations of the journals decline, the ability to attract papers to special issues declines with them. Crosetto is skeptical that MDPI’s special issue growth can continue:

This is not sustainable, because it is impossible that there are enough good papers to sustain the reputations of any journal [at a rate of 10 special issues] per day. And even if there were, the incentives are set for guest editors to lower the bar. Reputation is a common pool resource, and the owner of the pool just invited everyone on sight to take a share.

While this strategy is not sustainable with the same set of 74 journals, it might be sustainable via a “rinse and repeat” approach in which MDPI develops additional journals (without 500 special issues apiece) until they achieve a good reputation and impact factor, and then loads them up with special issues. Once the impact factors and reputations decline, MDPI reverses course, reduces the number of special issues associated with that set of journals, and becomes more selective, while the special issues shift to other journals elsewhere in the portfolio. We have no knowledge that this is MDPI’s strategy, but it would be one approach to sustainability across a portfolio. 
 
A word that comes to mind in thinking about MDPI is “stochastic,” in that they are hard to pin down and display different, seemingly contradictory, elements simultaneously. When a publisher is growing at such a clip, a lot of randomness is introduced into the system, both for good and ill. Their strategy of optimizing for speed and turning the model of special issues on its head (from a rare, “special” occurrence to the primary mode of publishing) is exactly the kind of thing a “predatory” publisher would do. But the strategy also appeals to many researchers, who like the idea of publishing in an issue focused on their niche area of research and guest-edited by someone who really knows the field, love fast turnaround times, and like the combination of a decent journal reputation with a good chance of acceptance. MDPI in some ways represents a strategy that maximizes growth and efficiency over all other factors. Whether this proves sustainable over the long term is an open question, but one must admire their execution. 


Source: Paolo Crosetto’s blog, Nature

Academic and Professional Publishing

2

PLOS has released 2020 “price transparency” data for journals in its portfolio. This is a Plan S thing. While we are all in favor of transparency regarding finances at not-for-profits (and so applaud PLOS’s efforts), we are not sure what this information really tells anyone. First, allocating costs in this way is highly subjective, with most costs not breaking neatly into one category or another. The cost of a manuscript submission and review system, for example, is spread over multiple categories — determining how much to put in which bucket is anyone’s guess. Likewise, what might go into “Journal and Community Development” is going to vary widely from journal to journal and publisher to publisher. This level of subjectivity makes the data less useful for benchmarking: one publisher will allocate costs one way and another publisher in a totally different way. 
 
Second, the categories leave aside large expenses necessary for operating a journal. Nowhere is there a category for “accounting,” “human resources,” “legal,” “operations,” “building lease,” “IT,” and so on. Nor is there a category for general R&D or profit/surplus. These are substantive costs that are not obviously accounted for (and yet all the columns add up to 100% of the APC, suggesting they are factored in somewhere). 
 
But, as we have discussed before, this is all largely irrelevant as the Plan S Price Transparency requirements rest on an unusual theory of journal pricing. As we wrote in the May 2020 issue of The Brief (see Item 4), shortly after the Plan S pricing transparency scheme was announced:

A journal is not a discrete set of severable services. The value of a journal lies in conferring the brand of the journal on a given paper (and reciprocally, the journal value is built over time by the papers it publishes). This is true for both top-tier journals as well as community journals, where inclusion in a journal is an important signifier for a given community of research, practice, or scholarship. That value is not measured by various aspects of the publication process. Thus attempting to develop pricing for “submission to first decision” and “peer review management” (and so on) is nonsensical because those steps in the publication process are not what is being purchased. Doing every single thing that Nature does, in exactly the same way that Nature does it, is not the same as being published in Nature. The value is not in the process but in the brand. 

This is true at PLOS (which charges more for the more selective PLOS Biology than for PLOS ONE) and it is true of other publishers. Academia is a prestige economy, and brand (of institution, of lab, of funder, of journal) matters. Authors and their institutions (and most funders, excepting those in cOAlition S) don’t care what your costs are. They care about the value to them of the product or service being purchased. Basing pricing on costs not only misaligns pricing and value but also creates perverse incentives for publishers to increase their costs rather than invest in systems and processes that reduce costs. Such publisher investments have significantly reduced the cost per article paid by institutions over the last two decades. While “pricing transparency” sounds great (who is not for transparent pricing?), the reality is that pricing is already transparent (APCs are public information, as are the details of a growing number of transformative deals).


Source: The Official PLOS Blog, cOAlition S, Robert Kiley (@robertkiley) via Twitter, The Scholarly Kitchen

3

After pausing uploads of new content in late 2020, Sci-Hub has come roaring back to life with 2.3 million new articles posted in early September. Sci-Hub had stopped making new uploads due to court-imposed restrictions in the ongoing case in India regarding the legality of the site. According to Sci-Hub founder Alexandra Elbakyan, “our lawyers say that restriction is expired already,” and hence the pirate site has resumed uploading. Elbakyan presumably elected to honor the court’s restriction because she believes there is a chance Sci-Hub could win the case (in instances where Elbakyan did not think there was a chance of winning, no representative from Sci-Hub bothered to show up, never mind adhere to court orders). Elbakyan and Sci-Hub are thought to operate in Russia, where Indian law does not apply, though an Indian court could order internet service providers in the world’s largest democracy to block access to the pirate site, mirroring similar orders in the United States as well as Austria, Belgium, Denmark, France, Germany, Italy, Portugal, Russia, Spain, and Sweden.
 
The resumption of uploads coincides with the site’s 10-year anniversary and a fundraising effort meant to upgrade search capabilities, build a mobile app, and drive the use of “neural networks to extract ideas from papers and make inferences and new hypotheses.” The irony is not lost on us: Sci-Hub openly acknowledges the costs of maintaining its own operation while ignoring its parasitic dependence on the enormous sums others spend on the labor and technology it relies upon. Donations are accepted only in cryptocurrencies, of course.


Source: Vice, Alexandra Elbakyan (@ringo_ring) via Twitter, The Scientist, The Indian Express, Techdirt

4

As reported last month in The Scientist, 125,000 journal articles related to COVID-19 were published in the first 10 months of the pandemic — and 30,000 of them, or about one quarter, appeared as preprints. Even more remarkably, nearly a third of those 30,000 papers were cited in at least one news article, as compared with one percent of preprints unrelated to the pandemic. 
 
These figures come from a study by Fraser et al. published in PLOS Biology back in April, documenting how preprints in the medical sciences have exploded since the onset of the pandemic, providing a venue for researchers to communicate their work immediately to other researchers and (for better or worse) to the media and general public. 
 
As preprints become more embedded in the research and scholarly workflow, the question is how that might change other aspects of scholarly communication. If we think of the functions that journals provide — Dissemination, Registration, Validation, Filtration and Designation — preprints slot into the first two: getting the material out and marking the time point where the authors staked their claim to the ideas contained in the paper.  
 
One of the really interesting implications is that preprints might let journals slow down. Speed has become an important competitive factor for most journals — “How quickly will you publish my work?” is a question authors consider when deciding where to send a paper. This has led to some sloppiness and places increased time pressure on both editors and peer reviewers. But if preprints take on the role of (preliminary) dissemination and registration, would that relieve time pressure on journals? Can journals then spend more time on conflict-of-interest questions, better scrutinize figures, review data sets, perform more statistical reviews, and put in place a more leisurely and more thorough review process? Perhaps there’s a lesson for the sciences from the humanities, where journal manuscripts go through multiple rounds of revision under the care of skilled editors in order to refine the argument presented. Rather than preprints disrupting journals, are preprints instead a mechanism that allows journals the time to do a better job?


Source: The Scientist, PLOS Biology, The Scholarly Kitchen

5

Three essays caught our eye recently, each considering a re-thinking of both the scientific paper and science itself. 
 
In the first, a pair of Brazilian neuroscientists serving as coordinators of the Brazilian Reproducibility Initiative look at the enormous amount of work involved in rigorously ensuring the reproducibility of a research project. If every proposed approach to reproducibility were followed, “a typical high-profile article might easily take an entire decade of work, as well as a huge budget.” Their proposed solution is to expect less from the scientific paper, essentially splitting exploratory science from confirmatory science: “Instead of expecting that every paper will establish reliable phenomena, it might be more feasible to improve systematic confirmation of preliminary findings.” It’s an interesting thought, but one that would require a massive restructuring of funding, careers, and science itself. Budgets would need to be directed away from projects exploring new ideas and put toward confirming a smaller number of advances. It’s not likely to be feasible, but it makes for an interesting intellectual debate — are we better served by lots of new research of varying quality, or by investigating a smaller number of ideas really convincingly? Given the ongoing success of sound-science megajournals and the continuing growth in publication numbers, researchers and their institutions seem to have chosen the former over the latter.
 
Speaking of ideas, Nobel Prize winner Paul Nurse suggests that modern science is overwhelmed by quantities of data while lacking framing and a willingness to put forth new ideas and explanations. “It is as if speculation about what the data might mean and the discussion of ideas are not quite ‘proper’.” While data collection is obviously necessary, information is not the same thing as knowledge, and Nurse hits on a key failing of the big-data, AI-driven approaches now in vogue. Collecting data and making observations is only the first step. “For example, it would have been rather a pity if Darwin had stopped thinking after he had described the shapes and sizes of finch beaks, and had not gone on to propose the idea of evolution by natural selection.”
 
On an even larger scale, Ed Yong takes a step back from the past year and a half of covering the pandemic as a science writer to ask, what even counts as science writing anymore? While he notes that the pandemic has largely exposed the damage done to the scientific enterprise by the publish-or-perish mindset (“during the pandemic, scientists have flooded the literature with…half-baked and misleading research”), his key realization is to put the lie to the notion that science is some sort of neutral pursuit that stands apart from society. Science is, and has always been, political, “because it is an inextricably human enterprise. It belongs to society. It is interleaved with society. It is of society.” The pandemic can’t be understood solely through the lens of virology, but rather requires understanding history, sociology, psychology, anthropology, and more. Yong argues that science is not a cloistered subject, to be confined to journals, universities, and other opaque institutions. “Science is so much more than a library of publications, or the opinions of doctorate holders and professors.” As a community dedicated to the communication of scholarly research, we would be well served to consider Yong’s plea for expansive thinking.


Source: Nature, The Atlantic

The Book Business

6

Supply chains for many physical goods are a mess right now as container ships anchor in queues at ports and goods pile up in warehouses (along with whatever is going on with lorries in the UK). The supply chain crisis has not left book publishing untouched. As this piece from The New York Times notes, the problem is particularly acute for books printed overseas but applies to domestically printed titles as well, due to reduced printer capacity among other factors. The particularly generous return policies of book publishers are exacerbating the situation. With many other kinds of products, the seller buys the inventory from the producer and is then on the hook to sell it. Booksellers, by contrast, can typically return unsold inventory to the publisher. Booksellers, understandably concerned about shipping delays over the coming holiday season, are therefore stockpiling inventory to mitigate the risk of empty shelves. This is a sensible calculus for booksellers because they can return any unsold inventory (though of course they need somewhere to store the extra books and have to pay for them upfront, which may cause cash flow challenges for smaller independent operators). Stockpiling books, of course, only compounds the supply chain pressures while increasing the risk of write-offs from greater-than-usual returns after the holiday season. As inveterate book buyers ourselves, we recommend that holiday shoppers buy a few extra titles to help alleviate the problem. We’ll be doing our part.


Source: Matthew Hockenberry (@hockendougal) via Twitter, BBC, The New York Times, Quartz

Dealmaking

7

The pace of mergers and acquisitions has not abated heading into the fourth quarter of the year — if anything, it has accelerated. As we have discussed before, the underlying driver of this frenzy of activity is a combination of high stock valuations (in the case of publicly traded companies) and low interest rates. Balance sheets are bulging and money is cheap. At the same time, the pandemic has accelerated trends related to the digital delivery of information, including scholarship and (especially) instructional materials. Acquiring companies are looking to deploy their balance sheets (or sky-high stock prices) to build scale, readjust product portfolios and capabilities to new market realities, exploit new business models, and better meet new customer and user needs and expectations.
 
What will be the outcome of all this activity? Certainly there will be fewer companies in the market (though new start-ups seem to be sprouting like mushrooms after a spring rain), but is that a universally bad thing? Joe and Roger Schonfeld took up this question in a recent Scholarly Kitchen article, observing that consolidation can bring benefits as well as drawbacks. The operative word here is “can,” not “always will.” Any deal brings tradeoffs — for customers, for competitors, for vendors and partners, for shareholders, and for the market as a whole. 
 
While it is always dangerous to generalize, many of the deals described below might be characterized as mature companies buying new capabilities rather than consolidating existing market share. The exception is the announced acquisition of Ouriginal by Turnitin, which has raised monopoly concerns among regulators.
 
Blackboard and Anthology to merge, creating an EdTech powerhouse reportedly valued at $3 billion. 
 
Cambridge University Press & Assessment acquires CogBooks, a technology company that develops adaptive learning software.
 
MailChimp announced it will be acquired by Intuit. The email system we use to distribute The Brief will now be owned by the company that developed the accounting system we use to track our books. If only we charged a subscription fee for The Brief, we might look forward to the synergies…
 
TruNarrative acquired by LexisNexis Risk Solutions, a division of RELX. TruNarrative is billed as a “cloud-based orchestration platform that detects and prevents financial crime and fraud.”
 
Pearson acquires Faethm, a predictive analytics company focused on workforce skills. 
 
Wolters Kluwer announces it will sell its U.S. legal education business to Transom Capital Group for $88 million in cash.
 
McGraw Hill acquires Achieve3000, an EdTech software provider focused on K-12 education. 
 
Turnitin announces its intent to acquire Ouriginal, its most significant remaining rival in plagiarism detection software. The deal has raised flags among regulators concerned about consolidation of market power. 
 
J&J Editorial has been acquired by Wiley for an undisclosed price. The editorial services company provides services related to scientific and scholarly journals, including peer review coordination, operations, copyediting, system configuration, and so on. J&J supports 120 clients, including societies and publishers. The acquisition fits well within Wiley’s growing “services” business, alongside Atypon and Hindawi’s partner journal program — and indeed Wiley’s core society publishing program (itself greatly augmented by Wiley’s 2007 acquisition of Blackwell) — as Angela Cochran notes in a recent Scholarly Kitchen article on the deal.
 

People

8

Barry Bealer joins taxonomy software company Access Innovations as Chief Revenue Officer. 
 
Francis Collins will step down as the Director of the National Institutes of Health after 12 years at the helm of the agency. 
 
Jay Flynn is appointed Executive Vice President of Wiley Research. Flynn succeeds Judy Verses, whose departure came as a surprise to both industry observers and many at Wiley.  
 
Lauren Kane will rejoin BioOne as its next president and CEO. 
 
Robert (Jay) Malone has been appointed Executive Director of the Association of College and Research Libraries (ACRL), a division of the American Library Association (ALA).
 
Renew Consultants closes up shop. Simon Inger exits a long and distinguished career as an industry consultant to focus on his roles at Cadmore Media and Society Street. Tracy Gardner moves to TBI Communications. Sam Burrell joins 67 Bricks, and Julie Neason will continue in a consulting role under the brand Renew Consulting for Societies.
 
Elaine Stott joins Canadian Science Publishing as CEO. 
 
Franck Vasquez, former MDPI CEO, joins Frontiers as head of partnerships. 
 
Vicky Williams has been appointed CEO of Emerald Group. She was previously CEO of Emerald Publishing, a division of Emerald Group. She succeeds Richard Bevan, who is retiring from the role.  
 
Karin Wulf joins Brown University as the Beatrice and Julio Mario Santo Domingo Director and Librarian of the John Carter Brown Library.
 

***

Michael Holdsworth, a publisher who worked at Cambridge University Press for over two decades, has passed away.
 

Briefly Noted

9

In notable style guide news, the Modern Language Association releases an online edition of its venerable MLA Handbook, called MLA Handbook Plus. The digital resource will be sold on a subscription basis to institutions. Meanwhile, the Chicago Manual of Style is now integrated into PerfectIt, a Microsoft Word-based proofreading tool. 

Cabell’s Predatory Reports Database now includes over 15,000 journals. This raises the question: are we nearing a point where predatory journals outnumber reputable journals?

Education publishers Macmillan Learning, Cengage Group, Elsevier, McGraw Hill, and Pearson obtained a preliminary injunction against 60 websites selling pirated ebooks.

In other lawsuit news, Pearson is suing former partner Chegg over the inclusion of Pearson end-of-chapter questions in Chegg study guides. “It could undermine not only Chegg, but you have this entire industry of study guides, and the study guides are all based on existing texts and follow the general selection and arrangement of those texts.”

Richard Charkin takes a heretical view of academic publishing. “I wonder whether the complex negotiations around “publish and read” models, institutional repositories, hybrid journals, multiple sources of article processing charges for a single paper succeed in reducing costs.”

RELX remains the world’s largest publisher by revenue.

Eight major publishers collaborate on guidelines related to image manipulation.

NISO has received a grant of $125,000 from The Andrew W. Mellon Foundation to develop a “consensus framework for implementing controlled digital lending (CDL) of book content by libraries.”

Center for Open Science is the recipient of an $18 million grant from Flu Lab. “The focus of the investment will be to extend COS’s services to catalyze culture change in biomedical research.”

The University of North Carolina plans a $5 million cut to its library budget.

An additional 92 million citations have been added to COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations, raising the total number of citations in the database to over 1.18 billion. 

Ham and cheese is the cornerstone of civilization and hardback books.

 

***
In general, I think audiences are a lot smarter than people think. So it’s not “know your audience”, it’s “respect your audience, and really know your content.” — Edward Tufte