Job Opportunity: Executive Editor, RUP’s Journal of Experimental Medicine
Our friends at Rockefeller University Press are seeking an Executive Editor for Journal of Experimental Medicine. The Executive Editor oversees the editorial direction and operations of JEM, maintaining its legacy of excellence in biomedical research while driving innovation in scientific publishing.
This position offers the benefits of a world-class academic environment at Rockefeller University, including comprehensive health coverage, generous retirement plans, tuition assistance, on-campus childcare, and robust support for professional development and work-life balance.
See the job posting and apply >
Upcoming Webinar: “Data Revolution – Unlocking Value Across the Publishing Landscape”
C&E’s Colleen Scollans will be speaking on an upcoming webinar (May 17, 2025) hosted by Silverchair. Topics will include connecting customer and audience data, how integrating data flows can enhance decision-making, building data competency, and uncovering new revenue streams. Colleen is joined by an all-star roster of panelists.
Register to join >
Value Transfer
The single biggest story in scholarly and professional publishing of the past decade is the transfer of value from society journals to the holdings of large, mostly commercial publishers. Billions of dollars of value have effectively been shifted from societies to commercial entities. Remarkably, there has been little coverage of this seismic event.
This value transfer has been effected by more recent incarnations of the Big Deal. When a library buys a “Big Deal” (or a transformative agreement) package from a large publisher, it is increasingly buying that publisher’s package, not subscribing to the individual journals in that package. If a society leaves its current publishing partner and takes its journals out of the package, the library doesn’t see any savings from those titles leaving, and the society is now publishing its journals without any institutional subscribers. This makes it very difficult for a society to leave one of the largest publishers because, with a new partner, it will essentially be starting from scratch. Libraries simply don’t have extra money available to re-subscribe to these journals, or to pay another publisher more because the journals have moved. (We described this practice in more detail in “Navigating the Big Deal: A Guide for Societies.”)
This state of affairs is relatively recent and is not inherent to publisher packaging – that is, it wasn’t always this way. The Big Deal was initially formed as a collection of subscribed journals, with a steep discount applied to “top up” the package by adding the remaining journals to which the library did not subscribe. Indeed, it used to be the case that signing on with a large commercial publisher preserved subscription revenues because it placed a journal subscription inside a larger package and therefore made it harder to cancel. But over the past decade, most of the large commercial publishers started changing the nature of their packages from an aggregation of specific, subscribed titles (with discounts for “top-up” collections) to a subscription to the entire publisher list (with no underlying subscriptions to specific titles).
It is with this background that Pleiades, an organization that manages around 200 science journals, filed a legal petition that aimed to challenge the current structure of the Big Deal. Springer Nature has served as Pleiades’ publisher for the past 30 years. According to the petition, Springer Nature informed Pleiades that they would not be renewing their agreement, which terminates at the end of 2026. Pleiades and Springer Nature will apparently be heading to arbitration (presumably the route to dispute settlement in the governing contract). While Pleiades asked for a preliminary injunction halting Springer Nature from selling Pleiades’ journals as part of its package in deals extending beyond 2026, that request was denied by the court.
Because Pleiades content makes up a claimed 10% of Springer Nature’s total content, the organization suggested that Springer Nature engaged in a “bait-and-switch” when selling content that will no longer be included.
It is a punchy petition making for a remarkably fun read. But the salient passages, from our point of view, center around the mechanics of value transfer:
Springer Nature is planning to employ pricing in its contracts that artificially shifts the value of Pleiades’ journals to Springer Nature’s products and offerings, creating the perception that Pleiades’ journals are less valuable, while failing to take steps required of Springer Nature to promote and preserve the value of Pleiades’ journals.
The court found no evidence of Springer Nature changing contracts in the manner alleged. Whether or not Pleiades is ultimately successful in arbitration remains to be seen. These package economics are not unique to Springer Nature – nearly all large professional publishers have them. However, there are important nuances between publishers and we have long advised clients to be cautious about participation in certain packages (in some cases negotiating special arrangements to avoid or reduce package exposure). The outcome of this dispute may have wider implications for journal owners – many of whom are not-for-profit organizations.
[Editor’s note: This piece has been edited to reflect that the judge overseeing this case has denied the request for a preliminary injunction, finding that Pleiades did not prove a likelihood of success on the merits of its case and noting that Pleiades presented no evidence that Springer Nature was changing its contracts in the manner alleged in the petition.]
Downstream Impacts
Anyone reading this newsletter is undoubtedly aware of the unprecedented actions of the current US administration related to science and academia. Dramatic reductions in headcounts and wholesale dismantling of federal agencies continue, as does the weaponization of federal research grants.
The impact on scholarly and professional publishers and societies is downstream of these actions but not far downstream. The effect on research publishing is sure to be felt and likely along multiple vectors, including:
US Research Output. Turmoil, staff reductions, and funding cuts at federal agencies are likely to result in fewer research grants being issued and therefore fewer research papers being submitted to publishers. University policies related to funding and immigration that decrease the number of PhD students entering lab-dependent disciplines will also likely dampen research output in coming years.
Indirect Cuts. Although temporarily blocked by a court in Massachusetts, the administration is seeking to dramatically reduce grant payments for indirect costs – the portion of federal grants that goes to a university for administration, infrastructure, compliance monitoring, and other support. “Other support” in many cases includes contributions to the universities’ library budgets. Reductions in indirect funding would have a significant impact on already constrained library budgets.
European and Japanese Research Outputs. As the administration reduces the US’s defense spending commitments in Europe and Japan, European and Japanese governments will seek to increase their own defense spending. Such funding must come from somewhere and other spending, such as that for basic research, may be a lower priority.
Scholarly and professional societies are also likely to see reductions in meeting attendance. There are likely to be far fewer federal employees traveling to meetings, and federally funded researchers may also be short of funds for travel. Researchers outside the US are increasingly reluctant to travel to US-based meetings, either due to economic solidarity or anxiety related to border crossings.
With policies changing seemingly by the minute, it is difficult to plan for any specific scenario, but for scholarly publishers, and particularly society publishers, rough seas are ahead. This makes it critically important that societies embrace revenue diversification – employing “all-weather” publishing strategies (a topic for a forthcoming blog post), maximizing industry revenue, growing revenues from outside the US, and developing new products and services.
Zero-Click Journeys
Since time immemorial (OK, for the last 20 years), the deal between publishers and search engines has been the same: publishers allow search engines to index content in exchange for search engines linking out to that content. Indexing for traffic. There are ebbs and flows to this basic deal as publishers adjust paywalls or emphasize advertising and as search engines (and we are here mostly talking about Google) adjust their algorithms and (again, Google) feature sponsored search results. The rise of artificial intelligence (AI) is, however, upending that deal.
As search engines adapt to AI and increasingly display AI-generated summaries, there is a concurrent reduction in traffic sent elsewhere. Instead of going to a publisher’s website to read about the topic of your search, the search engine provides an overview, removing your need to go anywhere else. Such practices impact businesses of all kinds but few so much as publishing, where the content being linked to is the business. (For instance, one might use an AI summary to spend less time researching roofing materials on various websites, but ultimately a roofer is hired to come out to the house – the roof installation, not the content about roofing, is the business.)
There is perhaps no aspect of publishing more prone to disintermediation from AI summaries than reference services. From the Times Higher Education:
Speaking at the Academic and Professional Publishing Conference, part of the London Book Fair, Oxford University Press (OUP) product strategy director John Campbell explained how the launch of Google’s AI-powered search summaries had led to a 19 percent drop in click-through to academic reference services.
Discussing what he called the advent of “zero-click journeys” for academic publishers, Campbell explained how half of all Google keyword searches likely to surface information within OUP’s own platform, Oxford Academic, now appeared with an AI-generated description next to them.
Given the relative recency of AI summaries and their questionable reliability, 19% is a large number. Campbell did not comment on the extent to which search traffic to primary literature (like journal articles) may be affected by AI summaries.
Traffic to journal websites is likely to be more resilient. Summaries of papers (aka “abstracts”) have existed on discovery services (e.g., PubMed and Scopus) for decades. In other words, the loss of traffic due to “summaries” is already, at least in part, baked in. That said, Google Scholar, the most popular academic search service, does not include abstracts or AI summaries in its search results. If this were to change, the impact could be non-trivial.
The growth of zero-click journeys may prompt a reassessment of the relationship between at least some publishers and search engine companies. If the search engine is not sending traffic, why should the publisher permit indexing? At least for the primary literature, specialized search and discovery tools are widely available and overall traffic is not directly tied to revenues except for a handful of clinical titles with strong advertising programs. The only traffic that really matters to journal publishers is that of authenticated institutional users – professionals who likely have access to (and know how to use) specialized discovery services. This is all to say that understanding who is visiting your site is of increasing importance. A question organizations will need to answer is what percentage of their traffic loss consists of less-qualified visitors versus a valuable audience.
Zero-click journeys prompt the question: does licensing become the new indexing – with more direct economics? If the unit of exchange is no longer “user traffic,” will publishers now trade content directly for dollars? It is possible, although the recent revelation that Meta chose to train its AI on millions of pirated books from LibGen rather than having to wait an interminable four weeks for the delivery of licensed content is not cause for optimism regarding this new economic model.
Meta is unfortunately (and unsurprisingly) not alone among tech companies in its disregard for (other people’s) intellectual property. The New York Times this month notched a small victory in their much-watched copyright infringement lawsuit against OpenAI and Microsoft related to AI training. Speaking at the DealBook conference in December 2024, OpenAI CEO Sam Altman said, in reference to the lawsuit, “We need to find new economic models where creators will have new revenue streams…” Indeed.
Reviewer Zero
There is growing consternation among researchers and journal editors regarding the use of large language models (LLMs) in peer review. It is as inevitable as it is problematic that reviewers will turn to general purpose LLMs like ChatGPT to help write their reviews, as noted in Nature. Some uses might be relatively benign, such as using an LLM to interrogate methods or to fine-tune (or translate) language. Others, such as the reviewer feeding the research paper under review into an insecure LLM or using the LLM to substantially write the review itself, are more concerning. These latter examples threaten the confidentiality of research entrusted by authors to the journal editors and undermine the very notion of “peer” review. As one researcher quoted in the Nature article put it, “I submit a manuscript for review in the hope of getting comments from my peers. If this assumption is not met, the entire social contract of peer review is gone.”
At the same time, appropriately applied, AI has the potential to substantially improve the peer review process for all parties. One potential solution is for publishers (or their vendors) to develop “reviewer zero” technologies: those technologies that analyze a paper in advance of a human reviewer and then prompt the reviewer with a series of structured review questions and findings.
There are several such “preflight” technologies emerging. “Alchemist Review,” developed by Hum, applies AI technology that is purportedly capable of “rapidly extracting and summarizing key insights like central claims, methodological soundness, citation integrity, & research originality.” Alchemist Review also includes an integration with Grounded AI’s Veracity, an AI-based technology that claims to help “ensure citations are contextually accurate, authoritative, and relevant.” Other examples of AI-based preflight technologies are offered by Enago, World Brain Scholar, and Paper-Wizard – with seemingly more arriving on the scene every day.
We do not yet have enough experience with these technologies to recommend one as compared with another. However, we do believe that providing reviewers with a set of publisher-provisioned, secure AI tools to assist in appropriate and structured ways with the peer review process is preferable to hoping that reviewers will use their choice of off-the-shelf, insecure AI in an appropriate way (or to bury one’s head in the sand and pretend that peer reviewers will avoid AI entirely).
openRxiv
In a move that bioRxiv founder John Inglis compares to moving out of your parents’ house and into your first apartment, the preprint server (along with companion medical preprint service medRxiv) is leaving host Cold Spring Harbor Laboratory (CSHL) and founding an independent nonprofit called, somewhat predictably, openRxiv.
The aim of the move is to create an organizational infrastructure more suited to long-term sustainability. With no incoming revenue from authors or readers, preprint services have long been in something of a precarious position, and bio/medRxiv have so far largely depended on funding from the Chan Zuckerberg Initiative (CZI). Indeed, CZI has funded a substantial endowment for openRxiv – the initial investment is reported to be $16 million, more than five times the typical annual budget for the sites. While that kind of backing will pay for a very nice futon and well-stocked beer fridge, it will not fund a growing archive indefinitely. The new structure better positions the organization for financial contributions from other stakeholders, as the association with a research institute (CSHL) made soliciting that support more complex. The connection to a scholarly press (Cold Spring Harbor Laboratory Press) may also perhaps have limited the organization’s scope to a focus on the publication process. The founding of openRxiv is meant to solve both limitations.
openRxiv is not the only nonprofit founded in the last month, as the Directory of Open Access Journals (DOAJ) has announced that it is moving to the new DOAJ Foundation. Like openRxiv, the move is being made for the purposes of sustainability.
Briefly Noted
In another US government blow to the dissemination of knowledge, the Institute of Museum and Library Services (IMLS) is being shuttered. Karin Wulf, in a must-read piece on the role of the humanities as the “canary in the coal mine” for the fate of research, points out that museums in the US support some 726,000 jobs and contribute more than $50 billion to the US economy, as well as $12 billion in tax revenue.
The bad news for the library community continues with the stunning announcement that the Special Libraries Association (SLA) will be shutting down after 116 years of service. This June’s SLA Meeting will mark the final activity for the organization.
While perhaps not as significant as the closure of IMLS in the scheme of things, the news that the Wilson Center will be shuttered is particularly notable here at C&E as Wilson is a client. For those unfamiliar with the organization, it is a nonpartisan repository of expertise on foreign relations that advised law and policy makers as well as organizations outside government. In addition to its deeply knowledgeable staff, the Center hosts fellows, experts who come in for a year or two to write a book. The Center also houses a substantial, historically significant library including the personal library of George Kennan and materials from the Woodrow Wilson presidency.
For some reason it is impossible for tech journalists to write about scholarly publishing without describing it, erroneously, as “broken” before waxing on about the latest technological “fix.” Here is this month’s example from Wired, although with a slight twist. Rather than write about a new start-up that is going to “disrupt” publishing (and thereby somehow fix it) the author of this piece writes about … arXiv. It is unclear how arXiv has succeeded if publishing is still broken, but succeed it has, despite the efforts of unnamed vested interests (“The biggest mystery is not why arXiv succeeded. Rather, it’s how it wasn’t killed by vested interests intent on protecting traditional academic publishing.”) It is unclear what the point of this article is, but there are some nice photos of Paul Ginsparg looking artfully disheveled.
For those doubting the imminent threats facing nonprofit research societies that appear too “woke” to the current US leadership, take note that the American Chemical Society is facing a lawsuit for supporting a long-running (30-year) program providing small scholarships to groups underrepresented in the chemical sciences. Whether the case succeeds or not, societies are increasingly spending time and money fending off such actions.
AI bot traffic is out of control and threatens even openly accessible content by driving up the costs of hosting. Eric Hellman describes the situation and even suggests a dire future where registration and paywall barriers may be needed to keep the horde from overwhelming publisher systems. Wikipedia similarly reports exponential growth in traffic (and costs) caused by AI bots scraping the site.
eLife has released a two-year update on its shift from being a journal to a preprint review service. In the first year, submissions fell by 22%, but over the second year they have largely stabilized, declining only a further 4%. The percentage of articles sent out for review has also stabilized, with a two-year average of 27.3%. Of course, the real test begins this summer, when eLife officially loses its Impact Factor and truly sheds one of the key signifiers of a prestige journal.
Two journals, Critical Care Medicine and Biology Open, released the results of experiments in offering payment to peer reviewers. For Critical Care Medicine, a trial was performed where half of the invited reviewers were offered a $250 incentive. The money provided a slight (5%) increase in acceptance of review assignments, and a one-day improvement to the length of time it took for the review to be submitted. Biology Open did a small pilot, which it hopes to expand this year, in which a group of reviewers were essentially put on retainer and paid $776 to review three papers in a quarter. The payment came with a requirement for reviews to be performed within 4 days, which resulted in an average turnaround time of 4.6 days (faster than the 38 days for unpaid reviewers, but still later than required). It is worth pointing out that the costs here would greatly increase article processing charges (APCs), as published papers would have to cover the costs of reviewers for every paper that was rejected from a journal. For example, a journal with a 33% acceptance rate that employs three reviewers per paper would in fact be paying out ~$2,250 in review fees for each paper published. The idea of someone being paid on retainer to peer review strikes us at The Brief as being similar to the process of hiring professional editors or employing an editorial board to peer review submissions, something already in place at many journals.
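The back-of-the-envelope figure above can be checked with a short sketch (the function name and structure are ours, for illustration only; the $250 fee and 33% acceptance rate are the figures reported for the Critical Care Medicine trial):

```python
def review_cost_per_published_paper(acceptance_rate, reviewers_per_paper, fee_per_review):
    """Estimate reviewer-payment cost borne by each published paper.

    With a given acceptance rate, each published paper must also carry
    the review costs of the rejected submissions alongside it.
    """
    # Average number of submissions reviewed per paper published
    submissions_per_publication = 1 / acceptance_rate
    return submissions_per_publication * reviewers_per_paper * fee_per_review

# 33% acceptance, three reviewers per paper, $250 per review
cost = review_cost_per_published_paper(1 / 3, 3, 250)
print(round(cost))  # → 2250
```

At higher rejection rates the burden grows quickly: a 10% acceptance rate under the same assumptions would load $7,500 of review fees onto each published paper’s APC.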
As Cambridge University Press has become more and more open access, Mandy Hill, Managing Director, suggests that “Without more radical change, models will become increasingly unsustainable…” Cambridge hopes to “identify bold and workable solutions” to this problem through a program of workshops, interviews, and a survey (Cambridge is soliciting participation).
We often hear the anecdote that MDPI creates solutions (e.g., good author experience) by throwing people at the problem, so a 13% increase in employees in 2024 should not be surprising (6,650 compared with 5,900 previously). Personnel are the primary focus of MDPI’s 2024 Annual Report, charting a year in which submissions fell by 9% and publications dropped by more than 15%. One hopes a lot of those new hires are dedicated to resolving the research integrity concerns that likely account for those declines.
MDPI was not the only publisher issuing an annual report last month. Springer Nature issued theirs. Financial results for 2024 were also released for Taylor & Francis, and Wolters Kluwer, as well as third-quarter results from Wiley.
China continues its efforts to crack down on research integrity issues, this time with the Supreme Court recommending harsher penalties for paper mills.
Although long-term sustainable models remain elusive, some good news came this month for open access book programs, with Taylor & Francis reporting progress on its Pledge to Open model and MIT Press reporting further success with its Direct to Open program.
Is the “replication crisis” really a crisis? A new study suggests that replicability studies are often flawed, and that questionable research practices are less abundant than is reported.
With the caveat that they don’t have access to data from LinkedIn or Threads, Altmetric has declared that, “it looks like Bluesky is becoming the leading platform for posts linked to new research.” While Twitter/X still has higher traffic levels, much of that is due to reposting older research (perhaps unsurprisingly a lot of research about “vaccine scepticism, MMR, autism, ivermectin, etc.”). Bluesky users are described as more “story starters” rather than “propagators,” although virality is stronger on Twitter/X.
JSTOR announced a new offering this month, their Digital Stewardship Services program, “designed to help libraries and archives describe, preserve, manage, and share their unique collections at scale.”
The new draft Transfer Code of Practice (NISO RP-24-202X) has been released and is open for comment through May 2.
If you want 32 additional citations to your paper, apparently you need to follow the rule of three, and include a tripartite phrase (e.g., “market size, trade, and productivity”) in your paper’s title.
Is AI revolutionary, life-changing technology or just sort of “mid” productivity technology for mid tasks? It perhaps depends on who is using it.
Perhaps it says something about the reach of Academia.edu, or maybe it’s just the sheer quantity of emails they constantly barrage us with, but recognition (painfully accurate recognition) from McSweeney’s is, we guess, a badge of honor.
***
Is it really intelligence that is becoming ubiquitous and practically free? What we consider to be the pinnacle of human intelligence is the ability to see what everyone else sees, to learn what everyone else has learned, and yet to see something that no one else was able to see. Or to see something completely unfamiliar and make sense of it, without prior knowledge. In a bold stroke, to remake the world. The creators of AI have displayed that kind of intelligence. Their creations, not so much. – Tim O’Reilly