Capped

Issue 79 · July 2025

Join the conversation on Twitter @Briefer_Yet and LinkedIn.

Did a colleague forward The Brief to you? Sign up via the button below to get your very own copy.

Congratulations to PRIO and OUP

The Peace Research Institute Oslo (PRIO) and Oxford University Press (OUP) have entered into a new publishing agreement for the Institute’s two flagship journals, Journal of Peace Research and Security Dialogue. Clarke & Esposito had the privilege of working with PRIO to select OUP and broker this agreement, and we congratulate both parties on their new partnership. 

Elevating Your Author Experience, One Message at a Time

Too often, author communications are treated as administrative operations – functional but not strategic. Yet every message, from submission invitations to post-publication follow-up, is a brand moment that shapes the author experience. When these touchpoints are clear and purposeful, the results are meaningful: smoother processes, stronger relationships, and higher author satisfaction. Our newest blog post shares four pillars for transforming author communications into strategic assets that build a more resilient publishing pipeline.

Capped

When Plan S was first announced, it included a requirement that open access (OA) publication fees would be “standardised and capped.” Publishers argued that price caps would reduce diversity in the market (favoring larger publishers with lower per-article costs) and lead to cost-cutting (with attendant reductions in quality and threats to sustainability). These arguments prevailed, and no concrete plan for price caps was ever released by cOAlition S; price transparency requirements were (poorly) instituted instead. Price caps have, however, been given new life by the US National Institutes of Health (NIH), which this month announced “plans to implement a new policy that will cap how much publishers can charge NIH-supported scientists to make their research findings publicly accessible.”

The NIH subsequently released a “Request for Information on Maximizing Research Funds by Limiting Allowable Publishing Costs” with responses due by September 15. The NIH’s five proposed options are:

  • Disallowing all publication costs
  • Limiting allowable publication costs to $2,000 per publication
  • Limiting allowable publication costs to $3,000 per publication when peer reviewers are compensated at $50 per hour
  • Limiting the total amount allowable for publication costs over the life of a grant to 0.8% of the award’s direct costs, or $20,000, whichever is greater
  • Limiting the total amount allowable for publication costs over the life of a grant to 0.8% of the award’s direct costs, or $20,000, whichever is greater, and setting a limit of $6,000 per publication up to the $20,000 total limit.

It is worth noting that none of the proposed options reaches EMBO’s (European Molecular Biology Organization) calculated cost per paper of around $6,400, meaning that many journals would be required to publish NIH-funded papers at a financial loss. Nor do the NIH’s proposed payments for peer reviewers cover the costs of peer review at the suggested rate, as payment is offered only for reviewers of accepted articles. If a journal rejects 50% of the papers it reviews, then the NIH’s suggested $1,000 reviewer allowance (the difference between the $2,000 and $3,000 caps) will cover only half the cost of paying peer reviewers (and will create a perverse incentive for journals to accept more papers).
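To make that arithmetic concrete, here is a back-of-envelope sketch in Python. The reviewer count and hours per review are hypothetical illustrative values (not from the NIH proposal), and the $1,000 allowance reflects our reading of the gap between the $2,000 and $3,000 caps:

```python
# Back-of-envelope sketch of the reviewer-payment shortfall described above.
# Assumptions (illustrative, not from the NIH proposal): 2 reviewers per
# manuscript, 10 hours each, and a $1,000 per-accepted-paper allowance.

HOURLY_RATE = 50        # USD/hour, the NIH's suggested reviewer rate
ALLOWANCE = 1_000       # USD, paid only for reviewers of *accepted* papers
REVIEWERS = 2           # hypothetical reviewers per manuscript
HOURS_EACH = 10         # hypothetical hours per reviewer

acceptance_rate = 0.5   # the journal accepts half of the papers it reviews

cost_per_reviewed_paper = REVIEWERS * HOURS_EACH * HOURLY_RATE       # $1,000
# Each accepted paper carries the review cost of 1/acceptance_rate manuscripts,
# since rejected papers are reviewed too but attract no NIH payment.
cost_per_accepted_paper = cost_per_reviewed_paper / acceptance_rate  # $2,000

coverage = ALLOWANCE / cost_per_accepted_paper
print(f"The allowance covers {coverage:.0%} of reviewer costs")      # 50%
```

Under these assumptions, the more selective the journal, the lower the coverage; accepting more papers is the only lever that raises it, which is exactly the perverse incentive noted above.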

There are a lot of reasons why price caps on article processing charges (APCs) are a bad idea. Many are articulated in this article by Christopher Marcum, one of the policymakers behind the drafting and implementation of the Nelson Memo. As Marcum notes, price caps often become price floors, and every APC set below the cap is likely to rise to what will be perceived as a new industry standard. Marcum also points out that static caps do not reflect ongoing changes in the economy and will quickly become obsolete, that the NIH is acting unilaterally here (rather than in concert with other federal agencies, creating confusion and an increased compliance burden), and perhaps most saliently, that the policy is simply not evidence-based.

One could easily see price caps for NIH-funded researchers creating a class system in the research community. NIH-funded researchers at better-funded institutions, able to supplement their grants to pay APCs in journals priced above the cap, would have more freedom to pursue the optimal outlets for their work. Those at less well-funded institutions would have to settle for a more limited choice of low-cost, low-service journals (and may subsequently see less career advancement and reward). NIH-funded labs and projects could become second-choice destinations for graduate students and postdocs looking to build publication records that support future employment.

In the end, even if implemented, price caps may simply fail to make any difference in what journals charge and what authors pay for publication. A Plan S parallel comes to mind: when cOAlition S ended financial support for APCs in hybrid journals, it did not have the desired effect on researchers; in fact, publication in hybrid journals increased. This is due to the spread of transformative agreements, in which libraries strike deals with publishers that include both subscription access to journals and payment of APCs for campus researchers. Since authors are no longer using research funds to pay APCs directly, cOAlition S has no control over that spending. Here we could see the same thing – increased uptake of transformative agreements by US institutions that would allow NIH-funded researchers to escape restrictive caps, as the money being spent would not come directly from the NIH and would be outside its control.

As with most poorly thought-out publication policies from funders, price caps will ultimately favor the largest commercial publishers, as they are the best equipped to negotiate and implement transformative agreements, leading to further market consolidation beyond that already caused by Plan S. The policy may also be deeply damaging to high-touch, highly selective, fully OA biomedical journals. Subscription and hybrid journals can, at least for now, choose the Green OA route to compliance, whereas most fully OA journals must charge an APC for compliance and now may be priced out of the market for NIH papers. This also suggests the policy could further accelerate the market shift toward the high-volume publishing approaches of megajournals.

It’s not clear whether NIH-funded papers amount to sufficient volume to move the market – there are around 125,000 such publications in a market of 5.6 million, or roughly 2% (and with proposed budget cuts, the proportion is likely to drop in coming years). Still, a $2,000 APC cap would greatly reduce author choice for NIH-funded researchers. As the editors of The New England Journal of Medicine and the Journal of the American Medical Association eloquently state in this Washington Post editorial, “The broader scientific ecosystem depends on independent journals competing to publish the best research, which reinforces rigor and creates essential checks and balances.” Setting conditions that seemingly will wipe out large swathes of those independent journals poses a significant threat to academic freedom and scientific progress.

Alternative Routes

One month in, clarity around the implementation and impacts of the NIH’s Nelson Memo public access policy remains elusive. The Authors Alliance has updated its “What Authors Need to Know” guidance, including answers to whether papers accepted after July 1 but stemming from grants that had expired by then fall under the new requirements (no), whether the American Chemical Society’s ADC (article development charge) is an allowable expense (maybe?), and what the NIH’s threat of APC caps means (who knows?).

To get a sense of how publishers are approaching the NIH’s public access policy (and presumably every other US federal agency public access policy as of 2026), we scoured publishers’ journal websites to better understand their approaches to author compliance. Our current list is as follows. Note that the Gold list includes publishers who have explicitly stated a Gold OA policy, those whose websites do not acknowledge the new policy but retain a required 12-month embargo for the Green OA route, and fully OA publishers who will charge an APC regardless of the author’s funding source; links to policies are included where relevant:

Gold

Green

Mixed

Notably, the five largest publishers by volume in the market all require the Gold OA route and payment of an APC for compliance. Overall, the listed publishers that require Gold OA published around 42% of the total journal literature in 2024 and, more importantly, 73% of papers that listed NIH funding (according to Dimensions, an inter-linked research information system provided by Digital Science, www.dimensions.ai). Applying the Office of Science and Technology Policy’s suggested average APC of $4,000 for federally funded researchers, the NIH’s “free” policy, barring the implementation of price caps, carries an annual cost of around $365 million.
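The $365 million estimate follows directly from the figures above; a quick back-of-envelope check (assuming the ~73% Gold share applies uniformly to the ~125,000 annual NIH-funded papers):

```python
# Reproducing the annual-cost estimate from the figures cited above.
nih_funded_papers = 125_000   # annual NIH-acknowledging papers (per Dimensions)
gold_share = 0.73             # share published by Gold-OA-requiring publishers
avg_apc = 4_000               # USD, OSTP's suggested average APC

annual_cost = nih_funded_papers * gold_share * avg_apc
print(f"${annual_cost:,.0f} per year")   # $365,000,000
```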

Google Zero

Most of the top news websites saw steep year-on-year declines in traffic in May, fueling more worries about the effects that AI is having on readership and the internet. The idea of “Google Zero” (a phrase coined in 2024 by The Verge) – essentially the traffic apocalypse that websites are facing as search engines deliver AI-generated answers instead of links – is gaining ground. News organizations, product recommendation services, blogs, and other websites are all facing reduced traffic and altering their strategies.

But what does this mean for scholarly publishers and the audiences we serve?

Journals rely on a different set of discovery tools than do other media organizations, and Google Scholar and PubMed are not offering AI-driven answers to search queries. At least not yet. Elsevier’s Scopus AI and ScienceDirect AI do offer this, although with far more prominent sourcing than is found in general-purpose AI chatbots and, in the case of ScienceDirect AI, with royalties shared with content owners.

This difference in discovery shapes monetization. Journal revenue typically does not depend on broad public traffic. Instead, it is anchored in targeted usage from subscribing institutions and, increasingly, author-paid article processing charges (APCs or payments related to transformative agreements) in OA models. However, usage from subscribing institutions is a critical metric in subscription renewal decisions, and maintaining that usage in a world of AI summaries will require new approaches. 

Connecting institutional resources and credentials with AI search and discovery tools using RAG (retrieval-augmented generation) is one promising approach. Under this scenario, a user performs a query with an AI tool that then queries databases for which the user has institutional access credentials. This would preserve the value of institutional subscriptions and their usage (assuming each AI query is captured on COUNTER or other usage logs). Wiley’s recent deal with Perplexity points in this direction. 
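For readers who want to picture the mechanics, here is a minimal sketch of credential-aware retrieval-augmented generation. Every name in it (InstitutionalIndex, log_counter_event, and so on) is hypothetical; a production system would sit behind a publisher API and a federated-authentication layer rather than this toy in-memory index:

```python
# Toy sketch of credential-gated RAG: retrieval happens only against content
# the user's institution subscribes to, and each retrieval is logged so that
# AI-mediated usage still shows up in COUNTER-style reports.
from dataclasses import dataclass

@dataclass
class Document:
    doi: str
    title: str
    full_text: str

class InstitutionalIndex:
    """Stand-in for a publisher database the user's institution subscribes to."""
    def __init__(self, docs: list[Document]):
        self.docs = docs

    def search(self, query: str, credentials: str | None) -> list[Document]:
        if not credentials:          # no institutional credentials, no retrieval
            return []
        terms = query.lower().split()
        return [d for d in self.docs if any(t in d.full_text.lower() for t in terms)]

def log_counter_event(doi: str, institution: str) -> None:
    # Recording each retrieval is what keeps AI-mediated usage visible in the
    # usage reports that inform subscription-renewal decisions.
    print(f"COUNTER hit: {doi} via {institution}")

def answer(query: str, index: InstitutionalIndex,
           credentials: str | None, institution: str) -> str:
    docs = index.search(query, credentials)
    for d in docs:
        log_counter_event(d.doi, institution)
    context = "\n\n".join(d.full_text for d in docs)
    # A real system would pass `context` to an LLM here; we just report sources.
    return f"Answer grounded in {len(docs)} subscribed document(s)."

# Usage: an authenticated user's query retrieves from subscribed content only.
index = InstitutionalIndex([Document("10.1000/xyz", "Example",
                                     "CRISPR screening methods")])
print(answer("CRISPR methods", index, credentials="shib-token",
             institution="Example U"))
```

The design point is that authentication happens before retrieval, not after: the AI tool never sees content the user isn’t entitled to, and the subscription keeps generating measurable usage.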

It is worth noting that some journals – mostly clinical medicine – also derive meaningful revenue from advertising and sponsorships. These often come from pharmaceutical companies, medical device manufacturers, and other medical organizations. Such sponsors are drawn not by scale, but by the value of reaching a specialized, influential, and often decision-making audience within the clinical and professional domains (and doing so in the context of a high-reputation journal).

Clinicians – due to schedules, varying practice settings, and differing use cases (keeping knowledge up-to-date or seeking specific insight to inform diagnostic or treatment decisions) – often access journal content sporadically, via searches, and outside institutional systems. For clinically focused journals, this type of traffic is commercially important, supporting advertising and sponsorship models aimed at high-value professional audiences. 

The real question for these journals isn’t necessarily how much traffic they get, but rather where that traffic comes from and whether they’re monetizing it. Most journals don’t have the reporting capabilities to differentiate between clinicians, researchers, and members of the public, which makes it difficult to distinguish high-value audiences or understand the implications of traffic loss. Diminishing traffic due to AI summarization will put more pressure on proving the remaining audience is one that advertisers and sponsors wish to reach. 

Briefly Noted

Our long, arsenic-based nightmare is finally over, as Science has at last retracted the infamous and widely derided 2010 “arsenic life” paper. What’s interesting here is less the retraction itself than the questions it raises about what should be retracted. There are plenty of articles in the literature that we know are wrong (Linus Pauling’s speculation that DNA is a triple- rather than a double-helix immediately springs to mind), yet no one is clamoring for the retraction of outdated articles, because they provide a valuable history of scientific thought at the time. Jeremy Berg, a former Editor-in-Chief of Science, explains that he didn’t retract the paper even though it was clear that the evidence presented was “much weaker than was needed to fully support the claims,” because retracting such a paper would set an unclear precedent that might be applied to other papers where the data seemed incomplete. Holden Thorp, current Editor-in-Chief of Science, justifies the retraction by noting that the journal’s criteria for retraction have expanded over the years and now include situations where an editor determines that “a paper’s reported experiments do not support its key conclusions.” Should papers that are incorrect (or later proven to be incorrect) be retracted? If so, that’s going to create an awful lot of work for journals, which would need to comb back through 350 years of papers to find outdated hypotheses. If not, is there a better form of annotation that could be applied in special cases instead of retraction, which still carries something of a stigma of misconduct even in cases where honest mistakes were made?

Some valuable data was released this month with a particular focus on China’s growing dominance of the world of research. Clarivate’s G20 research and innovation scorecard 2025 notes that “China leads the G20 in research output with nearly 900,000 papers in 2024 – triple its 2015 volume,” with over half of these papers involving “domestic collaboration, reflecting a shift toward internal partnerships.” Christos Petrou offers a data-laden look at that growing output from China and the impact it is having on Western journals, essentially overwhelming existing peer review capacity and lengthening times from submission to publication.

The US Senate, at least so far, appears to be resisting the White House’s calls for drastically slashing research funding in next year’s budget, although its allocations will still need to be reconciled with those of the House. Of course, just because funding is allocated doesn’t mean it won’t be clawed back through rescission or simply impounded and never spent. The Chronicle of Higher Education takes a look at how much it costs to run a lab, and why those indirect funds are so important.

Springer Nature released a white paper on the challenges of publishing null results in journals that perhaps misses two key factors in why researchers fail to publish negative results. The first, as has been eloquently pointed out online, is that not all null results are valuable (many are “a frequent consequence of not knowing what the hell you’re doing, half-assery, and crucial oversights”). The second is that, while nearly every journal we at The Brief have encountered does welcome papers presenting negative results, those submissions have to live up to the standards of rigor and significance of the journal’s other publications. Once an avenue of research has proven a dead end, few researchers are willing to put in the time and effort needed to thoroughly replicate and verify those null results, let alone write them up. Why not instead invest one’s valuable time in the next set of experiments that might actually work?

The Committee on Publication Ethics (COPE) and the STM Association have published guides offering best practices for the publication of guest-edited collections in journals, meant “to empower publishers and editors to uphold integrity, transparency, and trust.”

Frontiers’ Research Integrity Auditing team has uncovered an illicit peer review network among a group of researchers resulting in the retraction of 122 papers from five Frontiers journals. More ominously, the team reports that the network is responsible for a further 4,000 papers in journals from seven different publishers.

MDPI announced that it has hired a staggering 2,000 new employees in the first half of 2025.

The NIH is placing limits on the number of grant applications any individual can submit per calendar year from September 25 onward, due to an overwhelming flood of AI-generated proposals. 

Not an Onion headline: Springer Nature retracted an entire AI textbook because it turned out to have been written by AI.

Two worthwhile AI reads this month – Jason Koebler explains why forcing journalists to use large language models (LLMs) in a bid to become an AI-first company is not going to save media companies, and Meghan O’Rourke, Professor of Creative Writing at Yale, discusses the impact AI is having on students (and compares ChatGPT to Soylent Green). 

Those interested in a caustically skeptical take on current AI developments may find Ed Zitron’s three-part podcast “The Hater’s Guide to the AI Bubble” of interest (here is a link to the first episode on Apple Podcasts). Zitron observes that only a handful of giant tech companies are developing foundational AI models, spending nation-state levels of capital to do so. And yet none of them is making any money from these models, nor is it clear that any viable path to profitability exists. These companies account for over 30% of the S&P 500’s total market capitalization. What happens when the inevitable pullback on AI data center spending starts?

Three interesting perspectives on coping with an AI future can be found in interviews with the American Society for Microbiology’s Executive Publisher Melissa Junior, Elsevier’s CTO Jill Luber, and the British Library’s CEO Rebecca Lawrence.

Evolutionary biologist Neil Shubin has been nominated to head the US National Academy of Sciences.

McGraw Hill has a target of $4.2 billion for its upcoming IPO.

Informa (parent company of Taylor & Francis) reports a gangbusters H1 2025, with 20% year-on-year growth. T&F’s revenues are up an impressive 12%, although the company’s full-year guidance warns of “overall revenues expected to be slightly lower than last year, due to the $75m+ of non-recurring Data Licensing Agreements secured in 2024.”

Elsevier also released positive H1 2025 financial reports this month (revenues up 2% and profits up 4%, year-on-year).

Oxford University Press meanwhile released its 2024/25 Annual Report, showing turnover of £796 million for the year, down 4.5% from last year’s £833 million. Profit before taxes was similarly down to £75 million from £111 million the previous year. 

Nobel Prize-winning economist Paul Krugman offers a mathematical model for the ongoing “enshittification” of the internet.

Our team of writers is 100% on the side of the em dash as it defends itself from allegations of being a sign of AI-generated writing. 

***
I am the punctuation mark of human frailty. I am the writer’s block, resolved mid-sentence. I am the OG vibe shift.

– The Em Dash (as quoted by Greg Mania)