Issue 62 · February 2024

Join the conversation on Twitter @Briefer_Yet and LinkedIn.

Did a colleague forward The Brief to you? Sign up via the button below to get your very own copy.

Journal Metrics Benchmarking – Join Now

Recruiting is open for C&E’s 2024 Journal Metrics Benchmarking study. This study is conducted every 2–3 years and provides you with journal benchmark data for

  • EIC and other editor honoraria
  • Submission trends
  • Article transfer stats
  • And many more critical journal metrics

Reach out to benchmarking@ce-strategy.com for more information and pricing. 



While Frontiers has been lauded for slowing its approach to guest-edited special issues due to research integrity concerns, this month it suffered an ignominious and very public failure in its editorial oversight duties when it allowed the publication of an absurd paper featuring clearly AI-generated figures. You’ve probably seen them, and while we don’t feel it’s necessary to go into detail about the anatomical features of the rat pictured, we were amused to see those who study cellular pathways were equally offended by the paper’s other figures. In response to the public mockery, Frontiers has since retracted the article and removed it “from the databases to protect the integrity of the scientific record” – the HTML version of the article no longer appears on the journal’s website, but the PDF (watermarked as retracted) remains available. In keeping with its recent practice of promoting open science as the solution to all of society’s ills, Frontiers thanks the “crowdsourcing dynamic of open science” for surfacing the issue.
Frontiers, it is worth noting, has developed a “unique, award-winning collaborative review forum” that “unites” the various stakeholders in “a direct online dialogue.” We have anecdotally heard much praise for this review process from both authors and reviewers. Basically, Frontiers’ peer review system convenes a virtual chat among peer reviewers (including input from the authors) to come to, if not consensus, at least a unified view on a paper – whether to recommend acceptance or rejection and, if the former, what might need to be revised. 
The reviewer conversation regarding this paper might be best imagined by the writers of SNL.
Research integrity problems were already mainstream news this month, before the well-endowed rat fiasco added a layer of ridiculousness to the more serious concerns raised. Articles on publishing fraud and scientific malfeasance have recently appeared in The Guardian, The New York Times, The Wall Street Journal, The Economist, and NBC News. First, as with any of the many “crises” in scholarly communication, it’s important to understand the scale of the problem being discussed. In 2023, 10,000 retractions out of more than 5 million articles published represented 0.2% of the literature. As we pointed out in January, if one takes out the 8,000 retractions from the Hindawi housecleaning (hopefully a non-repeating event), total retractions decreased in 2023 as compared to 2022.
That said, the actual numbers are irrelevant here, as public perception is based more on “truthiness” these days than anything else. And between ever-increasing career pressures on researchers, the rise of AI, and the rapid adoption of volume-based publishing models, scholarly publishing is in a perfect storm of opportunity for misconduct from every stakeholder category in the research reporting chain. Let’s be careful not to leave out some portion of responsibility for the policymakers, funders, and librarians who have fanned these flames by driving open access (OA) requirements without a sustainable route that emphasizes rigor over bulk publishing. Demands for increased editorial oversight and the development and implementation of new research integrity tools are in direct opposition to calls for reduced subscription prices and decreased article processing charges (APCs).
Perhaps the biggest hole in the system is a seeming lack of consequences for bad behavior by researchers, other than perhaps a slight decline in citation rate. While both the publisher and the peer reviewers involved in letting through problematic or fraudulent research share some blame (especially when the problems are glaring), the primary fault lies with the person or people who created it. Here, China seems to be taking the lead on correcting this gap with a nationwide review of retractions and research misconduct, which will presumably lead to more pressure on institutions to monitor and enforce research integrity. Although this may be difficult to replicate in countries such as the US, where there’s no central authority that governs state and private universities, one could see funders making such reviews a requirement of eligibility for receiving grants. The University of Bern has launched a “bounty” program, offering financial rewards to those who find errors in research papers and similar rewards to authors whose papers pass muster. We have our concerns, however, that, like so many well-intentioned efforts in scholarly communication, this will create a slew of unintended consequences and perverse incentives. In the end, responsibility must fall to those ultimately rewarding misconduct, and as with nearly every problem in scholarly communication, that comes down to the career and funding structures of academia.

Year of Open Science


The US White House Office of Science and Technology Policy’s (OSTP’s) “Year of Open Science” has come to an end, and the incremental results are somewhat underwhelming. In a celebratory fact sheet, the highlights seem to consist mostly of progress reports on policies from the Nelson Memo, which was already in place before the Year began, and implementation of data policies by the National Institutes of Health (NIH) and the National Aeronautics and Space Administration (NASA), again both planned and announced more than a year ago. 

The fact sheet does offer links to a variety of agency Nelson Memo plans, including a few we hadn’t seen before, such as those from the Administration for Community Living (ACL), the Social Security Administration, the US Census Bureau, and the US Geological Survey (USGS). These policies are largely in line with those already announced, or provide updates of the agencies’ Holdren Memo policies. Many of these “policies” are works in progress, with repositories for papers and data not yet identified. The Social Security Administration’s policy is more of a plan to make a plan, but includes some surprising language that claims agency ownership over all publications and data resulting from funded research:

All scientific research publications and scientific research data resulting from our federally funded research will be our property with unlimited rights to reuse, reproduce, and make available at any physical, digital, or online location accessible to the public.
It also leaves the door open to requiring permissive licenses such as CC BY:
We may require unlimited rights licenses to use, modify, reproduce, release, or disclose research data in whole or in part, in any manner, and for any purpose whatsoever, and may authorize others to do the same, depending on the facts and circumstances of the award.

Each policy mirrors that of other agencies in stating that grant recipients can put funds toward publication and data management costs, but as there are no additional funds in agency budgets to cover these new financial burdens, this likely means a decrease in the overall amount that will be going directly to research from each agency. The OSTP has largely dropped the ball on performing significant analysis of the overall cost of these policies and of the decrease in research performed due to the necessary diversion of funds. 

A new study from the Association of Research Libraries, funded by the US National Science Foundation (NSF), makes clear the scale of the problem. The study looked at costs from 2013 to 2022, under the rules of the Holdren Memo (which are laxer than those of Nelson). Even with no enforced open data requirements and a 12-month embargo on article availability, the average current administrative spend per institution on data management and sharing (DMS) efforts is $750,000 (USD), and the total campus spend, including researcher costs, is $2.5 million per year. Researchers are spending, on average, around $30,000 per grant on DMS, approximately 6% of their funding, but for smaller grants it’s a much larger portion – over 15% of the funding provided. If federal agencies start to enforce the Nelson Memo’s data requirements and most papers require the payment of an APC, these costs are likely to skyrocket.
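The arithmetic behind the study’s per-grant figures is easy to sanity-check. The grant sizes below are our own illustrative assumptions, inferred from the reported percentages, not figures from the study itself:

```python
# Reported average DMS (data management and sharing) spend per grant
dms_cost = 30_000  # USD, from the ARL/NSF study

# Illustrative grant sizes (our assumptions): a ~$500k grant yields the
# ~6% average share reported; a $150k grant pushes the share past 15%.
for grant in (500_000, 150_000):
    share = dms_cost / grant
    print(f"${grant:,} grant -> {share:.0%} of funding spent on DMS")
```

In other words, the reported 6% average implies a typical grant of roughly half a million dollars, and the fixed ~$30,000 DMS cost weighs far more heavily on smaller awards.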

Despite funding levels that are insufficient to meet the policy requirements, and the growing backlash against author-pays publication models (with the inequities and integrity problems they encourage), progress toward implementation of the Nelson Memo continues at a steady pace. Implementation plans will be finalized at year’s end and go into effect in 2026, in a publishing market that may be markedly different from where things stand now.



RELX, the parent company of Elsevier, posted its 2023 annual results along with a presentation to investors. Highlights include:

  • Revenue grew by 8% across the portfolio, but just 4% to £3.1 billion (GBP) in the company’s scientific, technical, and medical (STM) business (Elsevier). Given that the global inflation rate in 2023 was around 6.9%, the STM business appears to have shrunk slightly on an inflation-adjusted basis.
  • The STM business remains highly profitable: profits grew around 5%, and STM is the most profitable RELX business unit, with a 38% margin in 2023.
  • In the STM unit, subscriptions continue to reign and were responsible for 75% of revenues. It is unclear to what extent transformative agreements are contributing to “subscription” revenues. 
  • Europe accounts for only 22% of the company’s STM revenues whereas 47% derives from North America – underscoring that all the attention on Plan S and other European policies is perhaps out of proportion to actual market drivers (to the extent Elsevier’s business is representative of other publishers, geographically speaking).
  • Meetings are back. The Exhibitions business (RX) posted revenue gains of 17% and increased profitability by 97%. We assume the huge gains in profitability in the RX division were aided by pandemic-era belt tightening. 
  • RELX calls out the growth of “databases, tools & electronic reference, and corporate primary research” (by which we assume they mean Scopus, SciVal, ClinicalKey, and so on) but doesn’t break out any numbers. 
  • They also point to “strong growth in article submissions, particularly pay-to-publish open access articles” but again provide no specific metrics. 
  • Acquisitions were dramatically lower in 2023 than typical. The company spends an average of £400 million on acquisitions annually but spent only £130 million in 2023 (down from £443 million in 2022). 
  • Internal development (capital expenditure) ticked up a bit from £436 million to £477 million. We were surprised by this figure. Given the stakes for Elsevier with regard to AI technology, we would have expected to see a larger uptick in technology investment (or acquisition costs). 
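The inflation adjustment in the first bullet above can be checked with the standard deflation formula (the nominal growth and inflation figures are as reported; the formula is ours):

```python
nominal_growth = 0.04  # STM revenue growth reported by RELX for 2023
inflation = 0.069      # approximate 2023 global inflation rate

# Real (inflation-adjusted) growth: deflate nominal growth by inflation
real_growth = (1 + nominal_growth) / (1 + inflation) - 1
print(f"Real STM growth: {real_growth:.1%}")  # roughly -2.7%
```

A 4% nominal gain against ~6.9% inflation works out to a real decline of about 2.7%, consistent with our characterization of the STM business as having shrunk slightly in real terms.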

Perhaps the most notable thing about Elsevier’s results was not in the press release or presentation, but instead in the comparison between Elsevier and other publishers, particularly Wiley and Frontiers. In Wiley’s last fiscal year (which is notably a different period from Elsevier’s – Wiley’s fiscal year ends on April 30), the American publisher reported a 3% decline overall and a 6% decline in its research division, in part due to the research fraud issues at Hindawi. Since then, the publisher has fired its CEO and retired the Hindawi brand. Meanwhile, Frontiers announced that it is laying off nearly a third of its workforce due to declines in published output.

As we have noted previously in The Brief, the fallout at Hindawi and the layoffs at Frontiers highlight the fragility of OA business models in which remuneration is based on the volume of publisher output. Elsevier’s more circumspect embrace of OA, combined with investment in data and other analytics products, has led to standout financial results – a pattern that has not gone unnoticed by The Financial Times (FT). The FT notes in a recent profile of Elsevier and its CEO, Erik Engstrom:

RELX, or its previous incarnations, has been the top-performing stock on the FTSE 100 in the index’s 40-year history, according to London Stock Exchange data. Since Engstrom became chief executive in 2009, its share price has soared 650 per cent – from about 465p to more than 3,489p – and total shareholder returns (reinvesting dividends) have been more than 1,000 per cent. This means that the company has outperformed the FTSE 100 by 328 per cent in that period.

About that “AI” in “Fair Use”


A US district court has thrown out a number of claims brought by a group of authors against OpenAI. The authors include Michael Chabon, Sarah Silverman, and Paul Tremblay, among others. Many of the dismissed claims pertained to copyright infringement: the court ruled that the authors failed to prove that “every ChatGPT output is an infringing derivative work” simply because the authors’ work was used in the ChatGPT training set and may, in some cases, have similarities to the authors’ work. The judge did, however, allow a “direct infringement” claim to move forward, as well as a perhaps more significant complaint that OpenAI used the authors’ works in the ChatGPT training set without the authors’ permission. This latter complaint is based not on copyright infringement, but on California’s unfair competition law. If successful, the claim could put additional pressure on technology companies to enter into licensing arrangements related to the use of copyrighted content in their training sets.
While the authors’ copyright infringement complaint seems to have failed with regard to the question of training sets, the wider question of whether the use of copyrighted material in AI training sets falls under the somewhat nebulous “fair use” provision of US copyright law is far from a settled matter. Most notably, OpenAI and Microsoft still face a lawsuit from The New York Times. A recent analysis from tech journalist Timothy Lee and law professor James Grimmelmann argues that the Times lawsuit has a chance of winning and looks closely at relevant past “fair use” lawsuits, including some that will likely be familiar to readers of The Brief, such as the Google Books case and the lawsuit between the American Geophysical Union and Texaco. It is a useful piece that highlights the nuances surrounding the fair use precedent in the US. 
Meanwhile, the social media company Reddit has signed a $60 million annual licensing deal with Google, allowing the tech giant to use content on the Reddit platform in, among other things, training sets. The obvious joke here is that Chabon et al. and The New York Times should just put their work on Reddit and up the price of their Google licensing deal.

CAS Journal Early Warning List


The Chinese Academy of Sciences (CAS) recently announced the 2024 edition of its International Journal Early Warning List (EWL). In addition to a largely new selection of 24 journals, CAS made some significant changes to the EWL’s criteria and approach: listing specific types of academic misconduct (e.g., citation manipulation, paper mills) and practices that CAS deems disadvantageous to Chinese researchers, such as publishing a disproportionately high number of articles from Chinese authors or charging higher-than-average APCs. Nicko Goncharoff of Osmanthus Consulting (author of International STM Publishing in China: State of the Market Report 2023, a joint effort with C&E) offers more on the 2024 EWL in a recent post.



Sabrina McCarthy has been appointed President of Bloomsbury US.

Mari Sundli Tveit has succeeded Marc Schiltz as the Chair of the cOAlition S Leaders Group. 

Jason Winkler is now Vice President of Medicine and Life Sciences Journals at Springer Nature.

Briefly Noted


Bluesky opened up its platform to the world this month (previously it throttled the service by requiring an invitation code to register). Bluesky was originally a project within Twitter, meant to migrate Twitter to an open protocol, but it was spun out in advance of the Twitter acquisition by Elon Musk (for more on the company’s short history and future plans, check out this recent interview with Jay Graber, Bluesky’s chief executive). Bluesky seems to be picking up momentum as a viable alternative for scientists and academics disaffected by the changes to Twitter, apprehensive about the corporate ownership of Threads, and uninterested in the level of effort required to even sign up for Mastodon, never mind figure out how to use it. 

When Ithaka S+R issues a major report, it is usually worth your time, and we at The Brief can confirm that the latest, “The Second Digital Transformation of Scholarly Publishing,” accurately captures many of the key infrastructure trends of the moment. Highly recommended.

Some updates from Clarivate will impact (pun partially intended) the 2023 Journal Citation Reports (JCR) due out this summer. First, Clarivate announced that they have paused all evaluation activity toward moving journals from the Emerging Sources Citation Index (ESCI) to the Science Citation Index-Expanded (SCIE) and the Social Sciences Citation Index (SSCI). While this means that the ESCI purgatory continues for any journals listed therein, it probably no longer matters all that much, because those journals now have Journal Impact Factors (JIFs) – most likely no one cares what flavor of JIF a journal has, just that it has one. There are nine JCR categories that had separate listings in both SCIE and SSCI, but Clarivate will now produce a single, combined listing for every subject category (rather than, for example, separate SCIE and SSCI listings for Psychiatry journals). This will likely decrease the standing of social sciences journals, as they’ll now be ranked lower in comparison with the science journals they’re joining, which often have a more prolific citation culture (e.g., The Journal of Anxiety Disorders, currently the 10th ranked journal in the SSCI index, would be 14th in the combined listing). Clarivate has also come to the realization that for arts and humanities journals, their one-decimal-place JIFs don’t work, as large swathes of entire categories would consist of ties. Their solution is to not display category rankings for the arts and humanities, which seems contradictory to their headline goal to “enhance transparency and inclusivity.” The announcement is listed as “the first in a series of updates,” so stay tuned for more.

Whether through threats to library freedoms or the increasingly direct connection between research funding and publication costs, governments are wielding more influence on scholarly communication. Publishers and their vendors are ramping up their efforts to increase their visibility with policymakers. Silverchair and Oxford University Press have just released Sensus Impact, a platform to collate publishing metrics meant to demonstrate impact to funders. Science has meanwhile introduced Policy Paks, press packages summarizing research outputs for those involved in policy. And Overton is now offering metrics on the use of research papers in policy documents. Research has always been political, but now it is becoming overtly so. Digital Science has taken a fascinating look at the Texas court case against the abortion drug mifepristone, which uncovered conflicts of interest and unreliable methods in the published research presented as evidence in the case. 

Three not-for-profit physics publishers have united to launch a coalition called “Purpose-Led Publishing.” Not-for-profit and society publishers have struggled over the years to join together to achieve the scale that’s important for success in the market – competition law and differing priorities often blunt such efforts. In her keynote at this month’s Researcher to Reader meeting, IOP Publishing’s Chief Executive, Antonia Seymour, mentioned the work being done to steer clear of collusion, and that commercial activities such as working together on transformative agreements are not (or at least not yet) in scope. Cost-saving areas are more of a focus for the group, including sharing a booth at industry meetings and coordinating on APC waivers. Obviously, this is also a marketing campaign aimed at authors, although in The Brief’s experience, a journal owner’s nonprofit status or community standing is rarely enough to significantly move the needle on an author’s journal choice.

Journal of the American Medical Association (JAMA) and the JAMA Network journals are making good on their promise of transparency and reporting on the demographics of their editorial teams. As the scholarly community increasingly diversifies, journal leadership needs to reflect the communities represented, and we at The Brief hope these sorts of disclosures become a standard practice for publishers. 

Another month brings another series of journal editorial board rebellions. The Advisory Board, Editors-in-Chief, Managing Editors, Journal Administrator, and Associate Editors for the Journal of Economic Surveys resigned en masse in response to Wiley’s proposed renewal terms. The group objected to both Wiley’s focus on publication growth (detailed in this Wiley post explaining how society journals will thrive in an open environment) and the “cross pollination” among Wiley journals of rejected papers cascaded to other titles, raising the risks of “proliferation of poor-quality science.” Meanwhile, Springer Nature has come under fire for installing a new editorial team without consulting the existing editorial leadership at the journal Theory and Society. While the new editors describe this as making the journal “more scientific, less political, and more interdisciplinary,” others have disputed this, characterizing it instead as part of the anti-woke movement and a profit-driven strategy by Springer Nature to turn a “sleepy journal in a tame field into a more visible product.” Regardless of the motivations involved, the journal is owned by Springer Nature, who are entitled to pick the editorial team they prefer. Whether authors will submit papers to the journal under the new editorial leadership is an open question. The lesson, as always, for researchers and editorial teams is to own your community’s journals (via a professional society governed by community members) or risk decisions related to business model, scope, or editorial leadership that may be at odds with the community’s preferences.

Forbes analyst Derek Newton describes the potential sale of Udacity to upGrad as indicative of the collapse of the online education market, “the punctuation mark to a bad idea,” and “the whimper of an ending that MOOCs [massive open online courses] deserve.”

The Apple Vision Pro has hit the market, and Elsevier is already moving into the “spatial computing” (whatever that means) market with an immersive heart education product. We at The Brief would like to be the first to welcome our new goggle-wearing cardiology overlords.

Although no details of the agreement have been released, the Royal Society of Chemistry has announced a “next generation open access consortium model,” collaboratively produced with 77 German research institutions. Billed as a “community approach,” the agreement includes financial support from “all types of institution, including those that don’t publish.”

Though we have our concerns about its long-term sustainability, it is worth noting that MIT Press’s shift+OPEN program to flip subscription journals to a Diamond OA model has expanded with funding from the NSF.

Perhaps this is progress? Ten years ago, researchers noted that Google Scholar was filled with false citations as bad actors gamed the system to improve the reported performance of their papers. To prove this, they spent “less than half a day’s work” creating a handful of fake papers for Google Scholar to index. Now, a decade later, a new preprint states that those same issues persist, but things have gotten much more efficient – fake citations can now be easily purchased, as this route to academic fraud has been monetized. One interesting note is that even though it was recognized back in 2012 that Google caches the fake papers, meaning the false citations persist even after the papers are deleted from the internet, this problem still hasn’t been addressed by Google Scholar. As we noted last month, the growth in AI capabilities offers an existential threat to the value of non-gated bibliometric services such as Google Scholar, creating serious data integrity problems not faced by curated services.

The prospect of deploying AI tools as scapegoats for corporate failures took a blow this month when Air Canada, sued over bad information its chatbot gave a customer, tried to claim in court that the chatbot was “a separate legal entity that is responsible for its own actions.” Thankfully, the court found this argument “remarkable,” and Air Canada lost the case and was found liable.

Science’s Holden Thorp calls out the myopic arrogance common in academic science: “I see my colleagues at the journals treated disrespectfully by authors, reviewers, and readers who consider running a research lab as somehow more meaningful than anything else in science.” The editorial is worth your time, and reminds us of Joe Esposito’s now classic piece about how knowledge in one area doesn’t necessarily make one an expert in another: “…the distinguished life scientist pronounces on how publishing operations should be run without reflecting that perhaps there is more to the game than being smart.”

Here at The Brief we have been enjoying the conversation on the newly launched Open Café listserv, dedicated to “the free, open, constructive, and civil discussion of issues related to open scholarship.” However, so far it does seem to mirror the structures and behaviors that appear universal on scholarly communications listservs, detailed in a recent study suggesting that a small number of male participants tend to dominate the conversations. The study also found that the “reply guy” phenomenon is indeed real, as “this gender disparity becomes more pronounced when considering only those messages that were sent as replies to other posts on the list.”

Elvis is everywhere. – Mojo Nixon, 1957–2024