IRIS: Integrity Risk Indicators from SCImago for Publishers
Every editor knows the feeling. A manuscript arrives from a familiar institution. The research looks solid. Peer review goes smoothly. The paper is published. Nothing seems unusual. Then, months later, something surfaces. Another paper from the same group raises concerns. Then another. Patterns begin to emerge: unusual authorship, heavy internal citation, links to questionable journals. What initially looked like isolated cases turns out to be something broader.
By the time the pattern becomes visible, the situation is already difficult to unwind. This happens because editorial workflows are built to evaluate individual manuscripts, one at a time. But publication risks do not always live at the paper level. Sometimes they emerge across institutions, across teams, across hundreds of publications. And those patterns are hard to see without stepping back.
That is the gap IRIS (Integrity Risk Indicators from SCImago) was designed to fill. IRIS takes a different perspective. Instead of asking whether one manuscript looks suspicious, it asks a broader question: how does an institution behave across its publishing activity? It is not about ranking institutions or pointing fingers. It is about identifying signals: small statistical deviations that suggest something may deserve a closer look. Not conclusions, just context.

To do this, IRIS standardizes multiple indicators so they can be compared on the same scale. The global average sits at the center, and institutions that diverge significantly in either direction become visible. This makes it easier to spot patterns that would otherwise be buried in normal editorial workflows.
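To make the standardization step concrete, here is a minimal sketch in Python. SCImago has not published IRIS's internal methodology, so the z-score approach, the sample figures, and the review threshold below are illustrative assumptions rather than the actual implementation.

```python
from statistics import mean, stdev

# Hypothetical retraction rates per 1,000 papers for a handful of
# institutions (illustrative numbers, not real IRIS data).
retraction_rate = {
    "Inst A": 0.9, "Inst B": 1.0, "Inst C": 1.1, "Inst D": 0.8,
    "Inst E": 1.2, "Inst F": 1.0, "Inst G": 0.9, "Inst H": 5.0,
}

def standardize(values: dict[str, float]) -> dict[str, float]:
    """Convert raw indicator values to z-scores, so that every
    indicator sits on the same scale with the global average at zero."""
    mu = mean(values.values())
    sigma = stdev(values.values())
    return {inst: (v - mu) / sigma for inst, v in values.items()}

for inst, z in standardize(retraction_rate).items():
    # An assumed cut-off: more than two standard deviations from the
    # global average marks a signal worth a closer look, not a verdict.
    flag = "  <- worth a closer look" if abs(z) > 2 else ""
    print(f"{inst}: z = {z:+.2f}{flag}")
```

Once every indicator is expressed this way, an institution's profile across very different signals can be read on one common scale.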
The aim is not automation or judgment. It is awareness. Early awareness.
Once you step back and look at institutional-level behaviour, subtle signals start to surface. Some institutions show unusually high levels of multiple affiliations on papers. That may reflect legitimate collaboration, or it may suggest authorship practices designed to spread institutional credit. Others exhibit elevated rates of retracted publications. In isolation, that may not mean much. But in combination with other signals, it could indicate gaps in research oversight.
You might see strong internal citation patterns, where authors repeatedly cite colleagues from the same institution. That can reflect close collaboration or attempts to amplify impact within a closed ecosystem. There are also cases where institutions show higher-than-average publishing in journals that have been discontinued from major databases, or unusually large author lists, or extremely prolific individual authors producing dozens of papers per year.
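As an example of how one such signal could be measured, the sketch below computes an internal citation share: the fraction of a group's references that point back to the same institution. The data structure and function here are hypothetical, purely for illustration.

```python
# Each record pairs a paper's institution with the institutions of
# the works it cites (hypothetical data for illustration).
papers = [
    ("Inst H", ["Inst H", "Inst H", "Inst B", "Inst H"]),
    ("Inst H", ["Inst H", "Inst C", "Inst H", "Inst H", "Inst H"]),
]

def internal_citation_share(records) -> float:
    """Fraction of all references that cite work from the citing
    paper's own institution."""
    internal = total = 0
    for inst, cited in records:
        internal += sum(1 for c in cited if c == inst)
        total += len(cited)
    return internal / total if total else 0.0

print(f"{internal_citation_share(papers):.0%}")  # prints 78%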
None of these signals are inherently problematic. But when several appear together, they begin to form a pattern. And it is the pattern, not any single indicator, that matters.
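Because the pattern is what matters, a screening step might simply count how many standardized signals sit outside the normal band at the same time. Continuing the illustrative sketch above: the indicator names, z-scores, and the "three or more mildly elevated signals" rule are assumptions, not IRIS's published method.

```python
# Hypothetical z-scores for one institution across several indicators
# (continuing the illustrative example above).
signals = {
    "retraction_exposure": 1.4,
    "internal_citation_rate": 1.6,
    "discontinued_journal_share": 1.8,
    "multi_affiliation_rate": 0.3,
    "author_productivity": -0.2,
}

MILDLY_ELEVATED = 1.0  # assumed threshold for a mild deviation

elevated = [name for name, z in signals.items() if z > MILDLY_ELEVATED]

# No single signal here is extreme, yet several mild deviations
# coincide; together they suggest a closer editorial look.
if len(elevated) >= 3:
    print("Cross-indicator pattern:", ", ".join(elevated))
else:
    print("No pattern; nothing further needed.")
```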
One of the most important things about IRIS is what it does not do. It does not label institutions as risky. It does not block submissions. And it certainly does not replace editorial judgment.
An institution with higher retraction exposure may simply be large and highly productive. Heavy use of institutional journals might reflect regional publishing priorities. A collaboration gap might indicate strong international partnerships.
IRIS does not make decisions. It helps editors ask better questions.
Imagine receiving several submissions from the same institution. Nothing looks unusual at first. The manuscripts are well written. The science seems sound. But IRIS shows a pattern: slightly elevated retraction exposure, higher reliance on institutional journals, modest internal citation clustering, and some publishing activity in discontinued journals.
None of these signals would trigger concern on their own. Together, they suggest that a bit more editorial due diligence might be helpful. Perhaps affiliations are verified more carefully, citation patterns get an extra glance, and authorship contributions are checked a little more closely.
Publishing is evolving quickly. Submission volumes continue to grow. Authorship networks are expanding across borders. Institutional incentives vary widely. Editorial teams are expected to move fast while maintaining high standards. Traditional safeguards (plagiarism detection, peer review, ethical declarations) remain essential, but they operate at the level of the individual paper. What is often missing is visibility into broader behaviour.
IRIS adds that missing layer. It gives publishers a way to see patterns across institutions, not just across manuscripts. It helps identify clusters of retractions, unusual authorship behaviour, affiliation anomalies, and citation ecosystems that might otherwise go unnoticed.
IRIS is not about making publishing more restrictive. It is about making it more informed.
When editors can see patterns earlier, they can act earlier. That might mean asking an extra question, checking a detail, or simply being aware of the broader context. Most of the time, nothing further is needed. But when something does emerge, editors are better prepared. In that sense, IRIS works like an early-warning radar. Not every signal matters. But having visibility means fewer surprises later. In a publishing landscape where trust is everything, that kind of awareness can make all the difference.