May 6, 2026 | by OpenScholar
The Hidden Attention Economy Behind Research Visibility
The quietly growing equity problem in institutional research communication.
Look at the researchers at your institution who tend to get quoted, cited, recruited from, and collaborated with most often. On the whole, those researchers share something—and it's not necessarily that their research is better than their less-visible peers'. It's that more of the world can find them.
To be clear, this isn't a story about researchers needing to change how they work. It's a story about a public-research ecosystem that has drifted out of step with how that work actually happens—and the conditions it ends up rewarding when no one is looking.
An Attention Economy
Look at any institutional faculty roster and the pattern emerges. Some researchers have a personal website that's actively maintained. Others don't. Some keep their ORCID record current; others let it go years between updates. Some post regularly on LinkedIn or X; others don't post at all. Some have detailed institutional bios with current research areas, methods, and collaborators; others have a paragraph from 2018.
None of those are research decisions. They're decisions about how much time, support, and comfort with self-promotion a researcher has on top of the actual work of being a researcher. And the decisions accumulate: on the whole, the researchers who keep the most current public presence tend to accrue more of the visibility-driven outcomes—citations, collaborations, recruiting interest, press attention—that institutions care about.
That isn't a meritocracy of research. It's an attention economy whose filter organizes around signals that have very little to do with the work. And the filter doesn't merely sort; it concentrates. Visibility produces visibility. Attention attracts attention. The researchers without the resources to maintain a public presence drift further from the institution's public face over time, even when their work is what the institution would most want known.
What that filter actually picks up on
The markers that the filter picks up on don't track research. They pick up on the conditions surrounding it.
- It picks up on time. Researchers with bandwidth to maintain a public presence aren't randomly distributed. Early-career researchers grinding through tenure clocks have less of it. Clinician-scientists splitting time between practice and research have less of it. Faculty with caregiving responsibilities have less of it.
- It picks up on administrative support—and that support is often unevenly distributed across departments. Some have communications staff who help maintain faculty pages; others don't. Some labs have grant-funded research administrators with bandwidth to update bios; others don't. The well-supported faculty stay visible; the under-supported ones gradually disappear.
- It picks up on comfort with self-promotion—which is culturally and individually variable. Some researchers are at ease posting their work publicly; others aren't. Some fields normalize it; others don't.
- It picks up on field-level norms. Economists post working papers and write op-eds; many bench scientists don't. Computer scientists maintain GitHub profiles; many social scientists don't. Each norm is reasonable inside its discipline. The aggregate is a representation of "your institution's research" that over-weights the disciplines whose practitioners self-publish and under-weights the rest.
Researchers shouldn't need marketing resources just to be discoverable. Under the current public-research ecosystem, they often do.
What it costs you
The institutional cost shows up in places that are easy to miss.
- Collaboration. A researcher at another institution searching for a collaborator on a specific topic finds the visible researchers more easily than the less-visible ones — even when the less-visible ones are doing closer work. Collaborations form around findability, not always around fit.
- Recruiting and retention. Prospective PhD applicants and faculty candidates research institutions by reading public faculty profiles. The visible researchers shape the institution's recruiting brand; the less-visible ones don't show up as readily. Faculty who feel under-represented in their institution's public face are more likely to leave for institutions where they won't be.
- Citation and external recognition. Journalists, policy researchers, and funding officers writing about a topic cite the researchers they can find. The visible ones get quoted; the less-visible ones, even when their work is more relevant, often don't.
The cumulative effect of this cost is real: an institutional research presence that systematically over-represents a subset of faculty and under-represents the rest. The institution's public face stops matching the institution's actual work.
What actually fixes it
The common response is exhortation. "Update your bio." "Start a website." "Maintain your ORCID." The institutions doing this aren't wrong. It's the only lever they've had under an infrastructure that has been quietly failing for years—one that asks faculty to maintain their own visibility on top of doing research and gives institutions no other mechanism to close the gap.
What’s needed is a new layer that produces consistent visibility for every researcher, regardless of whether they have the time or inclination to maintain a public presence. A layer that reads from the platforms a researcher's work already lives on—publications, data, code, trials, talks, media—and presents it coherently. Built once. Maintained automatically. Equal across every researcher.
That's the difference between exhortation and infrastructure. Exhortation reproduces the gap. Infrastructure removes the conditions that produce it.
Our 90-day pilot ahead of our July launch
At OpenScholar we’ve built exactly that layer. OS Research Hubs is what every researcher gets, regardless of whether they've ever maintained a website: an AI-generated, researcher-validated, continuously updated public account of their work. OS Match is what becomes possible when that layer exists — funding discovery grounded in real signal, not in who keeps their LinkedIn current.
Both products become publicly available in July. Right now, we're running a 90-day pilot ahead of that launch.
Ten of your researchers. Three rounds of funding matching. Up to 450 high-confidence opportunities surfaced. The full system on your own faculty.
The pilot is $5,000, and the fee applies in full toward your institutional license if you proceed.