Editor’s Note: Today’s post is by Amanda Rogers, Beth Richard, Carsten Borchert, Lou Peck, and Simon Holt. Amanda is the Communications and Engagement Manager at BioOne and serves as the DEIBA Liaison. Beth is new to her role as Product Manager for Content Accessibility at Elsevier; her contributions to this post reflect her experience as Senior Publishing Editor at the Institute of Development Studies (IDS). Carsten is co-founder and CEO of SciFlow, an innovative platform for scholarly writing and publishing. Lou Peck is the Chief Executive Officer and Founder of The International Bunch. Simon is Head of Content Accessibility at Elsevier and an SSP Board Member.
Authors’ note: The audio file linked below provides a spoken version of this blog post.
In Part I of “Beyond Open Access: Make Academic Content Truly Accessible for All,” we stated that open access alone is not enough to ensure genuine accessibility. Scholarly communication must go beyond simply removing paywalls. It must also be designed so that everyone, including those with disabilities and differing abilities, can engage with knowledge on equal terms. We highlighted the reality that only a tiny fraction of scholarly outputs (2.4% of PDFs) meet accessibility standards today, and we made the call for a shift toward “born-accessible” content that places empathy and inclusivity at the core of academic publishing.
In this second part, we turn to a practical yet fundamental element of scholarly communication: images and their descriptions. For millions of readers, images remain inaccessible, hidden behind poor or absent alternative (alt) text descriptions. Read on for key takeaways and actionable insights to improve the accessibility of your published images.

A practical challenge: how to create good image descriptions
Image descriptions are essential for accessibility, providing a textual equivalent for visual elements such as images, charts, and figures. All visual elements should include an image description unless they are purely decorative or have no information to convey. The WebAIM Million 2025 report states that 18.5% of all home page images had missing alt text, and AudioEye found, from a broader scan of two million web pages, that 60% of images did not have alt text. Google, Bing, Yahoo, DuckDuckGo, Yandex, and Baidu use alt text to understand the content of images, which is crucial for indexing and ranking images in search results. Missing alt text means your images are less likely to appear in image search results, reducing potential traffic, a consideration that matters more than ever in an increasingly AI-driven world.
The goal is to find the right balance so that individuals using screen readers or other assistive technologies can fully access and understand visual content. In a recent SciFlow webinar for university presses, approximately 80% of participants’ questions focused on image descriptions, highlighting the practical challenges publishers face when adjusting workflows to create effective descriptions for visual content. The type of description an image needs depends on its complexity and how it is described in the surrounding content.
What is the difference between alt text and long descriptions?
Alternative (alt) text is a short text description of an image that provides a comparable experience for the reader. It is attached to the image and generally does not appear on the screen; instead, it is used by assistive technologies such as screen readers to convey the content to readers with visual disabilities and impairments who cannot effectively “see” the image. Without critical information like alt text, an image simply disappears for these readers. Alt text also displays on the screen in place of an image if the link is broken or if the user has chosen to turn off images in their browser, for example, due to low bandwidth. Alt text improves image search and discoverability, makes visuals easier to repurpose in various contexts, and enhances the audio experience for readers who prefer listening to content.
Image descriptions are used across channels, including social media. As part of a competitive analysis undertaken across social media channels, The International Bunch found that of the eight commercial and society publishers analyzed, only 25% (two of which were from the big five) used alt text on most of their social media imagery. None used it on all of their social media imagery.
Tools and services can help improve alt text. Microsoft 365 products include an Accessibility Checker that scans documents and prompts corrective actions, and the free Google Lighthouse browser plugin audits your site and provides an assessment and recommendations for both mobile and desktop views.
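To show the kind of check such tools automate, here is a minimal Python sketch that flags img elements with no alt attribute on a single page, while treating an empty alt="" as intentionally decorative. It assumes the requests and beautifulsoup4 packages are installed, and the URL is a placeholder for your own page.

```python
# Minimal sketch: flag images with no alt attribute on a single page.
# Assumes the requests and beautifulsoup4 packages are installed;
# the URL below is a placeholder for your own page.
import requests
from bs4 import BeautifulSoup

url = "https://example.org/article"  # placeholder
html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

for img in soup.find_all("img"):
    alt = img.get("alt")
    if alt is None:
        # Missing alt attribute: screen readers get nothing useful.
        print("Missing alt:", img.get("src"))
    elif alt.strip() == "":
        # Empty alt="" is valid, but only for purely decorative images.
        print("Decorative (empty alt):", img.get("src"))
```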
Long descriptions are useful for complex images frequently used in academic publishing, such as line graphs and workflow diagrams, that cannot be fully described within a short piece of alt text. For these images, a short alt text identifies the image and directs the reader to the long description, which readers can then choose to explore if they wish.
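To make the pattern concrete, here is a minimal Python sketch that assembles one possible HTML structure: concise alt text on the image, with the long description exposed in a collapsible element and linked via aria-describedby. The ids, file names, and wording are illustrative only, and EPUB or platform-specific workflows will differ.

```python
# Sketch of one possible markup pattern: a short alt text on the image,
# plus a long description linked via aria-describedby and exposed in a
# collapsible <details> element. Ids, file names, and wording are
# illustrative only.

def figure_with_long_description(src: str, alt: str, caption: str,
                                 long_desc: str, fig_id: str) -> str:
    return f"""
<figure>
  <img src="{src}" alt="{alt}" aria-describedby="{fig_id}-longdesc">
  <figcaption>{caption}</figcaption>
  <details id="{fig_id}-longdesc">
    <summary>Long description</summary>
    <p>{long_desc}</p>
  </details>
</figure>
""".strip()

print(figure_with_long_description(
    src="workflow-diagram.png",
    alt="Flowchart of the editorial workflow from submission to publication; "
        "full description follows.",
    caption="Figure 1. Overview of the editorial workflow.",
    long_desc="The workflow begins with submission, moves through editorial "
              "screening and peer review, and ends with production and publication.",
    fig_id="fig1",
))
```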
How image descriptions are created
For many smaller and medium-sized publishers, image description creation typically begins at the manuscript stage, authored by those who best understand the content, usually the authors or subject matter experts. Writing alt text and long descriptions may form part of an individual’s role or be handled by a dedicated team, and the process may be manual, automated, or both. It’s important to factor the additional time for accessibility into the normal workflow, rather than treat it as an add-on.
Ideally, this process occurs alongside the creation of original content. Publishers can integrate image descriptions later in production, especially with vendor support, but for those working directly with authors, having them involved earlier can be more effective.
Shortcuts such as reusing image captions as image descriptions (e.g., as alt text) offer a quick solution, but they rarely enhance accessibility, because captions give brief context designed for readers who can view the image in full. Microsoft 365 and content management platforms (e.g., WordPress, Wix) have built-in automated alt-text generators, but their output stays at the surface and does not capture the depth of figures in scholarly publications.
Collaboration between authors and publishers
Successful implementation of image descriptions requires clear communication and collaboration between authors and publishers. Authors have the expertise to accurately describe visual content, while publishers provide essential guidelines and user-friendly tools, such as those in publishing platforms like SciFlow Publish. Clear instructions and intuitive tools streamline this process, improving workflow efficiency and image description quality.
Using AI for image description generation
AI technologies offer promising solutions for generating image descriptions efficiently. However, AI tools require contextual inputs, such as figure captions and relevant surrounding content, to ensure high-quality results. They are also currently unable to interpret subtleties or nuance within images, which reduces their ability to generate meaningful descriptions. Authors or editors must review AI-generated image descriptions to ensure accuracy, contextual relevance, and meaningfulness. Human oversight remains essential, particularly because AI tools currently work better in English than in other languages.
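As a rough illustration of this “context plus human review” approach, the sketch below drafts alt text with a vision-capable model. It assumes the OpenAI Python SDK’s chat-completions interface with image input; the model name, image URL, caption, and surrounding text are placeholders, any provider with image input could be substituted, and the output is only a draft for an author or editor to review.

```python
# Sketch: draft alt text with a vision-capable model, feeding it the
# figure caption and surrounding text as context. Assumes the OpenAI
# Python SDK (v1.x) with an API key in the environment; the model name,
# image URL, and context strings are placeholders. The result is a
# draft that an author or editor must review and correct.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

caption = "Figure 2. Enzyme activity as a function of temperature."
surrounding_text = "Activity increased with temperature up to 37 °C and then fell sharply."
image_url = "https://example.org/figures/fig2.png"  # placeholder

prompt = (
    "Write concise alt text (one or two sentences) for the figure below. "
    "Describe what the figure shows and why it matters in context; do not "
    f"repeat the caption.\nCaption: {caption}\nSurrounding text: {surrounding_text}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)

draft_alt_text = response.choices[0].message.content
print(draft_alt_text)  # human review happens after this point
```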
Characteristics of effective image descriptions
Compelling image descriptions should be:
- Accurate and contextual: Clearly describe the image’s content and relevance to the surrounding text.
- Concise yet informative: Provide enough detail to convey essential information without being overly verbose.
- Purpose-oriented: Focus on the meaning and function of the visual element, rather than merely its visual characteristics.
- Relevant and complementary: Align closely with the accompanying text’s central message to enhance overall comprehension.
The pitfalls to avoid:
- Don’t rely exclusively on figures. Important information should always be included in the surrounding text. Alt text is a supplement, not a replacement for context.
- Avoid redundancy. Screen readers indicate that alt text is an image replacement, so don’t use phrases like “Image of…” or “Graphic of…”.
- Don’t repeat the caption. Alt text should provide additional context, not duplicate what’s already in the caption or surrounding text.
- Avoid irrelevant details. Information not displayed in the figure (such as author, date, source, or bibliographical references) doesn’t belong in alt text.
- Steer clear of interpretation. Alt text should describe what’s visible, not offer subjective interpretations or opinions.
- Don’t overload with text. Keep alt text concise. Avoid lengthy descriptions that overwhelm readers with unnecessary details.
- Avoid formatting. Screen reading software doesn’t interpret formatting (e.g., bullet points). Stick to plain, straightforward descriptions.
- Don’t assume visual context. Describe the image and its purpose within the publication as if the reader can’t “see” it. Avoid phrases like “As you can see…”.
- Avoid gender assumptions. Be specific: instead of “Man,” use “Smiling person reading a book.” You can’t reliably tell someone’s gender from an image; refer to them simply as a person or individual.
- Don’t forget to test. Always verify your alt text using tools like screen readers to ensure it conveys the intended message effectively, and combine this with usability tests, focus groups, and human review (a simple automated check is sketched after this list).
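A few of these pitfalls can also be caught automatically before human review. The Python sketch below applies some simple checks; the thresholds and phrase lists are our own illustrative choices, not a standard, and they complement rather than replace screen-reader testing.

```python
# Minimal sketch of automated checks for a few of the pitfalls above.
# Thresholds and phrase lists are illustrative choices, not a standard;
# screen-reader and human review are still required.

REDUNDANT_PREFIXES = ("image of", "graphic of", "picture of", "photo of")
MAX_LENGTH = 125  # some screen readers truncate around this point

def check_alt_text(alt: str, caption: str = "") -> list[str]:
    issues = []
    text = alt.strip()
    lowered = text.lower()
    if not text:
        issues.append("Alt text is empty (only valid for decorative images).")
    if lowered.startswith(REDUNDANT_PREFIXES):
        issues.append("Starts with a redundant phrase such as 'Image of'.")
    if caption and lowered == caption.strip().lower():
        issues.append("Duplicates the caption instead of adding context.")
    if len(text) > MAX_LENGTH:
        issues.append(f"Longer than {MAX_LENGTH} characters; consider a long description.")
    if any(marker in text for marker in ("\n", "•", "<ul>", "<li>")):
        issues.append("Contains formatting that screen readers will not interpret.")
    if "as you can see" in lowered:
        issues.append("Assumes visual context ('As you can see...').")
    return issues

print(check_alt_text("Image of a line graph.", caption="Image of a line graph."))
```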
Some practical examples

Example 1 alt text: Line graph showing that enzyme activity increases with temperature, peaking at thirty-seven degrees Celsius before declining sharply.
A long description would provide information such as the title, axis labels, and maximum and minimum values, the categories, key data, and any trends. The data could be presented as a table beneath the graph or long description. This description should consider the purpose of the image and its use in the context of the content.
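One way to provide that tabular fallback is sketched below in Python. The numbers are illustrative placeholders only (the figure itself is hypothetical), and the markup is just one option for presenting the data as an accessible table.

```python
# Sketch: present the key data behind the (hypothetical) line graph as an
# accessible HTML table beneath the long description. The values are
# illustrative placeholders, not real results.

data_points = [  # (temperature in degrees Celsius, relative enzyme activity)
    (20, 0.35),
    (30, 0.70),
    (37, 1.00),  # the peak named in the alt text
    (45, 0.40),
]

rows = "\n".join(
    f"    <tr><td>{temp}</td><td>{activity:.2f}</td></tr>"
    for temp, activity in data_points
)

table_html = f"""
<table>
  <caption>Data underlying the line graph: enzyme activity by temperature</caption>
  <thead>
    <tr><th scope="col">Temperature (°C)</th><th scope="col">Relative activity</th></tr>
  </thead>
  <tbody>
{rows}
  </tbody>
</table>
""".strip()

print(table_html)
```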

Example 2 alt text: Cover of Ursus, Volume 2025, Number 36E4, featuring a photograph of a black bear standing on a tree branch.

Example 3 alt text: Empty alt attribute (alt=""), used when the image is purely decorative and conveys no information.
Top 10 alt text tips
Thoughtful and considered alt text enhances accessibility. Make your content more inclusive for your readers.
- Think about the why. Decide how best to convey to someone what this image is, with or without having been able to “see” it before. Why is it relevant, and what does this description add to the overall content?
“Illustration of a diverse team collaborating on a project.”
- Be concise. Avoid verbosity while still providing the essential details. Deliver a powerful message in a few words: no longer than a sentence or two, or roughly 125 characters, as some screen readers may stop reading at that point and cut off the description. Use descriptive keywords. Check spelling and grammar!
“Close-up of a blooming sunflower.”
- Capture emotions. Describe feelings and expressions.
“Happy children playing in a sunlit park.”
- Provide context. Make sure you add context to a description to make it relevant. If you use a picture of the Eiffel Tower to promote Paris as a holiday destination, to showcase the use of wrought iron in tall structures or monuments worldwide, or to highlight the best places to propose, your alt text needs to provide the appropriate context to help bring the story to life.
“Vintage typewriter on a wooden desk, with a sepia filter to make it look like an old photo.”
“My grandfather’s typewriter on a wooden desk.”
- Avoid repetition. Focus on the unique aspects of the image.
“Golden retriever dog catching a frisbee mid-air.”
- Highlight key elements. Describe what draws attention and be specific.
“CEO addressing a packed conference hall.”
- Be descriptive, not prescriptive. Describe what is happening but avoid telling people how to interpret the image.
“Abstract artwork with vibrant colors and flowing lines.”
- Use active language.
“Hiker walking up a rugged mountain trail.”
- Think about colors and contrast. Be mindful that some readers have never seen color and will not know what it ‘looks’ like.
“High-contrast black-and-white portrait of an older adult artist.”
- Test with screen readers and tools. Use tools like alt text generators and screen readers, including browser plugins like Google Lighthouse, to find out how the text comes across, but make sure you check it and update it yourself. Context is essential. Harvard University has some nice examples of free tools. Follow the alt Decision Tree from W3C to help choose what to write (a simplified sketch follows below). Microsoft 365 will automatically generate text for you, but does it make sense? Could you do a better job of making it more engaging?
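For those who prefer something executable, here is a loose Python paraphrase of the questions the W3C alt Decision Tree walks through; it simplifies the real tree, which remains the authoritative guide.

```python
# Loose paraphrase of the questions the W3C alt Decision Tree walks
# through. This simplifies the real tree; consult the W3C resource for
# the authoritative guidance.

def suggest_alt_strategy(is_decorative: bool, contains_text: bool,
                         is_functional: bool, is_complex: bool) -> str:
    if is_decorative:
        return 'Use an empty alt attribute (alt="").'
    if contains_text:
        return "Repeat the visible text of the image in the alt attribute."
    if is_functional:  # e.g., the image is a link or button
        return "Describe the action or destination, not the picture."
    if is_complex:  # charts, diagrams, workflows
        return "Write short alt text plus a linked long description."
    return "Write concise alt text conveying the image's meaning in context."

print(suggest_alt_strategy(is_decorative=False, contains_text=False,
                           is_functional=False, is_complex=True))
```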
Hidden barriers and emerging solutions
Several hidden barriers exist beyond technical accessibility: language complexity, since academic writing often demands high-level reading comprehension; hidden costs that create inequities even in open access environments; and technological infrastructure, as varying levels of internet access and technology availability can impact accessibility.
Clear methods reporting is crucial for accessibility and reproducibility. Recent initiatives focus on standardization, developing consistent approaches to methods reporting and ensuring clear, transparent documentation of research procedures, and on community engagement, gathering feedback from researchers to improve reporting practices.
Publishers can promote accessibility by providing comprehensive resources, developing best practice guidelines for more accessible publishing, and offering platforms for knowledge sharing and collaboration. Some key resources include:
- Accessible Publishing Learning Network
- BIC’s accessibility hub
- DAISY Accessible Publishing Knowledge Base
- eBOUND Canada’s Accessibility Metadata Best Practices for Ebooks report
- FigureTwo for scientific figures for journals
- IPG’s Rising to the Accessibility Challenge course
- NISO’s Powering Accessibility: A White Paper 2024
- SciFlow’s free accessibility check for publications
- The STM Association’s accessibility resources for academic publishing
- UK Publishing Accessibility Action Group (PAAG)
While implementing accessibility measures requires investment, it can be cost-effective in the long term. Things to consider include:
- Early planning: Incorporate accessibility from the start to reduce retrofit costs.
- Resource allocation: Prioritize accessibility in budgets and workflows. Think born accessible from the start for new content, and build accessibility into existing projects and enhancements going forward.
- Training investment: Build internal capacity for the implementation of accessibility measures, like alt text.
The authors have provided these recommendations as infographics available for download.
Discussion
There is a huge author education and training piece here, as accessibility and alt text image descriptions are not part of researcher training or manuscript preparation.
While author-publisher collaboration is highlighted as the most desirable approach, how do you deal with disagreement around a publisher/AI-generated alt text? For example, what if I were to argue that the bear is lying on the branch, not standing? Or if I were to say how can you tell the person standing in front of a packed hall is a CEO?
Thank you for your comment. For AI-generated alt text, I would propose two essential quality aspects: 1) context: provide more context for generating the alt text than just the image itself, such as paragraphs from the manuscript and the image description; 2) human in the loop: we tested the approach described in (1) and received good results; however, having the results checked by a human (author or editor) is still necessary.
Such an important discussion—thank you for highlighting the practical steps publishers, authors, and technologists can take to make images truly accessible. Alt text may seem like a small detail, but it’s central to equity in scholarly communication. I especially appreciate the call for collaboration between authors and publishers as well as the reminder that accessibility should be built into workflows from the start, not added as an afterthought.