Article: The academic, economic and societal impacts of Open Access: an evidence-based review

Citation: Tennant, J.P., Waldner, F., Jacques, D.C., Masuzzo, P., Collister, L., & Hartgerink, C.H.J. (June 2016) The academic, economic and societal impacts of Open Access: an evidence-based review [version 2]. F1000Research, 5:632. doi: 10.12688/f1000research.8460.2

In this recent article, the authors review the evidence on the impacts of Open Access (OA) in three areas: academia, economics, and society.

Their findings in each area are summarized in the tables below:

Impacts of OA on Academia

| Impact | Comments |
| --- | --- |
| “association with a higher documented impact of scholarly articles, as a result of availability and re-use” (p. 6) | OA articles are consistently cited in higher numbers and more quickly than non-OA articles, though research varies widely on how big the difference is (pp. 7-9). The impact does seem to trickle down to non-scholarly use of articles, judging from altmetrics (p. 9). |
| “non-restrictively allowing researchers to use automated tools to mine the scholarly literature” (p. 6) | In contrast to traditional publishing, which usually requires authors to cede copyright, the tendency of OA journals to request non-exclusive rights makes data- and text-mining easier; thus, OA articles are more “legally safe” for this kind of research (p. 10). |
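As an illustration of the kind of analysis that liberal licensing enables, here is a minimal text-mining sketch: counting occurrences of terms of interest across a corpus of article texts. The corpus and terms below are invented placeholders standing in for locally archived OA papers.

```python
from collections import Counter
import re

def term_frequencies(documents, terms):
    """Count how often each term of interest appears across a corpus.

    `documents` is a list of plain-text article strings (e.g. loaded
    from locally archived OA papers); `terms` are lowercase keywords.
    """
    counts = Counter()
    for text in documents:
        # Tokenize crudely into lowercase alphabetic words.
        for word in re.findall(r"[a-z]+", text.lower()):
            if word in terms:
                counts[word] += 1
    return counts

# Hypothetical two-document corpus standing in for downloaded OA articles.
corpus = [
    "Open access articles are cited more often than paywalled articles.",
    "Text mining of open access literature is legally safer.",
]
print(term_frequencies(corpus, {"access", "articles", "mining"}))
```

Real text-mining pipelines are far more sophisticated, but even this toy version is only legally straightforward when the underlying articles carry permissive licenses.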

Economic Impact

| Impact | Comments |
| --- | --- |
| Impact on Publishers | OA undeniably means that publishers need to recoup costs in other ways (p. 12). Many publishers have moved towards a “pay-to-publish” model, but this can raise barriers to participation for those without funds (p. 13). Other models with potential include shifting payments to libraries, a one-time-only author fee, and library-based publishers (p. 13). |
| Impact on Non-Publishers | OA models that charge authors to submit or publish have had an effect on research funding, and licensing and IP rights have also become problematic where state funds are in play (p. 14). |

Societal Impact

| Impact | Comments |
| --- | --- |
| On “other domains in society” | Access to knowledge is a human rights issue, and OA supports this by reducing barriers to access (p. 15). |
| In Developing Countries | Although the removal of paywalls can greatly benefit developing countries, pay-to-publish models run the risk of limiting support for OA in developing countries by locking authors out of the publication system (p. 16). |

In addition to explorations in these specific areas, the article contains a broad overview of the OA movement and its history, and boatloads of data. Altogether, it serves as a useful springboard to consider some of the issues at the heart of OA: access, free information, and equity.

Book: Transforming Scholarly Publishing through Open Access

Citation: Bailey, C. Transforming Scholarly Publishing through Open Access: A Bibliography. (2010). Retrieved from

Bailey’s Transforming Scholarly Publishing through Open Access was published as a web-based bibliographic monograph in 2010. The work includes a very brief overview defining Open Access, and then presents citations split into a number of broad categories:

  1. General Works
  2. Copyright Arrangements for Self Archiving and Use
  3. Open Access Journals
  4. E-prints
  5. Disciplinary Archives
  6. Institutional Repositories
  7. Open Archives Initiative and OAI-PMH
  8. Library Issues
  9. Conventional Publisher Perspectives
  10. Open Access Legislation, Government Reviews, Funding Agency Mandates, and Policies
  11. Open Access in Countries with Emerging and Developing Economies
  12. Open Access Books

Most of these are split into further sub-categories.

Although its age means that some categories are now dated and some of the URLs it references no longer work, the bibliography remains an excellent general resource for anyone looking for research and other materials on Open Access from the 2000s.

Bailey’s other works on Open Access topics can be found on his website.

Book: The Access Principle by John Willinsky

Citation: Willinsky, J. (2006). The Access Principle. Cambridge, MA: MIT Press. Retrieved from

This 2006 book on the OA movement aims to “inform and inspire a larger debate over the political and moral economy of knowledge that will constitute the future of research” (p. xvi). Each of its thirteen chapters—with one-word titles that make their focus clear—presents a combination of historical overview, survey of the then-current state of scholarly publishing, and arguments for OA.

Although the playing field has moved somewhat in the 10 years since the book’s publication, the vast majority of Willinsky’s descriptions are still on point, and his arguments are as cogent as they were in 2006.

The chapter on the economic challenges of scholarly and OA publishing, for example, holds up Elsevier’s ScienceDirect platform as an exemplar of providing increased access to research. After the establishment in 2012 of the “Cost of Knowledge” campaign boycotting Elsevier journals over business practices that researchers say restrict circulation and damage scholarly publishing, these remarks clearly no longer represent scholarly consensus.

All the same, Willinsky’s argument at the end of the chapter—that new publishing models must be pursued to counter rising journal prices and restrictive licensing—is just as relevant as it was in 2006, if not even more so.

Perhaps the most interesting parts of Willinsky’s book are the appendices. The first of these, “Ten Flavors of Open Access,” presents ten types of OA with different economic models and examples. These “flavors” include university subsidization of research on author home pages, author fees, partial OA, OA of bibliographic material for indexing purposes, and others. Additional appendices present details on the economics of scholarly associations, journal publishers, and setting up an OA cooperative, as well as statistical information on indexing and OA journals as of 2006.

Other Resource: “Accessibility Testing” at the W3C Wiki

Citation: Hawkes-Lewis, B. (2014) Accessibility testing. W3C Wiki. Retrieved from

The idea of carrying out web accessibility testing on a web site you’ve built can be daunting, especially if you’re not well-versed in web accessibility in the first place. This guide, originally created by Benjamin Hawkes-Lewis for Opera’s Web Standards Curriculum, introduces users to the idea behind web accessibility testing, provides an overview of basic concepts, and describes a number of methods that can be used to test a web site or other online resource for accessibility.

One of the key takeaways from this guide is also its shortest section: When to test for web accessibility. Hawkes-Lewis notes that “test early; test often” is the best advice, as trying to do all your web accessibility testing at the end of the development process can be expensive and time-consuming.

Before you can start doing that, though, you need to know the reason you’re testing. Your “external requirements,” such as government mandates, corporate or institutional best practices, or common userbase demands will all play a role in what you check for in the testing process. Hawkes-Lewis notes, however, that these “should only be the beginning of the process; they should be treated as a minimum set of requirements” instead of your end goal.

One way to get a handle on what kind of disabilities to test for is to create user personas—”fictional users that act as archetypes for how particular types of users would use a web site.” These personas can be used to more clearly understand the kinds of things your users might want to accomplish on your site, and can help you uncover some of the problems they might run into while doing so.

Hawkes-Lewis further breaks down accessibility testing into “expert testing” and “user testing.” Expert testing—probably the type most people think of—involves a web accessibility expert examining and analyzing either the public view of your web site (whether via a monitor, mouse, and keyboard or through the use of a specific web accessibility tool) or its code (manually or by automated checkers).
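To make the automated side of expert testing concrete, here is a toy checker that scans markup for two machine-detectable problems: images without `alt` text and a missing `lang` attribute on the root element. This is a minimal sketch of the category of tool Hawkes-Lewis describes, not one of the real checkers, and the checks shown are my own illustrative selection.

```python
from html.parser import HTMLParser

class AccessibilityChecker(HTMLParser):
    """Flag a few machine-detectable accessibility problems.

    Checks shown: <img> without an alt attribute, and <html>
    without a lang attribute. Real checkers cover far more.
    """
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "html" and "lang" not in attrs:
            self.issues.append("html missing lang attribute")
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt text")

checker = AccessibilityChecker()
checker.feed('<html><body><img src="chart.png"></body></html>')
print(checker.issues)  # both checks fail for this fragment
```

Tools like this catch only the mechanically verifiable subset of problems, which is why the guide pairs them with manual expert review and user testing.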

User testing, as the name implies, involves seeking out actual users—ideally those with actual disabilities—and observing them while they try to use your web site. As Hawkes-Lewis points out, this kind of user testing can quickly get expensive. However, he says that “even small-scale user testing” can have significant benefits for accessibility.

It’s important to realize that for user testing to really be effective, you don’t just want to put your testers in front of your web site and let them do whatever. Instead, you should have a set of tasks for each tester to try and complete. Observing testers try to complete these tasks can allow you to “uncover lots of problems you had not anticipated.”

Hawkes-Lewis provides a number of links to groups that might be approached for user testing purposes.

The final step in web accessibility testing according to Hawkes-Lewis is to communicate your results and to act on those results by working to improve your site. It’s worth remembering the adage he quotes early in the guide, however: “test early; test often.” By integrating accessibility testing into your design process, you can drastically improve your final web site.

Article: Exploring perceptions of web accessibility: A survey approach

Citation: Yesilada, Y., Brajnik, G., Vigo, M., & Harper, S. (2015). Exploring perceptions of web accessibility: A survey approach. Behaviour & Information Technology, 34(2): 119-134.

Yesilada et al. note that definitions of accessibility vary due to the “constantly evolving” nature of the field and the various sub-fields within it (p. 119). As the authors found in a previous study, “misunderstanding [of] accessibility definitions, language, and terms might cause tension between different groups,” leading to difficulties. This study, consisting of a survey of over 300 people “with an interest in accessibility” (p. 121), is the authors’ way of addressing the issue in the hopes of enabling more useful communication within the field.

The study was carried out via a survey distributed through several accessibility-related mailing lists, which asked participants to provide demographic information about themselves; to rank five different definitions of accessibility and/or write their own definition; and to agree or disagree with statements regarding web accessibility (p. 121).

The definitions used in the study are excerpted here from the web survey used, which is still available online:

  1. Web accessibility means that people with disabilities can use the Web. More specifically, Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web.
  2. Technology is accessible if it can be used as effectively by people with disabilities as by those without.
  3. The extent to which a product/website can be used by specified users with specified disabilities to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
  4. A website is accessible if it is effective, efficient and satisfactory for more people in more situations.
  5. The removal of all technical barriers to effective interaction.


The bulk of the survey, however, was given over to the rating of statements about the purpose of accessibility, its drivers, and how to enact it. These statements can be categorised as follows:

  • Usability vs accessibility – Does accessibility relate to usability? (p. 122)
  • Audience of accessibility – Is accessibility concerned mostly with people who have disabilities, or with a broader audience? (p. 122)
  • Legislature vs revenue – Is accessibility primarily driven by laws or by its effect on revenue? (p. 123)
  • Evaluation of accessibility – How can accessibility best be assessed? (pp. 123-124)
  • Dynamic and contextual – How is accessibility affected by “pages that change and the context in which a page is experienced”? (p. 124)
  • Standard definition – Is one important? (p. 124)
  • Accessibility and user experience – What is the relationship between accessibility and the user experience? (pp. 124-125)

The authors analyzed responses to these statements not only in aggregate, but also by correlating them with respondents’ self-reported demographics. Expertise—defined by the authors in terms of respondents’ time spent working on accessibility and their years in the field (p. 126)—technical background, work sector, area of specialisation, and whether or not participants were “in the trenches” all played a role in how respondents rated statements in the various areas.

Given the sometimes extreme variation in responses across respondent demographics, as well as the overall response patterns, the authors argue that more needs to be done at the educational level to teach accessibility as “interrelated” with usability and user experience (p. 131).

One particularly interesting note is that those who work in the government sector or who work on accessibility issues regularly are more likely to support statements about accessibility benefiting a broader group of people. As the authors note, there is sufficient evidence “showing how accessibility is also about those living in the developing world” or who are otherwise socially disadvantaged (p. 132), suggesting that more studies are needed to make this clearer to those who are not practitioners.

Ultimately, the authors conclude that accessibility’s breadth and continuing evolution make communication challenging, and that more studies like their own—as well as an approach to education which takes the broader context of accessibility into account—are needed to fully address the problem.

Article: The challenges of Web accessibility: The technical and social aspects of a truly universal Web

Citation: Brown, J. & Hollier, S. (2015). The challenges of Web accessibility: The technical and social aspects of a truly universal Web. First Monday, 20(9).

Brown & Hollier provide a high-level overview of various accessibility challenges as of late 2015, and argue that although there are still technical difficulties with creating accessible web content, the larger challenge is building awareness of accessibility problems in the first place.

The basic discussion of accessibility at the beginning of the article will hold few surprises for most web designers and accessibility researchers. However, the authors’ review of accessibility policies like Section 508 and WCAG, and of advances in technology ranging from mobile devices to more traditional adaptive technologies like screen readers, serves to illustrate a point: both end-user technologies and designers’ tools have grown more complex since the early days of the web.

The latter half of the article discusses assessment of accessibility and conformance testing. The authors summarize several points of consensus amongst researchers in this field:

  • Automated tools cannot substitute entirely for manual checking of accessibility issues
  • Assessment of accessibility can be as complicated as accessible design

The authors also outline the W3C’s suggested accessibility conformance evaluation methodology: “defin[e] the evaluation scope, explor[e] the target Web site, [select] a representative example of pages, [audit] those pages and [report] the findings.”

The authors also note that particular areas of concern for disabled web users are government websites—which can provide crucial services, and which tend to have some accessibility issues despite greater attention and assessment than corporate sites—and social media—where the variety and type of content and communication can make accessibility challenges even more difficult for end-users to overcome.

One especially interesting technology the authors review is “cloud accessibility,” wherein user preferences can be stored in the cloud and then accessed by individual machines so that the “working environment would adapt to the context of the user and their specialised requirements.” The authors do note, however, that as more and more services move to app-based environments, cloud-based services may be “superseded before they even move beyond the concept stage.”

Finally, the authors investigate awareness issues surrounding accessibility, suggesting that it—more than the development of specific technologies—”will have a greater impact on the uptake of accessible design.”

Despite its summary nature, this paper serves as a useful reference point for accessibility researchers: many papers discussing the challenges of creating accessible content are older, and the information they contain about technology is correspondingly less relevant. The authors’ argument that building awareness—and with it the skill sets required to build a more broadly accessible web—is the more effective way forward is also useful in guiding future research and accessibility initiatives.

Article: Automatic web accessibility metrics: Where we are and where we can go

Citation: Vigo, M. & Brajnik, G. (2011). Automatic web accessibility metrics: Where we are and where we can go. Interacting with Computers 23: 137-155. Retrieved from

In this article, the authors study seven quantitative metrics for reviewing web accessibility to determine which are the most reliable for assessing web sites. As the authors note, despite conformance criteria like the Web Content Accessibility Guidelines (WCAG) and numerous automated conformance-checking tools, metrics can provide more detailed quality control when comparing multiple web sites or multiple iterations of the same web site (p. 137).

The authors evaluated the metrics against the following criteria to determine which were of the highest quality:

  • Validity – “How well scores produced by a metric predict all and only the effects that real accessibility problems will have” and “how well scores mirror all and only the true violations” of conformance criteria like WCAG 2.0 (p. 138)
  • Reliability – Are the metrics consistent?
  • Sensitivity – Is the metric too sensitive to minor changes in accessibility level?
  • Adequacy – Does the metric report its findings in a consistent manner that can be adequately quantified?
  • Complexity – How many variables does the metric need to compute its scores, and/or do external tools exist to create the metric?

After briefly reviewing a large number of metrics, the authors analyzed a smaller set of automatic metrics in depth.

Of the metrics analyzed, only WAQM, WAB, and PM fulfilled the validity criteria (p. 151), with WAQM and WAB scoring slightly better than PM in terms of adequacy (p. 154). The authors note that even these three metrics are less than ideal, and suggest that researchers “focus more on quality aspects of accessibility metrics with the long-range goal” of improving them (p. 154).
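The metrics studied differ in their details, but many build on the same core quantity: a failure rate, the ratio of violated checks to applicable checks aggregated over a site’s sampled pages. Here is a minimal sketch of that computation; the per-page numbers are invented for illustration and this is not any one of the paper’s specific metrics.

```python
def failure_rate(pages):
    """Aggregate failure rate: total violations / total applicable checks.

    `pages` is a list of (violations, applicable_checks) tuples,
    one per sampled page of the site under evaluation.
    """
    violations = sum(v for v, _ in pages)
    applicable = sum(a for _, a in pages)
    # Guard against a sample with no applicable checks at all.
    return violations / applicable if applicable else 0.0

# Hypothetical audit of three pages: (violations, applicable checks).
sample = [(3, 40), (0, 25), (7, 35)]
print(f"site failure rate: {failure_rate(sample):.2f}")  # prints 0.10
```

A single ratio like this is exactly what the authors’ validity and sensitivity criteria probe: whether the number tracks the real accessibility problems users encounter, and whether it reacts proportionately to small changes in the site.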

Document: Amsterdam Call for Action on Open Science

The Amsterdam Call for Action on Open Science is a living document created during an EU Open Science conference in April of 2016.

The published report begins with a brief description of Open Science, making the case for the movement’s importance—chiefly, that it can “increase the quality and benefits of science by making it faster, more responsive to societal challenges, more inclusive and more accessible to new users” (p. 4). It then sets forth twelve actions that European member states, the EU Commission, and other stakeholders can take to reach full open access to scientific publications in Europe by 2020, and to make data sharing “the default approach” for publicly funded research by the same date (p. 5).

The twelve actions are:

  1. change assessment, evaluation, and reward systems in science
  2. facilitate text and data mining of content
  3. improve insight into intellectual property rights and issues such as privacy
  4. create transparency on the costs and conditions of academic communication
  5. introduce FAIR and secure data principles
  6. set up common e-infrastructures
  7. adopt open access principles
  8. stimulate new publishing models for knowledge transfer
  9. stimulate evidence-based research on innovations in open science
  10. develop, implement, monitor and refine open access plans
  11. involve researchers and new users in open science
  12. encourage stakeholders to share expertise and information on open science

The remainder of the document is devoted to in-depth examination of each of these twelve actions, describing which problem or problems each addresses, solutions to those problems, and concrete actions that can be taken. Despite the European focus of the call, many of these actions could easily be adopted on a broader scale.

One of the more interesting sets of concrete actions is the one put forward to address text and data mining of published research. Here, the call recommends that the EU Commission propose copyright reforms allowing “the use of [text and data mining] for academic purposes,” among others (p. 11).

A PDF of the call for action (from which the page numbers in this post are taken) can be downloaded from the EU 2016 website. The text of the call is also available on the SURFnet wiki, with comments from various people attached.

EU Resolution: Council Conclusions on the Transition Towards an Open Science System

On 27 May, 2016, the EU Council met to discuss the transition of their member states towards what they call an Open Science System.

The 18-point conclusion stems from several EU-based OA initiatives, including Horizon 2020 and several reports from the EU Commission that put forward OA dissemination of research—especially data-driven science—as the most efficient way to drive innovation and serve the public interest. The EU Council calls this dissemination “Open Science.”

The conclusion deals with publicly-funded research in particular, stating that it “should be made available in an as open as possible manner” without “unnecessary legal, organizational and financial barriers to access” (p. 5).

While this all sounds good, the conclusion is non-legislative. Most of its points either recognize initiatives already underway, like the Open Science Policy Platform, and existing statements like the Amsterdam Call for Action (p. 4), or recommend that various governments and commissions work to implement Open Science and other OA initiatives at the national level.

You can read the full resolution on the EU Council website.

Article: Gold or green: the debate on Open Access policies

Citation: Abadal, E. (2013). Gold or green: The debate on Open Access policies. International Microbiology 16: 199-203.

Abadal provides a brief discussion of green and gold OA prompted by the release of the 2012 Finch Report, a document produced for the British government in response to its request for a solution that could achieve OA publishing in the UK without harming the publishing industry (p. 200). That report recommended that Gold OA (in which journals make articles available free of cost for readers) be the “strategy for all science communications in the UK” (Abadal, p. 201).

Abadal makes excellent points about how gold OA can be problematic for authors in countries that lack established infrastructures for funding researchers (p. 202). Academic publishing fees often run to thousands of dollars, an amount that is unreasonable even for some institutions, let alone individual authors.

However, as Peter Suber points out in an overview of OA publishing on his website, there are several different business models for gold OA, listed here by the Open Access Directory, not all of which rely on authors paying fees. Indeed, Finch Report recommendation aside, “most OA journals (70%) charge no author-side fees at all.” (Source, which it’s worth noting is from 2006.)

It’s absolutely true that gold OA can be prohibitively expensive for authors without an institution to cover their costs, and equally true that green and other forms of OA serve a very useful function. While the Finch Report’s recommendation of an author-payment system is disappointing, it’s worth keeping in mind that gold OA isn’t always a pay-to-play scenario.