Other: QuESo – A Quality Model for Open Source Software Ecosystems

Citation: Franco-Bedoya, O., Ameller, D., Costal, D., & Franch, X. (2016). QuESo – A quality model for open source software ecosystems (UPC Report No. ESSI-TR-16-1). Barcelona: Universitat Politècnica de Catalunya. (link)

This resource is a little far afield from either Accessibility or Open Access, but it’s close enough to the latter to be included here.

This is a technical report by researchers from the Universitat Politècnica de Catalunya in Barcelona, Spain. It describes in some detail a model called QuESo, which can be used to measure the health of Open Source Software Ecosystems (which the authors abbreviate as OSSECO).

The QuESo model examines factors in a number of areas to arrive at a view of the overall health of a given Open Source Ecosystem, as shown in the figure below:
The QuESo model measures the quality of a software ecosystem's community and network

QuESo measures not just a specific piece of software, but its community and network health.

For community quality, areas measured are:
Maintenance capacity (size and activeness)
Process maturity
Sustainability (heterogeneity, regeneration ability, effort balance, expertise balance, visibility, and community cohesion)

For network quality, areas measured are:
Resources health (trustworthiness, vitality, OSSECO knowledge, and niche creation)
Network health (interrelatedness ability, synergetic evolution, information consistency, and ecosystem cohesion)
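
For readers who want to see how the dimensions listed above hang together, here is a minimal sketch of the QuESo hierarchy as a data structure. The sub-characteristics come from the report's lists; the scores, weights, and the simple averaging roll-up are my own illustrative assumptions and are not part of the report.

```python
# A minimal sketch of the QuESo dimensions as a data structure.
# The sub-characteristics mirror the report's lists above; the scoring and
# averaging below are illustrative assumptions only, not QuESo's own metrics.

QUESO_MODEL = {
    "community_quality": {
        "maintenance_capacity": ["size", "activeness"],
        "process_maturity": [],
        "sustainability": ["heterogeneity", "regeneration_ability",
                           "effort_balance", "expertise_balance",
                           "visibility", "community_cohesion"],
    },
    "network_quality": {
        "resources_health": ["trustworthiness", "vitality",
                             "osseco_knowledge", "niche_creation"],
        "network_health": ["interrelatedness_ability", "synergetic_evolution",
                           "information_consistency", "ecosystem_cohesion"],
    },
}

def aggregate(scores: dict) -> float:
    """Toy roll-up: average whatever sub-characteristic scores are supplied.
    QuESo defines specific metrics for each leaf; this is only a stub."""
    values = [v for v in scores.values() if v is not None]
    return sum(values) / len(values) if values else 0.0

# Hypothetical usage: score each leaf 0-1 however you measure it, then roll up.
print(f"maintenance_capacity ≈ {aggregate({'size': 0.7, 'activeness': 0.4}):.2f}")
```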

QuESo claims to measure an entire ecosystem (e.g. that of all Open Source journal management systems), but there is some confusion on this point, as many of its measures seem to refer specifically to a single product's community. Presumably this confusion arises because the authors are interested in measuring large projects that may spawn multiple pieces of software from a single original product.

In effect, this confusion means that QuESo can do double duty, examining not only ecosystems but also the community around a specific open source project. Although some of its measures are overkill for most OA advocates' purposes, the model is a useful tool to have when evaluating Open Source software for creating OA repositories, journals, and other resources.

Article: Exploring Usefulness and Usability in the Evaluation of Open Access Digital Libraries

Citation: Tsakonas, G., & Papatheodorou, C. (2008). Exploring usefulness and usability in the evaluation of open access digital libraries. Information Processing and Management, 44: 1234-1250. [url]

This article explores the usability of OA digital libraries (DLs). Unlike many evaluations of scholarly websites, which tend to rely on accessibility-related tools, the focus here is on usability, specifically via the Interaction Triptych Framework (ITF), an evaluation model that considers a system as a set of interactions, along the axes of usability, usefulness, and performance, between the system's elements, which in this case are the DL, its content, and the user (p. 1237).

A chart showing digital libraries as interactions along the axes of usability, usefulness, and performance: the Interaction Triptych Framework for Digital Libraries

As in most articles that examine accessibility and usability, the authors are interested in one particular digital library: E-LIS, a library science repository running on the EPrints system. Unlike in accessibility-focused articles, however, the site was evaluated not with automated tools or testing but by means of a questionnaire completed by the DL's users. A regression analysis was then carried out on each category to gauge the general success of each axis.

The authors give some general conclusions about users' expectations with regard to the usability and usefulness of E-LIS content, but seem to have no specific insights into how the results might be applied more broadly to designing usable and useful DLs that perform well.

Given that the article is focused on usability, not accessibility as such, its conclusions may be of limited use to those interested in web accessibility. However, the difference of approach and the exploration of the ITF may be of use to researchers looking to apply different paradigms to accessibility evaluation.

Tool: oaDOI

oaDOI is a recently launched tool that works much like a DOI, directing users to a permanent link for a given article. The key difference is that oaDOI is OA-friendly: it will direct end users to an OA version of the article if one is available. [link]

The tool has two parts: a link-generating service similar to bit.ly and other link shorteners, and an API that can be used to implement this behavior in other environments.

Generate an oaDOI link

The link-generating service is simple to use. Just get the DOI link for an article and paste it into the textbox at https://oadoi.org/.

The system will process the request and provide you with an oaDOI link you can distribute to direct users to an OA version of the article, if possible:

oaDOI.org presents a link to OA versions of an article

As seen above, the results page also indicates whether the system was able to find an OA version and, if so, where it was found and how open that version is. The system also provides a link to API information.
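
If you have many DOIs to convert, you don't have to use the web form one at a time: because an oaDOI link is (as described above) just a DOI-style perma-link on the oadoi.org domain, something like the sketch below should work. The URL pattern is an assumption based on the tool's DOI-like design; check oadoi.org for the current format.

```python
# Minimal sketch: turn a list of DOIs into oaDOI links.
# Assumes the perma-link pattern https://oadoi.org/<doi>, which follows from
# the tool's DOI-like design; verify the pattern on oadoi.org before relying on it.

def oadoi_link(doi: str) -> str:
    # Accept either a bare DOI or a full resolver link.
    doi = doi.replace("https://doi.org/", "").replace("http://dx.doi.org/", "")
    return f"https://oadoi.org/{doi}"

for doi in ["10.1186/s12916-015-0469-2", "https://doi.org/10.7710/2162-3309.2132"]:
    print(oadoi_link(doi))
```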

API

The API is for more advanced users who wish to take advantage of the system’s ability to find OA articles in other contexts (e.g. in an OpenURL resolver).

The oaDOI API page provides a rundown of functionality and example code, as well as a few examples of other code libraries and projects that use the API.
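
As a rough sketch of what such an integration might look like, the snippet below asks the API whether an OA copy of a given DOI exists. The endpoint URL and the JSON field name here are placeholders of my own, not confirmed details; consult the oaDOI API page for the actual endpoint, parameters, and response format.

```python
# Hypothetical sketch of querying the oaDOI API for an OA copy of an article.
# The endpoint and response field below are assumptions for illustration --
# check the oaDOI API documentation for the real ones.
from typing import Optional
import requests

API_BASE = "https://api.oadoi.org"   # assumed base URL; see the API docs


def find_oa_copy(doi: str, email: str) -> Optional[str]:
    resp = requests.get(f"{API_BASE}/{doi}", params={"email": email}, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # Field name is an assumption; the real API response may differ.
    return data.get("free_fulltext_url")


url = find_oa_copy("10.1186/s12916-015-0469-2", email="you@example.org")
print(url or "No OA copy found")
```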

Report: 100 Stories – The Impact of Open Access

Citation: Bankier, J-G., & Chatterji, P. (2016). “100 Stories: The Impact of Open Access” (Preprint). Open Access to Scholarly Communication in 2016: Status and Benefits Review. UNESCO. [link]

This report, authored by the CEO of Bepress and a senior employee, ostensibly aims to supplement altmetrics and other measures by providing a “framework” that shows some of the ways in which institutional repositories can have an impact on readers (“Advancing Knowledge”), authors (“Reputation Building”), and institutions (“Demonstrating Achievements”) (pp. 2-3). It does so by presenting what are essentially 100 short case studies of institutions that use Digital Commons, the repository software owned by Bepress.

Some examples of the impacts the authors present are:

Readers:
Advancing innovation
Improving access to education
Updating practitioners

Authors:
Amplifying scholarship
Finding collaborators
Preserving scholarly legacy

Institutions:
Building reputation
Strengthening recruiting
Professionalizing students

Although these all seem fairly undeniable as things that OA can accomplish, it’s a bit of a stretch to label what the authors have created as a framework, a term which the OED defines as “an essential or underlying structure; a provisional design, an outline; a conceptual scheme or system.”

Rather, the document presents something closer to a list of possible outcomes that can be achieved by using an institutional repository or other system to distribute scholarly research under an OA model.

That’s still a fine and useful thing, and the list of case studies should be of interest to anyone looking to start up an institutional repository.

Ultimately, however, the overselling of these outcomes as a framework, coupled with Bepress's obvious incentive to showcase its own product, makes this report feel closer to advertising than to a study of the impacts of OA publishing.

Other resource: Open Access Week

Open Access Week is a web-based, international project celebrating OA in all its forms.

The project’s website, http://openaccessweek.org/, includes a list of events, a forum to discuss OA topics, and other resources.

Although the forum and resources are intended for a general audience not already familiar with OA publishing, the project serves a number of useful purposes for librarians/OA practitioners:

  • Provides a way to publicize your own OA projects
  • Gives a glimpse of what other OA practitioners are doing around the world
  • Provides an opportunity to network

The project is organized by SPARC (the Scholarly Publishing and Academic Resources Coalition).

Article: ‘Predatory’ open access: a longitudinal study of article volumes and market characteristics

Citation: Shen, C., & Björk, B-C. (2015). ‘Predatory’ open access: a longitudinal study of article volumes and market characteristics. BMC Medicine, 13: 230. DOI: 10.1186/s12916-015-0469-2

This article presents a study of predatory open access publishers—those who publish journals and books with “highly questionable marketing and peer review practices.” The authors used Beall’s List of predatory OA publishers (which I discuss here) to generate a random sample of 613 journals, and then manually gathered data on the subject, geographical location, processing charges, and volumes published between 2010 and 2014 for these journals.

The authors found that the number of predatory OA journals that had published at least one article grew from 1,800 in 2010 to roughly 8,000 in 2014. Additionally, around 420,000 articles were published in these journals in 2014, up from 53,000 in 2010. Journals with no specific scientific subject published the most, followed by those in engineering and biomedicine. There were some difficulties in determining geographical location, but India accounted for the largest share at 27%, followed by North America at 17.5%. This and more data can be reviewed in detail in the article in BMC Medicine.

Beyond these results and others, the authors call into question the term ‘predatory,’ noting that most authors in these journals “probably submit to them well aware of the circumstances and take a calculated risk that experts who evaluate their publication lists will not bother to check the journal credentials in detail.” Instead of ‘predatory,’ they prefer the phrase quoted above: “open access journals with questionable marketing and peer review practices,” although they admit that, as ‘predatory’ is a well-established term, it is unlikely to change.

Article: Measuring Altruistic Impact – A Model for Understanding the Social Justice of Open Access

Citation: Heller, M., & Gaede, F. (2016). Measuring Altruistic Impact: A Model for Understanding the Social Justice of Open Access. Journal of Librarianship and Scholarly Communication, 4, eP2132. DOI: http://doi.org/10.7710/2162-3309.2132

In this paper from August 2016, the authors argue for assessing the impact of repositories on two levels: the pragmatic level and the social justice level (p. 2). To this end, they have created a “social justice impact metric,” which combines the number of social justice-related items accessed with the total amount of international usage from “less-resourced” countries (p. 3).

After establishing an overview of social justice as it pertains to Open Access (OA), the authors argue that since OA is “a social and public good” (p. 3), traditional measures of impact such as the number of citations or downloads are insufficient. Instead, they suggest measuring “social justice impact,” which shows how an OA repository or publication is likely to affect those otherwise without access to information that has become “vital to success in our information economy” (p. 5).

To create their social justice impact metric, the authors measured how often content is reached via search engines and how often it is accessed in “lower-resourced countries” (p. 8). Data were gathered with Google Analytics, looking at both search engine keywords related to social justice and geographic usage (p. 9). Keywords were drawn from a corpus created by the authors (p. 10), included as an appendix to the article.
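
To make the approach concrete, here is a rough sketch of the kind of calculation the method implies, working from an exported analytics report rather than the Google Analytics interface the authors used. The column names, country list, and keywords are placeholders of my own; the authors' actual keyword corpus is in their appendix, and their measurements were made in Google Analytics itself.

```python
# Illustrative sketch of a "social justice impact"-style calculation.
# Assumes a CSV export with one row per session, containing at least a
# 'country' and a 'search_keywords' column; these names, the country list,
# and the keyword list are placeholders, not the authors' instruments.
import csv

LOWER_RESOURCED_COUNTRIES = {"Kenya", "Bangladesh", "Bolivia"}   # placeholder list
SOCIAL_JUSTICE_KEYWORDS = {"equity", "poverty", "human rights"}  # see the article's appendix

def social_justice_shares(path: str) -> tuple[float, float]:
    """Return (share of sessions arriving via social justice keywords,
    share of sessions from lower-resourced countries)."""
    total = kw_hits = geo_hits = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            words = row["search_keywords"].lower()
            if any(k in words for k in SOCIAL_JUSTICE_KEYWORDS):
                kw_hits += 1
            if row["country"] in LOWER_RESOURCED_COUNTRIES:
                geo_hits += 1
    if total == 0:
        return 0.0, 0.0
    return kw_hits / total, geo_hits / total

# keyword_share, geo_share = social_justice_shares("analytics_export.csv")
```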

Anyone looking for a single figure like those produced by altmetrics or the journal impact factor will be disappointed by the results of the authors' analysis, which is better described as a method for measuring how often international users access repository content related to social justice, along with suggestions for how readers might most successfully increase access to social justice-related content at their own institutions.

All the same, the argument that providing access to information to those who would not otherwise have it should be a core part of measuring the success of OA repositories is a compelling one. As the authors note, we all too often focus solely on academic impact, and should not forget that broader social good comes out of OA work as well.

Other resource: OpenDOAR and ROAR

OpenDOAR (the Directory of Open Access Repositories) and ROAR (the Registry of Open Access Repositories) are two similar but unrelated websites that list OA repositories. Given the similarity of the two projects, both are briefly reviewed in this post.

OpenDOAR

OpenDOAR is a project of the Centre for Research Communications at the University of Nottingham in the UK. The directory currently holds information about 3,182 OA repositories. The “Find” page, which lists search results (and can also act as a browse feature), shows a description of each repository along with its software, number of items (and last update date), subjects, content types, languages, and policies. By default, this page returns summaries. Clicking the “Link to this record” link next to each repository provides more detail about its policies, and a little more about the repository and its institution in general, but otherwise this screen is identical to what appears on the brief results page.

Users can instead choose to have the results returned as a chart, table, or Google Map. Chart options include the number of repositories by content, country, type, and other criteria. These charts can be embedded in other web pages, as described in a PowerPoint presentation on the “Tools” page.

Users can also search the contents of repositories listed in OpenDOAR via a Google search page, and suggest that a new repository be added to the directory.

ROAR

ROAR is a project of the University of Southampton, also in the UK, and (as of the date of this post) lists 4,322 repositories. This is more than OpenDOAR, in part because OpenDOAR seems to have stricter weeding policies. Users can search for repositories using an on-site search page with a number of options, and can also search for content inside repositories using a custom Google search (which, at the time of this posting, was not working). Additionally, the repositories can be browsed by country, year, type of repository, institution, and software type.

Clicking the “record details” link next to a repository’s information will provide more details, such as when the repository was created, what kind of content it contains, where it is based, and its number of records.

Beyond just listing repositories, ROAR allows users to create charts and graphical analyses and to export results in various formats. It is possible, for instance, to generate a graph showing the number of known repositories by year for a given country or topic, making ROAR a useful tool for OA scholars. Additionally, results pages show not just how many repositories there are for a given topic (or country, etc.) but also how active those repositories are, including the number of deposited records.

Like many web-based lists, ROAR allows users to add new records; you will need to create an account if you wish to add your repository to the list.

The project notes that (at this time) automated harvesting of repositories is not working correctly, so the article counts shown for each repository may be inaccurate.

OpenDOAR or ROAR?

The two directories differ slightly in what they present to the viewer. OpenDOAR seems to do a more effective job of providing a current picture of OA repositories, whereas ROAR provides a clearer picture of their historical numbers. The ROAR website is also a bit buggy at the moment, with several features not working properly; OpenDOAR does not seem to have this problem.

Ultimately, both are useful sites for researchers interested in finding OA content or in researching green OA.

Article: Balancing pedagogy, student readiness and accessibility: A case study in collaborative online course development

Citation: van Rooij, S.W., & Zirkle, K. (2016). Balancing pedagogy, student readiness and accessibility: A case study in collaborative online course development. Internet and Higher Education, 28: 1-7. DOI: 10.1016/j.iheduc.2015.08.001

This article presents a case study of an online course developed at George Mason University in Virginia to teach students how to learn online. The authors (who were the leads on the project) were tasked with making sure that the course was accessible as well as pedagogically sound.

Much of the study describes the setting of the university and the course; the main finding as far as accessibility is concerned is that course creators are better off determining accessibility needs before creating content (p. 4). As the authors put it, content creators “need to integrate accessibility services into the process early on and to continue those services throughout the design, development and implementation of the online course” (p. 6).

While the study's findings elsewhere are certainly useful to creators of online courses, its findings regarding accessible course creation are hardly new.

Conference Paper: An evaluation of the accessibility of top-ranking university websites: Accessibility rates from 2005 to 2015

Citation: Alahmadi, T., & Drew, S. (2016). An evaluation of the accessibility of top-ranking university websites: Accessibility rates from 2005 to 2015. In DEANZ2016: There and back – Charting flexible pathways in open, mobile and distance education, April 2016. Hamilton, New Zealand: The University of Waikato. [link]

This paper presents a review of university websites around the world, in Oceania, and in Arab countries, using the AChecker accessibility tool to gauge the number of WCAG AA-level errors on the home page of each selected university, as well as on a randomly sampled page from its admissions and course description websites (p. 226).
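
For readers who want to run this kind of scan themselves, AChecker has offered a web-service interface alongside its web form; the sketch below shows roughly how a page might be checked against WCAG 2.0 AA and the reported known errors counted. The endpoint URL, parameter names, and XML tags here are assumptions based on my recollection of AChecker's documentation (and the service requires a web-service ID tied to an account), so verify the details against the current docs before using this.

```python
# Hedged sketch: counting WCAG 2.0 AA problems reported by AChecker's web service.
# The endpoint, parameter names, and XML tags below are assumptions and may have
# changed; treat this as illustrative only.
import requests
import xml.etree.ElementTree as ET

ACHECKER_ENDPOINT = "https://achecker.ca/checkacc.php"  # assumed endpoint


def count_known_errors(page_url: str, webservice_id: str) -> int:
    resp = requests.get(ACHECKER_ENDPOINT, params={
        "uri": page_url,
        "id": webservice_id,   # obtained from an AChecker account (assumption)
        "guide": "WCAG2-AA",   # assumed identifier for the WCAG 2.0 AA guideline set
        "output": "rest",
    }, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    # Count results flagged as known errors (tag and value names are assumptions).
    return sum(1 for r in root.iter("resultType") if r.text == "Error")

# print(count_known_errors("https://www.example.edu/", webservice_id="YOUR_ID"))
```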

The study results (perhaps as expected) show that there are numerous accessibility errors on pages around the world. As the authors note, errors are high regardless of region, whether the university is “in the developed world, in countries such as the US, UK, Australia and Japan, or in developing countries, such as Egypt, Saudi Arabia and Lebanon” (p. 229).

However, the data as presented in the article make it hard to compare relative success and failure rates by region, as the tables and charts only provide a summary of the total dataset or compare the sum of all errors found in the “global”, “Oceania”, and “Arab” regions. Given the study's focus on these three regions, more granularity in the data would be helpful.

Despite this minor flaw in its presentation, the study provides a useful reminder that web accessibility is still a problem in universities around the world. As the authors say, the increasing importance of web-based learning management systems and the use of the web to distribute course materials and other forms of media to students make web accessibility a higher priority than ever before (p. 232).