Other Resource: QuESo – A Quality Model for Open Source Software Ecosystems

Citation: Franco-Bedoya, O., Ameller, D., Costal, D., & Franch, X. (2016). QuESo – A quality model for open source software ecosystems (UPC Report No. ESSI-TR-16-1). Barcelona: Universitat Politècnica de Catalunya. (link)

This resource is a little far afield from either Accessibility or Open Access, but it’s close enough to the latter to be included here.

This is a technical report by researchers from the Universitat Politècnica de Catalunya in Barcelona, Spain. It describes in some detail a model called QuESo, which can be used to measure the health of Open Source Software Ecosystems (which the authors abbreviate as OSSECO).

The QuESo model examines factors in a number of areas to arrive at a view of the overall health of a given Open Source Ecosystem, as shown in the figure below:
The QuESo model measures the quality of a software ecosystem’s community and network

QuESo measures not just a specific piece of software, but its community and network health.

For community quality, areas measured are:

  • Maintenance capacity (size and activeness)
  • Process maturity
  • Sustainability (heterogeneity, regeneration ability, effort balance, expertise balance, visibility, and community cohesion)

For network quality, areas measured are:

  • Resources health (trustworthiness, vitality, OSSECO knowledge, and niche creation)
  • Network health (interrelatedness ability, synergetic evolution, information consistency, and ecosystem cohesion)

QuESo claims to measure an entire ecosystem (e.g. that of all Open Source journal management systems), but there is a little bit of confusion on this point, as many of its measures seem to refer specifically to a single product’s community. Presumably this confusion comes about because the authors are interested in measuring large products which may have multiple iterations of software coming out of a single original product.

In effect, this confusion means that QuESo can do double duty, examining not only whole ecosystems but also a single Open Source project and its users. Although some of its measures are overkill for most OA advocates’ purposes, the model is a useful tool to have when evaluating Open Source software for creating OA repositories, journals, and other resources.

Other resource: Open Access Week

Open Access Week is a web-based, international project celebrating OA in all its forms.

The project’s website, http://openaccessweek.org/, includes a list of events, a forum to discuss OA topics, and other resources.

Although the forum and resources are intended for a general audience not already familiar with OA publishing, the project serves a number of useful purposes for librarians/OA practitioners:

  • Allows for a way to publicize your own OA projects
  • Gives a glimpse of what other OA practitioners are doing around the world
  • Provides an opportunity to network

The project is organized by SPARC (the Scholarly Publishing and Academic Resources Coalition).

Other Resource: OpenDOAR; ROAR

OpenDOAR (the Directory of Open Access Repositories) and ROAR (Registry of Open Access Repositories) are two similar, but unrelated web sites which list OA repositories. Given the similarity of the two projects, both will be briefly reviewed in this post.

OpenDOAR

OpenDOAR is a project of the Centre for Research Communications at the University of Nottingham in the UK. The directory currently holds information about 3182 OA repositories. The “Find” page, which returns search results (and can also act as a browse feature), shows a description of each repository along with its software, number of items (and date of last update), subjects, content types, languages, and policies. By default, this page returns summaries. Clicking the “Link to this record” link next to a repository provides more detail about its policies and a little more information about the repository and its institution in general, but is otherwise identical to the brief results page.

Users can instead choose to have results returned as a chart, table, or Google Map. Chart options include the number of repositories by content, country, type, and other criteria. These charts can be embedded in other web pages, as described in a PowerPoint on the “Tools” page.

Users can also search the contents of repositories listed in OpenDOAR via a Google search page, and suggest that a new repository be added to the directory.

ROAR

ROAR is a project of the University of Southampton, also in the UK, and (as of the date of this post) lists 4322 repositories (more than OpenDOAR in part because OpenDOAR seems to have stricter weeding policies). Users can search for repositories using an on-site search page with a number of options, and can also search for content inside of repositories using a custom Google search (which, at the time of this posting, was not working). Additionally, the repositories can be browsed by country, year, type of repository, institution, and software type.

Clicking the “record details” link next to a repository’s information will provide more details, such as when the repository was created, what kind of content it contains, where it is based, and its number of records.

Beyond just listing repositories, ROAR allows you to create charts and graphical analyses and to export results in various formats. It is, for instance, possible to generate a graph showing the number of known repositories by year in a certain country or topic, making ROAR a useful tool for OA scholars. Additionally, results pages show not only how many repositories match a given topic (or country, year, etc.) but also how active those repositories are, including the number of deposited records.

Like many web lists, ROAR allows users to add new records. You will need to create an account if you wish to add your repository to the list.

The project notes that (at this time) automated harvesting of repositories is not working correctly, so the number of articles shown for each repository is incorrect.

OpenDOAR or ROAR?

The two directories differ slightly in what they present to the viewer. OpenDOAR seems to do a more effective job of providing a current picture of OA repositories, whereas ROAR provides a clearer picture of their historical numbers. The ROAR web site is also a bit buggy at the moment, with several features not working properly; OpenDOAR does not seem to share this problem.

Ultimately, both are useful sites for researchers interested in finding OA content or in researching green OA.

Other Resource: “Accessibility Testing” at the W3C Wiki

Citation: Hawkes-Lewis, B. (2014). Accessibility testing. W3C Wiki. Retrieved from https://www.w3.org/wiki/Accessibility_testing

The idea of carrying out web accessibility testing on a web site you’ve built can be daunting, especially if you’re not well-versed in web accessibility in the first place. This guide, originally created by Benjamin Hawkes-Lewis for Opera’s Web Standards Curriculum, introduces users to the idea behind web accessibility testing, provides an overview of basic concepts, and describes a number of methods that can be used to test a web site or other online resource for accessibility.

One of the key takeaways from this guide is also its shortest section: When to test for web accessibility. Hawkes-Lewis notes that “test early; test often” is the best advice, as trying to do all your web accessibility testing at the end of the development process can be expensive and time-consuming.

Before you can start doing that, though, you need to know the reason you’re testing. Your “external requirements,” such as government mandates, corporate or institutional best practices, or common userbase demands will all play a role in what you check for in the testing process. Hawkes-Lewis notes, however, that these “should only be the beginning of the process; they should be treated as a minimum set of requirements” instead of your end goal.

One way to get a handle on what kind of disabilities to test for is to create user personas—”fictional users that act as archetypes for how particular types of users would use a web site.” These personas can be used to more clearly understand the kinds of things your users might want to accomplish on your site, and can help you uncover some of the problems they might run into while doing so.

Hawkes-Lewis further breaks down accessibility testing into “expert testing” and “user testing.” Expert testing—probably the type most people think of—involves a web accessibility expert examining and analyzing either the public view of your web site (whether via a monitor, mouse, and keyboard or through the use of a specific web accessibility tool) or its code (manually or by automated checkers).
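
To make the “automated checkers” idea concrete, here is a minimal Python sketch (not from the guide) that flags images lacking alternative text on a page. It is only illustrative: the URL is a placeholder, full checkers such as WAVE or axe test far more than alt text, and an empty alt attribute is legitimate for decorative images, so the output still requires human judgment.

    # Minimal sketch of one automated accessibility check: flag <img> elements
    # with missing or empty alt text. Illustrative only; the URL below is a
    # placeholder, and alt="" is valid for purely decorative images, so the
    # results need human review.
    import requests
    from bs4 import BeautifulSoup

    def images_missing_alt(url):
        """Return the src of every <img> with a missing or empty alt attribute."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        return [img.get("src", "(no src)")
                for img in soup.find_all("img")
                if not img.get("alt")]

    if __name__ == "__main__":
        for src in images_missing_alt("https://example.org/"):
            print("Missing or empty alt text:", src)

A check like this can run as part of a build or publishing workflow, which fits the “test early; test often” advice: small automated passes catch regressions long before a full expert or user-testing round.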

User testing, as the name implies, involves seeking out actual users—ideally those with actual disabilities—and observing them while they try to use your web site. As Hawkes-Lewis points out, this kind of user testing can quickly get expensive. However, he says that “even small-scale user testing” can have significant benefits for accessibility.

It’s important to realize that for user testing to really be effective, you don’t just want to put your testers in front of your web site and let them do whatever. Instead, you should have a set of tasks for each tester to try and complete. Observing testers try to complete these tasks can allow you to “uncover lots of problems you had not anticipated.”

Hawkes-Lewis provides a number of links to groups that might be approached for user testing purposes.

The final step in web accessibility testing according to Hawkes-Lewis is to communicate your results and to act on those results by working to improve your site. It’s worth remembering the adage he quotes early in the guide, however: “test early; test often.” By integrating accessibility testing into your design process, you can drastically improve your final web site.

Document: Amsterdam Call for Action on Open Science

The Amsterdam Call for Action on Open Science is a living document created during an EU Open Science conference in April of 2016.

The published report begins with a brief description of Open Science, making the case for the movement’s importance: chiefly, that it can “increase the quality and benefits of science by making it faster, more responsive to societal challenges, more inclusive and more accessible to new users” (p. 4). It then sets forth twelve actions which European member states, the EU Commission, and other stakeholders can take to reach full open access to scientific publications in Europe by 2020, and to make data sharing “the default approach” for publicly funded research by the same date (p. 5).

The twelve actions are:

  1. change assessment, evaluation, and reward systems in science
  2. facilitate text and data mining of content
  3. improve insight into intellectual property rights and issues such as privacy
  4. create transparency on the costs and conditions of academic communication
  5. introduce FAIR and secure data principles
  6. set up common e-infrastructures
  7. adopt open access principles
  8. stimulate new publishing models for knowledge transfer
  9. stimulate evidence-based research on innovations in open science
  10. develop, implement, monitor and refine open access plans
  11. involve researchers and new users in open science
  12. encourage stakeholders to share expertise and information on open science

The remainder of the document is devoted to in-depth examination of each of these twelve actions, describing which problem or problems each addresses, solutions to those problems, and concrete actions that can be taken. Despite the European focus of the call, many of these actions could easily be adopted on a broader scale.

One of the more interesting sets of concrete actions is the one put forward to address text and data mining of published research. Here, the call recommends that the EU Commission propose copyright reforms allowing “the use of [text and data mining] for academic purposes” as well as others (p. 11).

A PDF of the call for action (from which the page numbers in this post are taken) can be downloaded from the EU 2016 website. The text of the call is also available on the SURFnet wiki, with comments from various people attached.

EU Resolution: Council Conclusions on the Transition Towards an Open Science System

On 27 May 2016, the EU Council met to discuss the transition of its member states towards what it calls an Open Science System.

The 18-point conclusion stems from several EU-based OA initiatives, including Horizon 2020 and several reports from the EU Commission which put forward OA dissemination of research—especially data-driven science research—as the most efficient way to drive innovation and serve the public interest. The EU Council calls this dissemination “Open Science.”

The conclusion deals with publicly-funded research in particular, stating that it “should be made available in an as open as possible manner” without “unnecessary legal, organizational and financial barriers to access” (p. 5).

While this all sounds good, the conclusion is non-legislative. The majority of its points either recognize initiatives already underway, like the Open Science Policy Platform, and existing statements like the Amsterdam Call for Action (p. 4), or recommend that various governments and commissions work to implement Open Science and other OA initiatives at the national level.

You can read the full resolution on the EU Council website.

Webtext: Access/ibility: Access and Usability for Digital Publishing

Access/ibility: Access and Usability for Digital Publishing is a free-to-use webtext published in issue 20.2 of Kairos: A Journal of Rhetoric, Technology, and Pedagogy.

The webtext comes from a seminar of the same name at West Virginia University, and includes a number of resources on the intersection of accessibility and digital publishing, such as short essays arguing for the importance of accessibility, a set of best practices for creators and editors of web content, and a bibliography for further reading.

“The Case for Accessibility as in Usability” makes a strong case for thinking of access to digitally published documents as meaning more than just “free information” (source), and gives librarians and other OA advocates tools and guidelines to get started doing so without delay.

Blog Series on Accessibility at The “Lib Pub” Blog

Since July of 2015, the “Lib Pub” blog has been publishing an occasional series of posts about accessibility in digital publishing. These posts came out of a seminar held at West Virginia University titled Access/ibility in Digital Publishing, and for the most part consist of discussions of that seminar or basic discussions of how to make things accessible on the web. However, the posts as a whole make some interesting points about the intersection of digital publishing and Open Access (OA) with accessibility.

In her post, “A library perspective”, Susan Ivey discusses the conflict between access and accessibility, and notes that librarians tend to concentrate on access in terms of standardized metadata, linked data, and other technical considerations, and can lose sight of accessibility in a broader sense.

In “A role for libraries”, Sarah Kennedy mentions two concepts in particular: lo-fi production technologies and perseverant design.

Lo-fi production technologies (as discussed by Karl Stolley in the Lo-Fi Manifesto) are just what they sound like—low-tech methods of creating and distributing content. These technologies are less likely to obsolesce, and are also more likely to be human-readable with little or no additional work. (Note that this does not necessarily mean they are accessible, though. The header in Stolley’s manifesto is made up of ASCII art—certainly not screen-reader friendly, and probably difficult to read in general for people not used to the font.)

Perseverant Design is a term used by Melanie Yergeau which refers to reappropriating “perseverant behaviors”—those “which are restrictive and repetitive and which do not necessarily follow appropriately with the social context”. Like Kennedy, I am intrigued by this idea, but am uncertain how it could be used in practice. Still, the idea of phrasing design in these terms is an interesting one.

Beyond these posts are several which focus on technical details of creating accessible content: a post by Melanie Schlosser on accessible publishing in HTML, a post by Sarah Kennedy which lays out accessibility testing workflows and tools, and a post (published today) by Kevin Hawkins which briefly discusses accessibility in journal publishing.

Other resource: Directory of Open Access Journals

The Directory of Open Access Journals (DOAJ) is an index of OA peer-reviewed journals maintained and updated by volunteer editors (of whom the author of this blog is one). At the time of this update, there were roughly 9,000 OA journals listed in the directory.

DOAJ has existed since 2003, initially as a project of Sweden’s Lund University; it currently operates as its own non-profit entity, managed by Infrastructure Services for Open Access, a UK-based company which aims to “facilitate easy access to OA resources” (source).

The bulk of DOAJ’s site consists of a searchable, browsable list of vetted OA journals. End-users can search the list or browse it by DOAJ-editor-selected subject area—although the search interface can be a little buggy sometimes.

Each journal page contains basic information about the journal’s aims and scope, the type of peer review used, and links to its instructions for authors and editorial board. Each journal page also notes whether or not the journal levies article processing charges (APCs) or submission charges on authors publishing in or submitting to it, as well as whether those fees can be waived in some situations. However, this information is not always available for journals which were added to the directory earlier in its history and which have not recently been reviewed. Some journals also provide article-level metadata to the directory; those that do will list published articles on their journal page as well as the above information.
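
For readers who want to query the directory programmatically, DOAJ also offers a public search API. The sketch below is a rough illustration rather than a definitive recipe: the endpoint path (of the form https://doaj.org/api/v1/search/journals/{query}) and the field names are assumptions from memory, so check the documentation at https://doaj.org/api/ before relying on them.

    # Hedged sketch of searching DOAJ's journal API; the endpoint and field
    # names are assumptions; verify against https://doaj.org/api/ first.
    import requests
    from urllib.parse import quote

    def search_doaj_journals(query, page_size=10):
        """Return (title, APC info) pairs for journals matching the query."""
        url = "https://doaj.org/api/v1/search/journals/" + quote(query)
        resp = requests.get(url, params={"pageSize": page_size}, timeout=10)
        resp.raise_for_status()
        journals = []
        for record in resp.json().get("results", []):
            bibjson = record.get("bibjson", {})
            journals.append((bibjson.get("title"), bibjson.get("apc")))
        return journals

    if __name__ == "__main__":
        for title, apc in search_doaj_journals("library science"):
            print(title, "-", apc)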

Although the directory does have its quirks, its status as an index of quality OA journals makes it a must-use resource for anyone looking to publish their research in an OA environment.

Resource: Beall’s List of Predatory Open Access Publishers

Beall’s List of Predatory Open Access Publishers is a web site maintained by Jeffrey Beall, a librarian at the University of Colorado Denver. It contains several different continually updated lists of OA publishers with questionable or harmful practices. The list was started in 2011 as an annual publication and moved to a continual-update model in 2014 due to the increasing volume of publishers. Now, the site publishes an annual index with links to the lists and a basic overview of the number of journals in each.

Currently, Beall’s List contains information on predatory publishers split into several categories.

Beall’s list is an excellent resource for anyone with questions about an OA publisher or journal. The 2016 index post to Beall’s List of Predatory OA Publishers, with links to each list, can be found here.