
About OSuR

The key points of the Open Source universities Ranking, at a glance.

We are likely to see more university rankings, not less. But the good news is they will become increasingly specialised

Simon Marginson, editorial board member of the Times Higher Education ranking and adviser to the ARWU ranking





What is the OSuR (Open Source universities Ranking)?

It is a classification of universities according to their commitment to the use, diffusion and creation of open source software. It is a specialized ranking that scores institutions on 18 indicators measuring the work they do on open source software across all sectors. It was produced by a committee of industry experts.

Four areas are evaluated for every university: Teaching criteria, Research criteria, Technological criteria and Webometrics. Each of these areas comprises a number of indicators, each with its own weight. Here you can check our methodology.
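
To illustrate, here is a minimal sketch in Python of how such a weighted aggregation works. The area names follow the four criteria above, but the weights and scores are hypothetical examples, not the actual OSuR values; the real weights are given in the methodology.

    # Hypothetical weights for the four evaluated areas; the actual
    # OSuR weights are defined in the published methodology.
    WEIGHTS = {"teaching": 0.30, "research": 0.30,
               "technological": 0.25, "webometrics": 0.15}

    def overall_score(criterion_scores):
        """Combine per-area scores (e.g. on a 0-100 scale) into one value."""
        return sum(WEIGHTS[area] * score
                   for area, score in criterion_scores.items())

    # A university scoring 80, 60, 70 and 50 in the four areas gets
    # 0.30*80 + 0.30*60 + 0.25*70 + 0.15*50 = 67.0.
    print(overall_score({"teaching": 80, "research": 60,
                         "technological": 70, "webometrics": 50}))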

Why is the OSuR necessary?

Because universities promote open source software from many areas, and a standardized methodology is needed to measure and compare them and to identify their strengths and weaknesses. The OSuR offers the opportunity to learn in detail the impact of the actions taken, in which areas each university excels and how well it spreads open source software relative to other universities.

The OSuR arises in response to the need for a standard methodology that measures all universities homogeneously, under the same criteria, so that the information collected is useful and accurate.

Universities promote open source software in many different ways. They run many initiatives, and it is not easy to know them all because they come from different areas of the institution, which makes them even harder to assess. As a result, the real effort each university makes to popularize free software remains unknown.

To evaluate the work done by each university, an easy way is needed to understand its strengths and weaknesses, to learn about other universities' initiatives and to identify which aspects it should improve.

Ranking objectives

The main objective is to analyze the work done by universities in creating and diffusing Open Source software. We do this through a ranking based upon empirical criteria that provides information to measure and compare how strongly Open Source software is supported at each university:

1

Recognize the efforts of universities

Some universities have a strong commitment to free software; they deserve to be known, publicized and recognized for their work for the common good.

2

Identify the strengths and areas for improvement of each university

The criteria measure the spread of free software at all levels of the university. This makes it possible to know which aspects are strongest and which have room for improvement.

3

Encourage collaboration between universities

By presenting the initiatives and areas in which each university works on open source software, universities can collaborate with each other to improve wherever needed.

We intend to provide extra motivation for universities to devote more effort to promoting free software. Universities are home to people with great knowledge of and commitment to this philosophy, and they are well placed to advance open source software.

This ranking helps to publicize each university's initiatives in this area, and establishes a methodology to assess and compare them.

What's the difference between OSuR and other university rankings?

The scope. While most rankings classify universities globally by the impact of their scientific production, this ranking focuses specifically on the spread of open source software. Because the scope is limited, the criteria used to rank universities are interrelated.

Many indicators analyzed

One of the most important parts of defining the indicators is deciding what is valued when ranking universities.

See the indicators chosen

What are its main features?

There are many rankings of universities, but this one differs from the others in several aspects:

  • It focuses specifically on open source software. By focusing on a single aspect, all the criteria measured are interrelated, and therefore it makes sense to group them into a single index.
  • It presents a methodology that measures a number of qualitative and quantitative indicators, synthesizing them into a single value that facilitates comparison between universities.
  • The indicators are based mainly on objective information; in no case were indicators used that rely on the subjective assessments of researchers. This means that some aspects of the spread of open source software are not taken into account, but the ranking gains in reliability and accuracy.
  • The sources of information are public or are provided by the university itself.
  • The measurement period is about one year, to capture the latest information. Since new technologies advance and change frequently, we chose a relatively short period, which also helps track progress over time.
  • Indicators for which we have no information, or for which the information is invalid, are simply left empty. To compute the score for each criterion, we consider only the indicators with available information, provided that at least 75% of the information is available; otherwise, the criterion is left empty. A minimal sketch of this rule follows the list.
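
As an illustration, here is a minimal sketch in Python of that availability rule. The function, the data layout and the renormalization over the available indicators are our own assumptions for the example; only the 75% threshold comes from the description above.

    def criterion_score(indicators):
        """indicators maps each indicator name to a (weight, value) pair,
        with value set to None when no valid information is available."""
        available = {name: (w, v) for name, (w, v) in indicators.items()
                     if v is not None}
        # Leave the criterion empty unless at least 75% of its
        # indicators have valid information (the threshold above).
        if len(available) < 0.75 * len(indicators):
            return None
        # Assumption: weights are renormalized over the available indicators.
        total_weight = sum(w for w, _ in available.values())
        return sum(w * v for w, v in available.values()) / total_weight

    # Example: 3 of 4 indicators are available (75%), so a score is computed.
    print(criterion_score({"a": (0.4, 80), "b": (0.3, 60),
                           "c": (0.2, None), "d": (0.1, 90)}))  # 73.75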

Who is it for?

A ranking is a user-oriented product: it is developed to be used by anyone. The aspects and indicators to study were defined with the ranking's largest user groups in mind:

  • Heads of IT departments and open source software offices, so they have a metric that helps them know how well they are doing and whether their work and effort can be considered sufficient.
  • University leaders, as a marketing and decision-support tool, and to see whether the quality of the institution corresponds to its ranking on free software topics.
  • Teachers and students, to increase their awareness of the efforts made by their university, so they can also collaborate.

How is the information obtained?

1

Analysis of public documents: the website of the institution


We work as accurately as possible to obtain correct data. Even so, we cannot guarantee 100% accuracy for all information from all universities, because it has not always been possible to obtain information directly from the institution. So that the accuracy of the data used can be verified, we provide the spreadsheets with the results for download.

What OSuR is not

We do not attempt in any way to rank universities by the quality of their education or the level of resources provided. This is not an academic ranking and should not be used as a criterion for choosing a university. It is a ranking focused on one particular area of the university, whose purpose is to reveal the efforts made by each institution in this area and help improve them.

Choosing a university involves many more indicators not considered in this study (such as the number of teachers per student, research projects ...). Several rankings worldwide assess these other aspects, and although there is no single methodology for ranking universities, they are the right place to look when an academic ranking is needed. You can find them in our bibliography.

The current state of university rankings

No other public ranking measures how universities promote open source software. University rankings are mainly produced to evaluate institutions using educational criteria and to help students choose a college.

Each ranking, according to its purpose, measures different aspects of universities. Some focus on cybermetric (web) aspects, others on scientific or academic ones ... More about other university rankings.

Highlights of OSuR

Use of standardized procedures at the European level

The indicators that make up our methodology are designed to measure the full range of open source software activities carried out at a university. Like any other ranking, it is a simple way to understand the work done by universities, yet it is imperfect, since other methods could use other factors. This is a problem common to all rankings. To maximize the accuracy of our methodology, it follows the principles set by the IREG (International Ranking Expert Group), a group of experts convened by UNESCO who developed the Berlin Principles on Ranking of Higher Education Institutions [PDF], which serve to unify the methodologies followed in preparing rankings so that they can be compared. These principles also explain why we have chosen these factors and the benefits of the methodology we use.

Classification independence

In addition, the weights given to each criterion were chosen by a vote of a group of professors and industry professionals. The study was fully funded by PortalProgramas.com, a software download site with no ties to universities or any other organization.

Verifiable and reproducible data

In scientific research, it is a basic requirement that investigations be reproducible. To that end, OSuR provides information on the data used, the sources and the methodology.

The universities themselves have provided information on several criteria, which is also considered a verified source.

The data sources are public web services (such as search engines), manual data collection and data provided by the universities themselves. Both the classification criteria and the data from which the report was written have been released for a high level of transparency. View the work methodology.

The only criteria that cannot be reproduced accurately, due to their volatile nature, are the web metrics. Unlike citations in scientific publications, website links vary over time: they may decrease when links, pages or entire websites are removed. It is therefore impossible to make an accurate back-calculation of these criteria; only a rough estimate is possible.

Empirical analysis of the information

The information was extracted through a manual analysis of the websites of all the universities in the study, and from contacts with the universities. External web services were used only for webometric measurements, as these aspects are very complex to measure.

Translation of bibliometric and webometric criteria

Nearly all university rankings consider a university's scientific production as one of the classification criteria. For this ranking, the bibliometric and webometric indices used in other rankings have been adapted to the scope of open source software. For example, the 'number of citations', an important indicator of the impact of scientific research, translates into 'external links', which is likewise an important indicator for open source software. In addition, we have considered other webometric indicators from other rankings, such as 'web impact'.

Why is webometric information (number of university pages, number of links ...) not entirely reliable?

Because it is obtained through third-party tools, outside the university, that count the number of pages, so the results are approximate:

  • It is possible, and in fact it happens, that some pages are not indexed; this is known as the 'hidden web'.
  • There may be duplicate pages: the website shows the same information at different URLs, which are counted as different pages.
  • Outdated information: search engines crawl a website only from time to time (depending on its size, how often its content is updated and several other criteria). The reported number of pages may therefore be out of date (the review period is indeterminate).


