ROER4D Question Harmonisation Process


As the first project of its kind to explore OER activity across multiple continents in the Global South, the ROER4D team feels a special obligation to set a high standard by collecting the most accurate and useful research data possible so that educators, students and policymakers can learn from our findings and use them as starting points for future OER research activity in the region.

Part of this effort includes ensuring that we learn from other OER research projects (including the many that have been carried out in the Global North), especially from the survey and interview questions that they employed to obtain their data. As many of our sub-projects include plans for similar research activities, we wanted to ensure that they leverage the insights of those projects while remaining locally meaningful.

To do this, we have embarked on an ambitious research question harmonisation process with the goal of:
· Harmonising our research questions as much as possible with those of other OER studies and across our own 12 sub-projects.
· Enhancing the research capacity of ROER4D researchers.
· Improving the quality and comparability of the data that ROER4D obtains through its dozen sub-projects.
· Providing a model of best practices for other projects engaged in research for development.

The question harmonisation process has thus far consisted of a series of activities aimed at gradually achieving these goals. In chronological order, they are:

1. Identify relevant surveys
Cheryl Hodgkinson-Williams, the ROER4D Principal Investigator, started by consulting nine major OER surveys that Leigh-Ann Perryman at the OER Research Hub identified as being worthy of exploration (such as those from UNESCO/COL, CERI/OECD, JISC, OPAL, ORIOLE and OER Asia). Tess Cartmill, the ROER4D Project Manager, typed these up into a single Google Spreadsheet. To complement these, I also consulted a number of other smaller surveys dealing with particular elements concerning OER – such as awareness of, attitude towards, etc. – that were available through the OER Knowledge Cloud. We then harvested the questions from these surveys into a long master list.

2. Categorise questions according to themes
We then organized the questions according to categories such as awareness, access, creation, reuse, impact, etc. This gave us an idea of the kind of coverage the surveys provided for each theme. We provided a draft definition of how these concepts are understood in the literature and shared our ROER4D Research Concepts with our researchers. Colleagues from Sub-Project 3 in India added ideas on OER quality as did Jose Dutra from Sub-Project 2 in Brazil.
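To make this categorisation step concrete, here is a minimal sketch of how such thematic grouping might be done with simple keyword matching. The themes, keywords and example questions below are illustrative assumptions, not ROER4D's actual coding scheme (which was done by hand with reference to the literature):

```python
# Toy thematic grouping of survey questions by keyword matching.
# Themes and keywords are illustrative only, not the project's real scheme.
THEME_KEYWORDS = {
    "awareness": ["aware", "heard of"],
    "access":    ["access", "internet", "connectivity"],
    "creation":  ["create", "develop", "author"],
    "reuse":     ["reuse", "adapt", "modify"],
}

def categorise(question):
    """Return the themes whose keywords appear in the question text."""
    text = question.lower()
    matches = [theme for theme, words in THEME_KEYWORDS.items()
               if any(w in text for w in words)]
    return matches or ["uncategorised"]

questions = [
    "Are you aware of any openly licensed teaching materials?",
    "Do you adapt OER to suit your local curriculum?",
    "How reliable is your internet access at work?",
]
for q in questions:
    print(categorise(q), q)
```

In practice a question can touch several themes at once, which is why the sketch returns a list rather than a single label.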

3. Highlight the questions that best fit our context
Then we grouped together questions that were similar to each other and assessed which ones were the most relevant for our needs.

4. Explicate the deeper research purpose and hypothesis of each question
With all of the questions legible in a conceptually organised fashion, we then started explicating their deeper research purposes as well as our assumptions and hypotheses for them. Thus, for example, while a survey question may ask for some basic demographic data, such as which language an educator teaches in, the deeper research question behind it would be: Is there a relationship between educators’ linguistic contexts and their adoption and use of OER? Our assumption is that, yes, language contexts do influence educators’ adoption and use of OER (primarily because so many OER materials are in English that many educators in the Global South may feel unable to use them, or may feel they must translate them into locally relevant languages before using them for teaching).

5. Identify gaps in the question list and develop our own new ones
From here, we could see where the gaps were in our questions and start developing new ones from scratch to suit our needs. This means that, while we have generally sought alignment with other OER surveys, we have also felt free to go beyond them where needed.

6. Amend questions after critical discussions together
Cheryl and I then debated the merits of each question against other similar questions and against our own research requirements. Through this deliberative process, we have been able to gauge the potential of each question and decide which ones we think would have the greatest utility for our project’s surveys and interviews.

7. Share them with project team and obtain feedback
Over the past two months, we have been connecting with the ROER4D researchers and mentors every couple of weeks through Adobe Connect (a virtual conferencing tool) to collectively deliberate the various questions that Cheryl and I have prioritised. In those sessions, we shared the rationale and value of each question from our perspective, then sought feedback from the participants. The conversations during these sessions have typically been quite spirited and robust. Many of the assumptions that we had about our questions have been challenged by the team members who pointed out better ways to ask the questions. They also ensured that the questions remained broad enough to cater to their diverse linguistic, cultural, educational and socio-economic contexts.

8. Amend questions in light of feedback
After the sessions, I would then take the feedback we obtained and make the recommended changes to our list of questions, then re-post them through Google Docs where the researchers could continue suggesting further edits.

9. Provide access to each step of the process for participants
While Cheryl and I have been responsible for a lot of upfront activities (such as identifying the initial pool of questions to draw from), we have tried to share with the broader team our work and processes at each step along the way (typically through various Google Docs iterations). That has allowed for greater transparency for everyone, engendering a high degree of trust in the process.

10. Seek feedback on process so as to improve process
Lastly, so we could continually improve our effectiveness during this rather experimental – and fairly complex – process, we sought feedback on it every couple of weeks from the participants. Their insights and comments allowed us to reflect on what was working and what was not working so that we could adjust and improve along the way.

Of course, this QH process has not been without its challenges. For instance, our researchers and mentors are located across a span of 12 time zones – from GMT-4 in Chile to GMT+8 in the Philippines. Coordinating meeting times can be tricky, but we developed a handy time zone guide that allows us to achieve the broadest possible participation for our interactions. Many of us also face technological difficulties in engaging online with each other live due to bandwidth or connectivity issues. And, because we all hail from different linguistic, cultural and educational environments, we always face the possibility of misunderstanding each other.
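The kind of time zone guide described above can be sketched programmatically: find the UTC hours at which every participant is within a reasonable local "waking window". The offsets (rounded to whole hours – India, for example, is actually UTC+5:30) and the 07:00–21:00 window below are illustrative assumptions, not the project's actual guide:

```python
# Minimal sketch of a meeting-time guide across time zones.
# Offsets are rounded to whole hours and purely illustrative.
OFFSETS = {"Chile": -4, "South Africa": 2, "India": 5, "Philippines": 8}
WAKING = range(7, 21)  # local hours considered acceptable for a call

def acceptable_utc_hours(offset, waking=WAKING):
    """UTC hours that fall inside a participant's local waking window."""
    return {(h - offset) % 24 for h in waking}

def best_meeting_hours(offsets):
    """UTC hours acceptable to every participant (may be empty)."""
    sets = [acceptable_utc_hours(o) for o in offsets.values()]
    return sorted(set.intersection(*sets))

print(best_meeting_hours(OFFSETS))  # → [11, 12]
```

With these illustrative offsets, only a narrow late-morning UTC window suits everyone, which mirrors the coordination difficulty described above.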

Moreover, despite the fact that the process above is written in the past tense (because we have gone through all of these steps for certain question categories), we are still busy with the QH process now, gradually working through the many sets of questions that emerge in each theme.

It’s worth noting that the point of this QH process is not that everyone in ROER4D asks all of the questions that we have collectively developed, but that, where relevant for their research, they do use them. More importantly, the process fosters a rigorous research mindset that ensures their work – indeed, our work! – is as robust, effective, comparable and relevant as it can be.
