Exploring the future of the interface between ICTs and education for UNICEF recently provided a valuable opportunity to reflect on the conflicting evidence about the influence of ICTs on education. Despite all of the research and evidence gathering about the use of ICTs in education, it still remains extremely difficult to know what their real impact is, and how best to deliver on the potential that they offer, especially among the poorest and most marginalised. There are at least seven main reasons for this.
1. The time needed for educational change to show outcomes
Learning and education are cumulative; they take a lifetime. Measuring the impact of education interventions is therefore fundamentally different from measuring, for example, most health-related impacts. It is possible to inoculate populations with a vaccine, and to measure its impact almost immediately in terms of the health outcomes. However, it is impossible to inoculate against ignorance; there is no vaccine that can guarantee successful learning.
It is therefore extremely difficult to measure the long-term significant outcome of a relatively short and novel educational intervention, such as the introduction of tablets into schools for a couple of years, without there being a consistent and long-term method of actually measuring those outcomes. Some things can certainly be measured in the short-term, but these may not actually be the most important and significant long-term learning outcomes. Moreover, it is extremely difficult over a long period of time to assess the precise impacts of any one intervention. Many factors influence educational change over time, and it may be that observed learning outcomes are not necessarily caused by the specific technological intervention being studied. Determining real causality in education is extremely difficult, especially in longitudinal studies.
Linked with this, many ICT for education interventions are initially planned for relatively short periods of 3-5 years. This is the typical duration of research grants and donor-funded projects, but it is far too short a period for real impacts to be grasped fully. The pressure of reporting, and the need to show success quickly in order to secure further funding, also has a significant impact on the types of evidence used and the ways through which it is gained.
2. Diversity of research methods: you can show almost anything that you want to
Different kinds of research lead to different types of conclusion. Research results also depend fundamentally on what the aims of the research are. Two pieces of perfectly good research, each well designed within its own field and published in a peer-reviewed journal, can thus show very different results. Three particular challenges are relevant.
First, short-term quantitative and long-term qualitative research often yield very different results. It is relatively easy to go into a number of schools for a short period, gather quantitative data about inputs and outputs, and find the evidence to write a glowing report about the positive outcomes of an ICT for education intervention. However, most such accounts are based on self-reporting, schools can prepare to show off their best attributes for the day of the visit, and researchers can be beguiled into believing what they hear. In contrast, long-term qualitative immersion in a small group of schools for several months can show much more clearly exactly what is going on in them, and usually leads to very different conclusions with respect to ICT in education. Moreover, there is a systemic bias in much evidence-based policy making, especially by governments and international organisations, whereby large-scale quantitative studies with apparently representative samples are preferred to the insights gained from in-depth hermeneutic and qualitative approaches. This tends to lead to a focus on inputs rather than outcomes.
Second, biases are introduced because of the interests of the people doing the research or monitoring and evaluation. Many ICT for education initiatives have begun as pilot projects, either by companies eager to show the success of their technologies, or by researchers eager to prove that their innovation works. It is perfectly natural that the ways in which they design their research, and the indicators that they choose to assess, will tend to highlight the intended positive outcomes. All too often, though, unintended consequences are ignored or simply not looked for, despite the fact that these frequently provide the most interesting insights. Very little research on the use of ICTs in schools to date, for example, has explored the impact that this might have on online child sexual abuse, or other forms of harassment and bullying.
Third, much depends on the aims of the research. Tightly constrained experimental design to explore, for example, how the use of a particular device influences activity in certain parts of the brain, can indeed show apparent causality. Linking that, though, to wider conclusions about children’s learning and the desirability of incorporating a specific technology into schools is much more difficult. Much of the good quality research to date has tended to focus on relatively closed systems, where it is indeed possible to undertake more rigorous experimental design. Much less research has been undertaken on the more holistic and systemic interventions that are required to ensure the successful adoption of new technologies. In part, this is because of the different approaches that exist in the academic community between the physical sciences and the social sciences. The aims of research in computer science or mathematics are, for example, often very different from those in sociology or the humanities. This reinforces the need for much more emphasis on multi-disciplinary research if clearer conclusions are to be drawn about the overall impact of ICTs in education. Moreover, much of the experimental research, for example using Randomised Controlled Trials, has been undertaken in the richer countries of the world, and all too often conclusions from this are then also applied to poorer contexts where they may well not be appropriate.
3. Transferability and context
There is considerable pressure to identify solutions that can work universally, and it is a natural tendency for people to hear of something that has appeared to work in one context and then try to apply it to another. All too often, though, they do not realise that it may have been something very specific about the original context of the intervention that made it successful. The pressure for universal solutions has in large part been driven by the interests of the private sector in wishing to manufacture products for a global market, and also by donors and international organisations eager to find universal solutions that work and can be applied globally. All too often the reality is that they cannot be applied in this way.
4. The diversity of technologies
Many contrasting ICTs are being used in education and learning in different contexts, and it is therefore not easy to make generalisations about the overall effectiveness of such technologies. The use of an assistive technology mobile app, for example, is very different from using a tablet to access the internet. Determining exactly what the critical intervention is that can benefit, or indeed harm, learning is thus far from easy. Indeed, because of this diversity, it is actually rather meaningless to talk about the overall impact of technology on learning.
5. The focus on inputs
Inputs are much easier to measure than are real learning outcomes. Indeed, performance in examinations or tests, which is the most widespread measure of educational success, is only one measure of the learning achievements of children, and may often not be a particularly good one. Most studies of the application of ICTs in education therefore focus mainly on the inputs, such as numbers of computers or tablets, hours of connectivity, amount of content, and hours of access to the resources, that have been implemented. They show what the funding has been spent on, and they are relatively easy to measure. Using such data, it is possible to write convincing reports on how resources are being used on “improving” schools and other learning environments. This is one reason why governments often prefer quantitative studies that measure and represent such expenditure, since it reflects well on what they have done in their term of office.
However, it is extremely difficult to link this directly and exclusively to the actual learning achievements of the children, not least because of the multiple factors influencing learning, and the great difficulty in actually proving causality. All too often a dangerous assumption is made: that just because something is new, and indeed modern, it will be of benefit to education. There have been far too few studies that seek to explore what might have happened if the large amounts of money spent by governments on new ICTs had instead been spent on some other kind of novel intervention, such as improving the quality of teachers, redesigning school classrooms, or even putting toilets in schools. What evidence does exist suggests that almost any well-intentioned intervention can improve the learning experiences of teachers and pupils, primarily because they feel that attention is being given to them, and they therefore want to respond enthusiastically and positively.
6. Success motives
One advantage that ICTs have in this context is that they are seen by most people as being new, modern, and an essential part of life in the 21st century. Parents and children across the world are therefore increasingly viewing them as an integral and “natural” part of any good education system, regardless of whether they actually are or not. The myth of modernity has been carefully constructed. The motives of those advocating their adoption in education may not, though, be strictly to do with enhancing education. The need to show that ICTs contribute positively to education, and thus the results achieved, may not actually be driven primarily by educational objectives. Politicians who give schoolchildren laptops bearing their party’s logos are often more interested in getting re-elected than in actually making an educational impact; technology companies involved in educational partnerships are at least as likely to be involved because of the opportunity they offer to network with government officials and donors as they are because of any educational outcomes. The key point to emphasise here is that monitoring and evaluation studies in such instances may not actually be primarily concerned with the educational outcomes, but rather with the success anticipated by those with powerful interests, and should therefore be treated with considerable caution.
7. Monitoring and evaluation: a failure of funding, and reinventing the wheel
A final reason why it is so difficult to interpret the evidence about the impact of ICTs on education concerns the general process of monitoring and evaluation of such initiatives. All too often, insufficient funding is given to monitoring and evaluation, regular formative monitoring is not undertaken, and any thinking about evaluation is left until the very end of a project. A general rule of thumb is that the amount spent on monitoring and evaluation should be around 10% of total project costs, but those seeking to use ICTs for education, particularly civil society organisations, often argue that this is far too high a figure, and that they want to spend as much as possible of their limited resources on delivering better education to the most needy. All too often, monitoring and evaluation is left as an afterthought near the end of a project, at the time when reports are needed to convince funding agencies to continue their support. If good baseline data were not gathered at the beginning of a project, particularly about learning attainment levels, then it is not possible to obtain accurate evidence about the real impact of a specific piece of technology.
A second main challenge with monitoring and evaluation is that practitioners and researchers often seem to reinvent the wheel and develop their own approaches to identifying successes and failures of a particular intervention, rather than drawing on tried and tested good practices. As a result, they frequently miss important aspects of the rather different processes of monitoring and of evaluation, and their work may also not be directly comparable to the evidence from other studies.
Implications
One obvious implication of the above is that we need more independent, multi-disciplinary, cross-sectoral and longitudinal research on the use of technology in education. However, all research will represent the interests of those involved in its commissioning and implementation, and needs to be treated with the circumspection that it deserves.
A second important conclusion is to question the validity of much so-called evidence-based policy making in the field of technology and education. If research evidence is based upon a particular set of interests, then it is logical to suggest that any policy based on it will in turn also reflect those interests. Such policies can never be purely “objective” or “right”, just because they claim to be based on evidence. Indeed, a strong argument can be made that policies should be based upon visions of what should be (the normative) and not just what is (the positive).
This is the second of a series of short summaries of aspects of the use of ICTs in children’s education across the world based on my work for UNICEF (the first was on Interesting practices in the use of ICTs for education). I must stress that these contain my own opinions, and do not in any way reflect official UNICEF policy or practice. I very much hope that they will be of use and interest to practitioners in the field. The original report for UNICEF contains a wealth of references upon which the above arguments were based, and will be available should the report be published in full.