In the past few weeks I have been talking to both teachers and businesses about evidence. It seems there is great enthusiasm for making sure that the ed tech solutions we use in schools have evidence for their impact on learning, but there is also much uncertainty as to how this could work in practice.
The big question I have been asked several times recently is ‘how do we measure whether it works?’. The answer to this, I think, is to ask a slightly different question, one that comes from my days training new teachers just getting started in the profession.
When teachers first start their teaching practice in schools, the big focus is most often on what they are going to get the children to do. Confronted with a class of 30 young people who need to be occupied, this focus is understandable. What students do is what we see; learning itself cannot be seen, because it is an abstract concept and a process that happens in someone’s head. When you are new, it is easy to gloss over the fact that the main aim of teaching is not to keep students occupied, but to have them learn something.
What I’ve seen happen many times with new teachers is that they concentrate on coming up with a good lesson in terms of its visible attributes, and then try to figure out what the students have learned from it. They will have the students do something that really speaks to their interests and engages them, or that results in work that looks great. Often the students have learned something, but when the teachers come to assess what that learning is, they find it a real challenge.
Year after year in initial teacher education I found student evaluations saying they wanted ‘more input on assessment’, because they came up against this challenge so often. They didn’t need more input on assessment, at least not in the way they expressed it; what they needed was more input on planning and constructing a lesson around learning rather than around activity.
If you start planning a lesson with what you want the students to learn, then design an experience through which they will learn it, assessment is often straightforward. You just have to build in a check at the end (or even regularly part way through) that they learned what you intended.
It sounds so simple when presented in that way, but there is something about designing and delivering learning experiences that so often pushes people away from focused consideration of learning and towards the details of activity.
So, if we are looking to figure out how we measure if ed tech has an impact on learning, the first step is to be really clear and specific from the outset what the intended learning is.
For ed tech designed with a specific learning purpose, this is key. For products designed more as tools, the key is that they are open enough for teachers to bring to them their own intended learning, and their own evaluation of whether the tool’s use has been effective in making that learning happen.
Simply asking ‘do iPads have an impact on learning?’ is (on its own) an unanswerable question. The question is what the intended learning is, and whether a specific use of iPads achieved that learning more effectively or efficiently than another teaching method.
There are some caveats to this: we are currently better equipped, with tools and with structures of thinking, to evidence some types of knowledge and skills than others. This will be the focus of a future post.
This is not the whole process, but the first step for evidence in ed tech is making sure we are really clear and specific about what the intended learning is, focusing on that, and then designing a delightful and engaging activity to achieve it.
Creating an experience and then throwing learning objectives at it until one of them sticks can result in learning, but it results in learning that is not very focused and as a result really difficult to evidence. The best teachers get this process the right way round, as does the best ed tech.
Photo: CC BY henricksent on Flickr