The Fetishization of Educational Data

As the term “Big Data” muscles into education, we’re starting to see what that collection of data means for learning outcomes, and it’s not pretty. Within the last few years “Big Data” has easily become one of the biggest trends in education. Everyone wants more data and we’ve got huge databases full of it; some people support it while others fear and oppose it. But when all is said and done, what is actually happening with that data?

Across the wide range of products I’ve seen, the term “educational data” can now refer to practically everything about a learner. Some of it is obvious and a given, but other pieces leave me scratching my head. For example, during student enrolment it is important for staff and teachers to know who a learner’s legal guardian is, or what their living arrangements are: they might live with a distant relative, or even be potentially homeless and living in a hostel. All of that is critically important to their teachers. But some systems suggest this data can be leveraged by software to improve learning outcomes; we’ll come back to this shortly.

The promise of big data was that you could use it to inform your decisions. Because that promise was so general, the educational implementation has for the most part become “collect everything, we’ll figure out what we can do with it later”. While that’s a bold ambition, the result has been a lot of teachers manually entering and updating student data in many different systems with no clear sense of “why?”.

The range of data collected includes everything from the usual attendance, behaviour and individual class marks to standardised test results, biographical and geographical data. One absurd product even asked for estimated parental / guardian income… you know, to “boost outcomes”…

Some of this data has to be collected anyway: attendance is a legal requirement for the most part, and standardised test results are there whether you like it or not. But this isn’t just about a single instance of the data. The lack of interoperability between systems means that even after deciding to hand the data over, it’s being handled twice, three times or even more, across many disjointed systems and solutions.

These scenarios don’t even begin to cover the data that becomes “orphaned” when teachers migrate systems, try new tools, or school procurement overrides their platform choices, which leaves a lot to be said about school procurement policies.

It’s a fair argument to make that the more digital products you use, and the more students you have, the greater the data footprint, whether behind the scenes or at the forefront of the learning experience. At a recent event in London hosted by the Assignment Report, there were some interesting discussions around teacher workflows and the additional workload educational data adds for a teacher. The most troubling part came when an EdTech executive suggested that this data (strengths, weaknesses, attendance and preliminary summative test results) could be useful for their learning products, or that it was critical to the learning journey to begin with. If, in the age of machine learning, cognitive science and psychometrics, your digital product requires you to input strengths and weaknesses, attendance or any other arbitrary data points, you’re doing it wrong, and here’s why.

  • Manual data entry is error-prone and takes dedicated chunks of time to perform.
  • It assumes that there must be some other diagnostic before learning can happen, and that the result of that diagnostic must be fed into the system to “warm it up”.
  • People lie. Not always intentionally, but we’re all terrible at evaluating our own skills, so why would we trust ourselves or anyone else to input our personal strengths and weaknesses?
  • It takes the focus away from formative learning and back to summative.

On the surface those points don’t sound too bad, but they are bad once you think in the context of machine learning and observable metrics.

Observable metrics don’t lie, and they don’t require personal or teacher involvement to collect or curate. Simply by having learners engage with your platform you can observe students logging in to systems, completing actions, consuming content and answering machine-gradable questions, and you can even reflect that data back to evaluate how your content is performing. Here’s a sample of the data you can pull from observable interactions with learning systems, with a rough sketch of how it might be captured after the list.

  • Attendance — Capture log in / log out and session details.
  • Location — Utilise an IP lookup to see rough locations of the device when it connected. Is the learning happening mostly at school or elsewhere?
  • Engagement — If they’re meant to be watching an instructional video that’s 4:30 long and they’ve clicked through after 30 seconds, that’s just one data point. Imagine also taking into account reading levels, mouse movements, keyboard strokes and window focus (which window is actually open on their machine?).
  • Correct vs incorrect — Machine-gradable questions easily allow you to establish quick assessment baselines in a formative and summative flow.
  • Strengths through media — Providing varied instruction and assessment can easily allow you to aggregate the results and see individual learning style preferences (considered alongside Strength of Content, below).
  • Strengths through learning objectives — By correctly mapping your content to the very learning objectives it aims to achieve, you not only ensure your lessons and content are appropriate, but you also create a network of rankings. This then allows you to see raw data against learning outcomes, not arbitrary rubrics.
  • Strength of Content — By reversing the approach, you start to easily see which content is performing and which isn’t, purely by the impact it’s had on the learning objectives.
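
To make that concrete, here’s a rough sketch in C# of the kind of event record and simple aggregations an observing platform might keep. The class and property names are purely illustrative, not any particular product’s schema.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: a minimal record of one observable learner interaction.
public class LearningEvent
{
    public string LearnerId { get; set; }
    public string ObjectiveId { get; set; }      // the learning objective the content is mapped to
    public DateTime LoggedInAt { get; set; }
    public DateTime LoggedOutAt { get; set; }
    public TimeSpan MediaWatched { get; set; }   // how much of the video was actually played
    public TimeSpan MediaLength { get; set; }
    public bool? AnsweredCorrectly { get; set; } // null when no machine-gradable question was involved
}

public static class ObservableMetrics
{
    // Attendance: session length captured from log in / log out, no manual entry required.
    public static TimeSpan SessionLength(LearningEvent e) => e.LoggedOutAt - e.LoggedInAt;

    // Engagement: the proportion of the instructional video actually consumed.
    public static double MediaCompletion(LearningEvent e) =>
        e.MediaLength.TotalSeconds > 0 ? e.MediaWatched.TotalSeconds / e.MediaLength.TotalSeconds : 0;

    // Strengths through learning objectives: average correctness per objective for one learner.
    public static Dictionary<string, double> CorrectRateByObjective(IEnumerable<LearningEvent> events) =>
        events.Where(e => e.AnsweredCorrectly.HasValue)
              .GroupBy(e => e.ObjectiveId)
              .ToDictionary(g => g.Key, g => g.Average(e => e.AnsweredCorrectly.Value ? 1.0 : 0.0));
}

None of this requires a teacher to type anything in; it falls out of learners simply using the platform.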

But let’s take this up a notch. That’s just the data you can observe through student interactions. What happens if we apply modern cognitive services and machine learning to it? Here are just a few possibilities, with a toy sketch after the list:

  • Predict attendance and engagement issues before they arise
  • Measure sentiment of student work to evaluate positive / negative writing patterns
  • Predict answer accuracy
  • Measure group and individual proficiency
  • Predict potential proficiency
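
As a toy illustration only, a hand-rolled logistic score over the observed features above could flag learners at risk of disengaging before it happens. The weights here are made up for the example; a real system would learn them from historical data.

using System;

public static class EngagementRisk
{
    // Illustrative weights only; a real system would learn these from historical data.
    const double WeightLogins = -0.8;          // fewer logins per week -> higher risk
    const double WeightMediaCompletion = -1.5; // skipping videos -> higher risk
    const double WeightCorrectRate = -1.2;     // struggling with questions -> higher risk
    const double Bias = 2.0;

    // Returns a 0..1 "risk of disengagement" score via a logistic function.
    public static double Score(double loginsPerWeek, double avgMediaCompletion, double correctRate)
    {
        double z = Bias
                 + WeightLogins * loginsPerWeek
                 + WeightMediaCompletion * avgMediaCompletion
                 + WeightCorrectRate * correctRate;
        return 1.0 / (1.0 + Math.Exp(-z));
    }
}

// e.g. EngagementRisk.Score(loginsPerWeek: 1, avgMediaCompletion: 0.2, correctRate: 0.4) ≈ 0.6, worth a follow-up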

Now compare what we can extract and infer with what we’re meant to be manually entering. Can anyone tell me why we’re entering this data? Or managing it ourselves? The next shift in educational technology and data has to be a movement towards Evidence-Based Data, where observations of patterns and behaviours show the real results, rather than trying to leverage data that’s been handled.

Some of these solutions aren’t cheap to build and implement. But when you think about the cost to the teacher, parent, school and learner of having to maintain this data:

  • The lost opportunity to teach a student.
  • The hours lost to data admin.
  • The lost opportunity to improve as an educator, because the data is wrong.
  • And the cost of a burnt-out teacher.

The cost vs benefit becomes a lot clearer.

The aim of data should always be to help improve learner and teacher abilities while also informing the school and guardians. If the data you’re handling, collecting or managing isn’t helping achieve those goals, why is it being collected at all?

Educational technology continues to evolve, but supporting content still lets us down

Over the past few years, I’ve been fortunate enough to meet some of the world’s biggest players in the education product space. Working with them on their products and witnessing teacher / learner adoption, you get to see the common patterns and problems they all face in different ways. Where do they all struggle? What is their biggest problem? Typically it’s content.

For all of my time spent in the classroom as a teacher and a learner, EdTech content usually only came in a few flavours: a direct e-book copy of the printed book, with some kind of “quiz” wrapped in a Flash component, buried deep inside an LMS. For a lot of the teachers I graduated with, that was their only real exposure to EdTech and education resources.

Fortunately, EdTech has since moved beyond that, but raw content is still seen as a blocker. Speaking to authors around the world, I hear a similar view of content:

  • It’s time-consuming to design and create.
  • It becomes harder to update over time.
  • It lacks standardised, flexible change management and editing workflows.
  • Tooling for authors sucks, both print editors and the majority of interactive web editors (I’ve actually worked with an author whose hand-drawn, pencil-and-paper manuscripts were physically posted into the office).
  • Digital content very rarely supports accessibility standards for the web.
  • Digital content very rarely supports mobile design.
  • It’s expensive overall to create and manage.

The proof that educational content has lagged behind technology is everywhere. In the last few weeks alone, I met with a company in the Middle East who were “just figuring out eBooks” and a publisher in Europe who were upset to learn that Flash is dead and that they’re going to have to (please don’t tell them otherwise…) re-build every asset they own for their next product.

Digital educational content has become ubiquitous. But that content is not always appropriate, in the right format, up to date, compliant with accessibility standards, or even efficacious.

When you see some of the emerging tech for education: IoT products such as Kano, patterns like adaptive learning, and augmented reality devices like the HoloLens, you can be instantly blown away by the direct pedagogical application both in and out of the classroom. But let’s take a step back and look at it from a content perspective.

What do I mean by Content?

In this context, “content” for the learner is the material provided to enable their learning. And for the teacher, it’s the material / tools provided to enable their learners to engage with the technology.

Content for IoT devices

IoT devices like Kano, the Raspberry Pi (by itself), Arduino, et al. started life as consumer-facing products before finding strong roots in education communities. But the gap between those phases is by no means small. Raspberry Pi was founded in 2009, and only started entering classrooms en masse around 2014. Kano, founded in 2013, hired their head of education in late 2015. These are products we now clearly see as predominantly educationally focused, yet they operated for significant periods with minimal to no educational support.

Why? It’s easy to blame teachers for not wanting to explore new tech, and it’s easy to blame cost for keeping it out of the classroom, but a lot of the teachers I’ve spoken to talk directly about resources and lesson content. No lesson plans, no curriculum, no mapping to outcomes (i.e. how do I know this will help my learners achieve an outcome?). So how can I use it in my classroom? How can I recommend it to my learners’ parents?

These devices were simply thrust out into the world with no real strategy for engaging with the education market. There was lots of hype and individual adoption (teachers, parents and learners purchased or were gifted the device), but no bigger classroom play to begin with. Since then, companies like Kano have gone out of their way to make the education market a priority, creating great content, but they are still all potentially behind where they could be with earlier educational content support.

Content for Adaptive Learning

Adaptive learning is the principle of providing the right content at the right time to increase learner outcomes. Great, you might think, I’ve read Goosebumps, it’s a choose-your-own-adventure. But the reality is that to provide an adaptive learning environment, content must be:

  • Small enough that the algorithm knows what each piece is doing.
  • Discrete enough to make sure it’s only doing one thing.
  • Free of any assumed order (because it will be the algorithm choosing, not you).
  • Plentiful enough to cater to different learner pathways.

Which part of a textbook or a Word document in Moodle do you think that applies to? Hint: none.
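
To show what those four properties look like in practice, here’s a minimal sketch of an adaptive-ready content item and a naive selection step. It’s a simplification for illustration, not how any particular adaptive engine actually works.

using System;
using System.Collections.Generic;
using System.Linq;

// Small and discrete: one item teaches or assesses exactly one learning objective.
public class ContentItem
{
    public string Id { get; set; }
    public string ObjectiveId { get; set; }
    public double Difficulty { get; set; } // e.g. -3 (easy) to +3 (hard), on the same scale as proficiency
}

public static class NaiveAdaptiveSelector
{
    // No assumed order: the engine, not the author, decides what comes next. Here it simply
    // picks the unseen item whose difficulty is closest to the learner's current proficiency
    // estimate, among objectives they haven't mastered yet.
    public static ContentItem Next(
        IEnumerable<ContentItem> pool,                 // "enough of it" to cover different pathways
        IDictionary<string, double> proficiencyByObjective,
        ISet<string> masteredObjectives,
        ISet<string> seenItemIds)
    {
        return pool
            .Where(i => !seenItemIds.Contains(i.Id) && !masteredObjectives.Contains(i.ObjectiveId))
            .OrderBy(i => Math.Abs(i.Difficulty - Estimate(proficiencyByObjective, i.ObjectiveId)))
            .FirstOrDefault();
    }

    static double Estimate(IDictionary<string, double> map, string key) =>
        map.TryGetValue(key, out var value) ? value : 0.0;
}

A chapter-long PDF fails every one of those checks; a pool of small, objective-tagged items like this passes them.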

Similar to the Raspberry Pi, adaptive learning has also been around for a while, but it’s only within the last few years that we’ve seen the response from the market and the true benefit to learners. Companies like Alinea in Denmark are now ahead of the market with products like CampMat. CampMat is successful because of the work they put into their product, of course, but also because of the transformative work they put into the way they think about, design and publish content.

The finished product would not have been achievable without applying the properties of adaptive content to their editing workflow. Again, it was content that presented the original barrier to shipping an adaptive learning product; the adaptive learning technology itself has been around for many years.

Content for Virtual and Augmented Reality

The Microsoft HoloLens looks set to become one of the most interesting advancements in technology in recent times. Smaller-scale devices like the Oculus, Vive, Gear et al. are now becoming mainstream, and as expected teachers are exploring avenues for education.

Initially the easiest applications are modelling and drawing within virtual environments, and virtual field trips, but that is just scratching the surface of the technology. It’s so new that the technology itself only now really “exists as a thing”; no one has truly figured out how to extract pedagogical value from it yet.

If you do a simple search for “Augmented Reality in the Classroom” you can find many think pieces about how you might be able to leverage it one day, but there’s a fundamental lack of available content to support it now. It’s only been within the last few weeks that Microsoft has released an SDK for developers to create 3D content.

The true application of VR & AR in the classroom will probably still be years away because there’s no real educational content to support it.

N.B. I’m calling it now: Minecraft will become (either organically or with mods) more of a platform to support other VR and AR products. Why? Because it will be the first AR-compliant mainstream edu product with content to market.

So, more content?

Even now, a lot of education content is still a long way behind current web technology. Next time you look for an OER lesson or take a test online, take a minute to have a closer look.

  • Does that video you’re watching have captions to support the hearing impaired?
  • Is that image high enough quality, and does it supply alternate text for the vision impaired or high-resolution devices?
  • Is it a Flash component?
  • Do you know what outcome this is addressing?

We can start addressing the content gap in EdTech by working to create more standards around educational content specifications and workflows, but also by taking the time to explore the educational impact of the “new”.

If you think something you’re building, or working on, or even know about, might have implications for education, why not start exploring that link now? Put it in front of a teacher and ask what curriculum standards this might help with.

Technology will always continue to evolve, and education will, for a little while longer, still have to play catch-up. But those who consider the educational implications and support options will always be ahead of the market. Teachers are hungry for technology, and if you don’t provide them with content to support their learning journey, adoption will always be slower.

Adaptive Parenting

As adaptive learning products reach more and more learners, we are starting to see exactly how adaptive learning is changing the way we teach and learn, not just in the classroom but in the home as well.

When I was teaching, the idea of issuing homework was mainly an afterthought, “if you haven’t finished by the end of the lesson, complete it at home”, but across the globe homework can take many forms: “work on it over the weekend and let me know what you learn”, “read page 20” or “complete chapter 2 questions”. These examples are fairly typical, but they’re also very static exercises that have a very defined scope. What impact does homework have on the learning experience, when the goal line changes and the scope adapts to the individual learner’s needs?

I was recently lucky enough to visit a classroom where they were using an adaptive product both in the classroom and at home. Talking to some of the learners, an issue became apparent: some learners using the product at home had come to rely on their parents for help when they became stuck. This presented itself as students excelling at home but struggling in the classroom for no visible reason.

Traditionally, a learner asking a parent or guardian, or even a friend, for help is not really an issue at all! The fact the learner is motivated enough to ask for help is clearly better than ignoring the task and failing to complete it. But when these responses are collected, analysed and used to inform data models, those models can fall out of sync with the learner’s actual knowledge.

How is this a bad thing for adaptive learning products? Quite simply, they end up adapting to the wrong person. Wherever the work issued, or the information collected, is influenced by a third party, the baseline understanding of the learner is compromised. In our example, that meant the content being provided to the learner was better suited to a learner with a parent by their side.

None of this is to suggest that you shouldn’t help your learner if they become stuck! But when helping your learners in an adaptive environment, it is important to consider the impacts this has on the learning environment in the future.

Here are some helpful tips for supporting your learner, not just when using adaptive products, but also when completing homework in general:

1) Don’t answer for your learner.
Your learner should be the one to give the final answer, regardless of whether it’s correct or incorrect. Try not to lead them to one answer or another; let them talk through the potential answers and why they think each would be correct.

2) Allow your learner to make mistakes.
It’s important to let the adaptive product know what the learner thinks the answer is, even if that’s incorrect. Let them submit what they think the answers are and encourage them to keep trying!

3) Support by explaining.
The best support you can give the learner (knowing that even an incorrect answer is helpful) is to make sure they understand the question they’re being asked. If you explain the question in a way that’s more helpful for your learner, without changing the core idea, then feel free to do so within reason.

7 Things I learned at BETT 2016

For the last several years I’ve tried and failed to attend BETT; between being on the opposite end of the planet and never being in London when it was on, I missed them all. This year everything seemed to align and I was lucky enough to make it. I’m glad I did, as BETT 2016 did not disappoint. There’s something genuinely inspiring about having not just hordes of people trying to sell you stuff, but also the educator community that comes out in droves to meet, talk, share and support each other.

While I had constant meetings and booth hours to fill, I still took time to soak it all in, in addition to being lucky enough to attend the TeachMeet. Here are some things I picked up over the last few days.

Every man, woman, child and animal has a platform to sell.
At nearly every second booth I visited, someone was spruiking some sort of platform: homework, attendance, learning. It seems that having a “Platform” is in this year.

No one really knows what a Platform is.
While listening to the 18th pitch for another generic SIMS-compatible learning platform, it became obvious that no one actually knows what a platform is. “Platform” has become a blanket term for a product or service that can do more than one thing.

I think, from a technical and product understanding of what a “platform” really is, we would agree it’s closer to this: https://en.wikipedia.org/wiki/Economic_catalyst. But suffice it to say, saving homework for a student isn’t going to make your web app a platform, any more than strapping feathers to my arms makes me a bird.

Productivity and Organisation tools are out.
I once thought there was a gap in the market for tools that allowed teachers to work more efficiently, and abstract away some of the “administrative” tasks of running a classroom to facilitate real learning. I think that space is packed for now. That’s not to say there isn’t a market for apps that do it better, but demonstrating that is going to be hard.

There are plenty of companies using BETT as a testing ground for products that may or may not even exist yet.
Take a minute to think back to any demos you were given that were obviously heavily scripted (I counted 3), then think back to how they looked. Did any of them look like this?

(Image: a stock Bootstrap v2 template.)

Quizzing these people, asking to click around “off script”, or asking which schools are currently using their “revolutionary product” was usually all it took for them to move on. I can’t say with 100% certainty, but all of that is classic fishing to see if there’s any actual point in building the real version.

Testing and validation is important for any product, but I was completely surprised to see it at BETT.

The Raspberry Pi / Microbit isn’t as popular as I thought it would be.
I thought the Raspberry Pi was going to be the next Minecraft, or the next tablet, but looking at the lesson plans, and the queues to see Pi-based booths, I’m a little surprised. Pedagogically the Pi is a solid tool, but looking at it from a wider context I can see how there would be a limit to its potential classroom use.

Virtual / Augmented Reality isn’t there yet.
I had a great play with some of the HP / Intel VR equipment and was completely and utterly blown away, but that was it. Google Cardboard is progressing, but those were the only ones present in any real capacity. There were several VR / AR apps demonstrated, but with the technology not even mainstream yet, their adoption will be slow and expensive.

No standout winner
I was on the lookout for the “Next Thing” and just didn’t find it. I’ve been to EdTech conferences previously where the next thing felt glaringly obvious, but I left BETT asking myself, what is next?

Perhaps this was just a year of refinement, where we finally get the time to refine how we implement programs like Minecraft in the classroom, or how we can better use Tablets / mobile devices. I’m not really sure what’s next but I don’t think we’ve found it yet.

Adaptive learning is the tech solution to personalised learning plans we desperately need

I remember my first class at university: a stiflingly hot Tuesday morning, in a packed lecture theatre of hundreds of soon-to-be teachers. It was in an aptly named course called “Introduction to Education” and, funnily enough, the lecturer talked a lot about the fundamental components of teaching that we’d all have to cover over the next four years of our double degrees. One of the most obvious, yet least explored, concepts that came out of that lecture was the idea that each student is different and may require a different teaching approach. This is a logical statement, but it never really sat right with me. Sure, we’re told about it and must demonstrate we can do it, but within the context of a class, a cohort and a school, an individual student can quickly become “students”. How do we ensure our students don’t get fed a “one size fits all” approach and receive the personalised learning they deserve?

One of the core teaching strategies deployed in most education degrees is some variation of the personalised learning plan, or PLP. The personalised learning plan is meant to be a living document, written by the teacher and student (sometimes including their family), that reflects on the strengths and weaknesses of a given student (New Jersey Department of Education, 2014). Through this reflection, they plan and agree on the tailored learning experience needed to support their weaknesses and expand their strengths. The result of this can be anything from adjusting the target outcomes for a lesson or assessment, to more one-on-one focused teaching and learning. These plans require constant evolution as the student learns, and require both the teacher and student to be committed to managing the plan and working towards their goals (New Jersey Department of Education, 2014).

Even if there is no documented personalised learning plan for a given student, complex adaptive support from an educator can be witnessed in the “Zone of Proximal Development” concept explained by the educational psychologist Lev Vygotsky. Vygotsky (1978) proposed the idea of the difference between what a learner can do without help and what he or she can do with help. This has been explored and expanded into an elaborate support matrix that evaluates and supports this zone of development through varied instruction, encouragement, demonstration and general scaffolding techniques. This directly impacts the educator, who has to automatically and manually identify, gauge and adapt to the needs of the individual student during everyday instruction, whilst recalling the last piece of instruction or assessment and how it was adapted.

What does this mean for a teacher and their learners if they decide to take the approach of creating a PLP document for each student? Well, let’s look at the numbers. The following is based on my personal experience when I was teaching in Australia; compared to some other countries, and even other teachers in my schools, I actually had a pretty light load. In one term I was teaching 4 unique subjects across 9 different classes, each class held twice a week. That’s 18 individual lessons per week. With an average class size of 23, that’s 414 PLP interactions a week (18 lessons × 23 students), each one a document to write, update and review. Even if each interaction with a student and their PLP document was only 1 minute long (to find, open, review, edit, discuss, save and close), it would still take me just under 7 hours a week. And that hasn’t yet taken into account sourcing, validating the efficacy of, and then issuing individual tasks to the students before collecting, evaluating and returning them.

Now, theoretically, at this point all we’re doing is following what we’re told to do as teachers. But in the same way blended and flipped learning have changed education with technology, technology can help change personalised learning. Using adaptive learning technology to provide personalised learning isn’t a substitute for teacher involvement. Adaptive learning technology is, by its very nature, an academic “add-on” that replaces the manual elements of the PLP process teachers would otherwise struggle to achieve, and in the process promotes a quality, personal learning experience for their students.

Adaptive learning technology, directly and indirectly, through a mix of complex modelling, Bayesian networks and item response theory, provides deep insights into your learners while also providing them the right content and assessment at the right time. Adaptive learning not only removes the need to manually micromanage the personal learning of your students, but also harnesses our already existing, vast computing power to close the teacher, student, parent feedback loop with rich, contextual learning analytics.
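
For a feel of what “item response theory” means in practice, here’s the simplest possible version: a one-parameter Rasch model with an Elo-style ability update. Real adaptive products use far richer models than this; the sketch is only meant to show the flavour of inferring proficiency from observed answers rather than from manually entered strengths and weaknesses.

using System;

public static class SimpleItemResponse
{
    // Rasch (one-parameter) model: the probability that a learner with ability theta
    // answers an item of the given difficulty correctly.
    public static double ProbabilityCorrect(double theta, double difficulty) =>
        1.0 / (1.0 + Math.Exp(-(theta - difficulty)));

    // Elo-style update after observing one response: nudge the ability estimate towards
    // the observed outcome, in proportion to how surprising that outcome was.
    public static double UpdateAbility(double theta, double difficulty, bool answeredCorrectly, double k = 0.2)
    {
        double expected = ProbabilityCorrect(theta, difficulty);
        double observed = answeredCorrectly ? 1.0 : 0.0;
        return theta + k * (observed - expected);
    }
}

Feed it a stream of machine-graded answers and the ability estimate settles on its own; nobody has to type a “strength” into a form.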

References
Vygotsky, L. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

New Jersey Department of Education (2014). A guide for implementing personalized student learning plan (PSLP) programs. Retrieved September 19, 2015, from http://www.state.nj.us/education/cte/pslp/PSLPGuide.pdf

Microsoft Partners in Learning & Cortana, an unofficial experiment in the future of Education Technology

I was recently inspired by a blog post from Ray Fleming about the new academic potential of Cortana, the intelligent voice-activated assistant on Windows Phone. Ray lists a few things you can do with Cortana, like creating shopping lists and sending text messages, but then talks about real-world academic uses for it.

  • “When/what/where is my next lesson?”
  • “When is my next science lesson?”
  • “When is my next essay due?”

With my usual teacher-first thinking, I went straight to the potential for making a teacher’s life just that little bit easier. I thought about a lesson I had to teach, and how quickly I had to pull together a lesson plan on a subject I knew nothing about. Wouldn’t it be great if I could just pull down a 21CLD-relevant lesson, ready to go, straight onto my phone with the help of Cortana?

“Find a lesson for X.”

I started hacking away at a prototype. The Cortana documentation isn’t too difficult, but I wasn’t really sure where I could find these lessons… Having worked with the Partners in Learning team to build the 21CLD app, I thought it would be the perfect way to shine an additional light on some of the amazing user-submitted resources found on the Partners in Learning activity search.

This is the end result. It’s only a prototype and at this stage won’t be a publicly available app, because the PiL site doesn’t have a public API. If you’re interested in more details you can always find me on Twitter @LucasMoffitt or see my other apps at Teacher Collection, Windows apps for Educators.
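
Since the prototype isn’t public, here’s a rough sketch of how a Cortana voice-command activation is generally handled in a Windows / Windows Phone 8.1 app. This isn’t the prototype’s actual code, and the lesson search call is a hypothetical stand-in for whatever source you wire it up to.

using Windows.ApplicationModel.Activation;
using Windows.Media.SpeechRecognition;
using Windows.UI.Xaml;

public sealed partial class App : Application
{
    // Cortana launches the app through OnActivated when a phrase from the installed
    // Voice Command Definition (VCD) file is recognised.
    protected override void OnActivated(IActivatedEventArgs args)
    {
        if (args.Kind != ActivationKind.VoiceCommand) return;

        var voiceArgs = (VoiceCommandActivatedEventArgs)args;
        SpeechRecognitionResult result = voiceArgs.Result;

        string commandName = result.RulePath[0]; // the <Command Name="..."> that matched, e.g. "findLesson"
        string spokenText = result.Text;         // everything the user actually said

        if (commandName == "findLesson")
        {
            // Hypothetical stand-in: search your own lesson source for the spoken subject
            // and show the results (the PiL site has no public API to call here).
            // e.g. NavigateToResults(LessonSearch.Find(spokenText));
        }
    }
}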

If you’d like to see something like this hit the stores, leave a comment below and I’ll pass it on!

Microsoft Partners in Learning & Cortana, an unofficial experiment in the future of Education Technology

Rest(ish) Toolkit for Windows 8 / 8.1

When I first started building apps on Windows 8, I wasn’t doing it that seriously. As I built more and more apps, it started to make sense that, given their content, perhaps some of them should talk to each other or back to a central server.

Having used RestSharp before, I was pretty keen to just drop that in and start firing off requests. It was fairly painful when I realised that it doesn’t support Windows 8 and I’d be left hacking together my own.

After several months, if not a year, of tinkering and updating across dozens of projects, I’ve finally produced a REST helper that works for me, and might hopefully work for you too. Keep in mind it’s still a bit hacky, but it’s better than nothing.

This helper is fairly primitive, but it does the basic things like handling basic authentication, and it lets you send and receive JSON objects. More than enough for most apps.
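
The full gist is linked at the bottom of this post, but to give a rough idea of its shape, here’s a minimal sketch of a static helper with the same Get / Post signatures, using HttpClient, basic authentication and JSON.NET. It’s my simplification for illustration, not the gist verbatim, and the base address and credentials are placeholders.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class ApiToolkit
{
    static readonly HttpClient Client = CreateClient();

    static HttpClient CreateClient()
    {
        // Placeholder base address and credentials; swap in your own server details.
        var client = new HttpClient { BaseAddress = new Uri("https://example.com/api/") };
        var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes("username:password"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);
        return client;
    }

    // GET an endpoint and deserialise the JSON response into T.
    public static async Task<T> Get<T>(string endpoint)
    {
        var response = await Client.GetAsync(endpoint);
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<T>(json);
    }

    // POST an object to an endpoint as JSON.
    public static async Task Post(object payload, string endpoint)
    {
        var json = JsonConvert.SerializeObject(payload);
        var content = new StringContent(json, Encoding.UTF8, "application/json");
        var response = await Client.PostAsync(endpoint, content);
        response.EnsureSuccessStatusCode();
    }
}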

It can then be used like this:

await ApiToolkit.Get<ObservableCollection<MyItem>>("MyEnd/Point");

and

await ApiToolkit.Post(myObject, "myEnd/Point");

My authenticate method fires off a GET (it should be a POST, don’t get me started…) to another endpoint to let me know which apps are connecting to the server most frequently (this lets me know which apps I should be updating!). You might want to tweak that a little depending on what your API looks like.

Always open to feedback, suggestions or pull requests.

https://gist.github.com/LucasMoffitt/8abf13630558e42faecb