Episode # 51

Researching EdTech with Dr. Megan Conrad

February 29, 2024

About This Episode

On this episode, Kara and guest co-host Matt Gerberick speak with Dr. Megan Conrad, Senior Researcher at ExploreLearning, about how strong, rigorous research helps her team decide how to implement technology in the classroom.

Guests

Dr. Megan Conrad

Dr. Megan Conrad is the Senior Researcher for ExploreLearning, an EdTech company that creates research-based digital learning tools for supporting critical K-12 STEM learning challenges, like scientific thinking and foundational mathematics skills. She works with district administrators, curriculum coordinators, and teachers to uncover evidence of student success from product usage, and helps districts make evidence-based decisions regarding product implementation. She earned her Ph.D. in Psychology (specializing in Developmental and Cognitive Science) from Rutgers University and has 12+ years of experience designing and conducting research in child development, learning environments, and youth & technology. She previously worked in higher education as an Assistant Professor of Psychology and founded a research lab that explored children’s STEM learning in informal learning settings.

Transcript

[00:00:00]

Kara: Joining us today is Dr. Megan Conrad, senior researcher at ExploreLearning, as well as my co-host, Matt Gerberick from SOITA. So hello to both of you.

Matt Gerberick: Good morning. Good morning, Kara.

Dr. Megan Conrad: Hello. Thank you for having me.

Kara: Yeah. So Megan, we like to start off this podcast always by asking people about their journey into education.

So I’m just going to ask you how you ended up in your role as senior researcher and kind of what led you to the educational field.

Dr. Megan Conrad: Yeah, I took a little bit of a roundabout way to get here. I think initially I wanted to be an elementary school teacher. I wanted to be a kindergarten teacher. That’s [00:01:00] where I saw myself thriving and having a lot of fun.

I always worked with kids, babysat, camp counselor, all of that. But then I discovered psychology in high school, taking it as an elective. And I became really, really interested and fascinated with it. So I said, you know, I’m going to major in psychology, thinking I would do clinical work with kids.

And then I realized that psychology is much more than just clinical work. I discovered research and the process of it and just fell in love. I realized that this is what I wanted to do: take these difficult questions about human behavior, about learning, and be able to explore them and answer them.

That was just a really cool process to me. So I went to graduate school at Rutgers. I got my PhD in cognitive and developmental psychology. I did a lot of research looking at children’s learning in everyday settings. So I was really interested in what learning happens outside of school. I looked at how parents and kids talk to each other, what’s being focused on, and what kids are learning from that.

I looked at watching television, reading books, [00:02:00] did some really cool studies at zoos and science centers, and all of those kinds of informal experiences. I worked as an assistant professor at William Paterson in New Jersey for a couple of years. I got tenure and then left the same year. You know, I think I encountered what I know a lot of teachers are encountering, that frustration point.

We were having mass layoffs. There were a lot of things that were no longer making it feel like the right fit for me. So I knew I wanted to stay working with children and doing something that to me felt as fulfilling as teaching did. And so that’s how I started to look into ed tech.

So I’ve been at ExploreLearning as their senior researcher for about a year and a half now. And it feels amazing. It feels like home. It gives me that, you know, ability to ask all of those research questions that I want to ask. But I still know that I’m impacting the thousands and thousands of children who [00:03:00] are using our platform every day in their classrooms.

So it’s been really fun to get to do this very applied research, working with real students and real teachers in real classroom settings, and knowing that I have this potential to make a really large impact with my research.

Matt Gerberick: So to kind of kick us off then into, you know, our work with teachers and ed tech in the K-12 setting, you know, a lot of times it’s tool first and then figure out what to do with it, kind of thing.

So can you highlight the significance of relying on high-quality, rigorous research when selecting ed tech for the classroom, rather than something that a tech director or someone finds at a workshop and brings back and tries to figure out how to make it fit?

Dr. Megan Conrad: Yeah. So in my role, I do a lot of work with administrators, curriculum coordinators, and teachers. And, you know, when I’m talking about research in ed tech selection, a lot of times I make sure that [00:04:00] people understand the importance of both being research based and being evidence based. So the first thing is that an ed tech product that’s designed really well should be research based, meaning that it is designed using insights from high-quality, timely, and relevant empirical research, right?

It should be grounded in the learning sciences. So a lot of products will have a white paper. I’ll talk a little bit about ESSA, but they might have an ESSA Tier 4 logic model. They have these kinds of things that show that the product was really grounded in empirical research. So we’ve designed it, you know, based on the learning sciences, and it should work for our students, right?

Because it is built on this high-quality, rigorous research. So I always say that educators and administrators should evaluate these studies so that they can make informed decisions about: is this going to work for my educational goals, for my curriculum, and for my particular student [00:05:00] needs, right?

So looking at who these studies were done with, that’s a big thing that I talk about a lot. They should support those learning outcomes that are relevant to you and your curriculum and your student needs. They should be aligned to research-based standards. They should be aligned to current research-based pedagogical approaches, right?

So do they have inquiry-based learning, project-based learning, all those kinds of great things? That’s the first thing that we look for, right? Is it research based? But then the next thing that I think is important is making sure that they’re also evidence based. So evidence based means that they are routinely engaging in empirical research to assess the efficacy of that tool in real-world classroom settings, right?

So yes, I’ve designed it based on this pedagogical approach. I’ve designed it based on this research. But does that mean that it actually works when I implement it with my students? Does it actually work, [00:06:00] you know, when we have all of these problems about implementation and all of the real-life things that get in the way?

So a company should be doing that ongoing evaluation. This is what my role is with ExploreLearning. I am the one who is doing this ongoing, constant evaluation, seeing: are the tools delivering on their promises, right? So this is where teachers also can take a really active role in making sure, is this doing what I need it to do in my classroom with my students?

Is it achieving those learning outcomes? Is it facilitating student engagement? Is it contributing positively, not just to academic success, but also things like curiosity, critical thinking, problem solving skills, right? I’m really interested in things like mindset. Is it fostering a growth mindset, right?

Is it helping my students discover a love of learning? All of those things should be routinely assessed with high-quality, rigorous research, both by internal researchers like myself who work for the company and by external [00:07:00] researchers, you know, academics, all of that. So that’s what I think is really important: to look at both that base of the research and that ongoing evidence.

Matt Gerberick: Yeah, I love all of that, and, you know, the evidence is an important part. Because as you go out and you try to work with teachers, or get teachers to, in some cases, still buy in, there’s that, well, how is this going to make things better? Because I’m going to have to take on learning something, and then how is this going to actually be used?

So, you know, there’s a lot of PD that’s done about here’s how to use something, like where the buttons are and here’s how you make this do this. Making sure that we also address the use of it, and what you alluded to there with the evidence base and how am I going to use this with my students.

So I think those are excellent points.

Dr. Megan Conrad: Yeah. And, you know, along that line, I know that we at [00:08:00] ExploreLearning have a lot of different options for professional development. And when I’m working with district administrators, I talk about the process of research not as evaluative. Like, I’m not trying to see if your teachers are doing the right thing. This isn’t a test.

It’s not a gotcha thing. You know, what I say is, I’m evaluating it to see where are we seeing success and where are we not seeing success. And then how do we tailor things like professional development? What kind of messaging do we need to give to teachers? What do we need to find out about teachers, about how they’re using it or how they’re experiencing outcomes, so that we can support those most at risk, those who aren’t, you know, maybe implementing it well or getting the benefits that they expected.

So we’re really thinking about research as an ongoing process to help support and facilitate teachers in their use of the ed tech.

Kara: Okay. So I’m curious, and it’s awful, I don’t remember the [00:09:00] abbreviation that you mentioned for evaluation. What was that abbreviation? So can you talk a little bit about what exactly that is?

And then maybe it will help lead into strategies that you could offer educators, or overall districts, to find and evaluate the efficacy of research associated with various products, ed tech products specifically.

Dr. Megan Conrad: Yeah. So I personally rely pretty heavily on the ESSA tiers of evidence.

These are put forth by the Institute of Education Sciences, and they talk about four different tiers for determining the type of research, the type of evidence, that was done to show the outcome of that product. So there are four tiers. The lowest tier, Tier 4, would be this white paper or this [00:10:00] logic model, right?

So it shows that the product was built on research, and these are the ways that we expect the product to lead to learning outcomes, right? So that’s kind of the lowest step. And then as you go progressively higher, it gets more and more rigorous in terms of the type of study, right? So you might have a correlational study, which would be an ESSA Tier 3.

So we’re just looking at the product in use: are students who are using it more doing better, right? And so that would be a great thing to see, but it also could be, well, maybe just the most prepared students are having the easiest time using it, and that’s why they’re achieving better. So then the next step, Tier 2, would be something like a quasi-experimental study, so we might do something a bit more rigorous in terms of making sure that students are equivalent at baseline.

I do a lot of this research because it’s very hard to do pure experimental research in a school setting, right? So a lot of what I do is say, okay, are they [00:11:00] statistically equivalent at the beginning of the year? And then we compare the students who used it a lot versus those who used it less, and what the outcomes were.

So we can have more confidence in the tool. A Tier 1 study would be what we call an RCT, a randomized controlled trial. This is like what you would do if you were testing a new medication, right? Something where we really need to be sure: okay, you’re randomly assigned to get the treatment or the control.

What is the difference that we see? So that would be like the ultimate in terms of the rigor of the study, but it’s very, very difficult to practically do that in a school setting. You often need to have some kind of very large grant to support it. So the ESSA tiers of evidence are helpful for people.

If you see that a company is releasing an ESSA Tier 2 study, we know that that’s a little bit more rigorous than an ESSA Tier 3 study. This also can align with WWC, which is another acronym that people might be familiar with in terms of research: the What Works [00:12:00] Clearinghouse. This is a space where studies that are in certain academic databases will get reviewed, and they’ll determine the level of the study.

So it maps on a little bit to the ESSA tiers, but not perfectly. These are good resources, right? So you can look in the WWC and you can see, is my product listed there? The problem with WWC is that it’s only for publications that have gotten into the ERIC database or other kinds of databases. So if a company is doing something internal, like if I’m doing something internal, it might not even be eligible to be reviewed.

So some studies are in there and not all studies are. So if you don’t see a product in something like WWC or Evidence for ESSA, it doesn’t mean that they haven’t done the work; it doesn’t mean that it’s not a solid program. It just, for one reason or another, hasn’t been eligible for review. We can also look at websites like LearnPlatform.

That’s another way to look. They’re a body [00:13:00] that often reviews research, so they can stamp a study as a Tier 2 or as a Tier 1. There is no official governing body that stamps something as a Tier 1, 2, 3, or 4, so anybody can determine it based on the guidelines.

So that’s where it can get a little bit tricky. Digital Promise product certifications are another place to look. So some of ExploreLearning’s products have a couple of their product certifications that’ll tell you that this is a research-based product. They’ve gone through the review, and they’ve determined that this is research based.

So those are just some of the places. You can, of course, go to a company’s website and see what else they have that may not have made it into those kinds of databases.
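As a rough reference for the tiers discussed above, here is a minimal Python sketch. The tier labels are paraphrased from this conversation and common ESSA evidence-level summaries; this is not an official taxonomy:

```python
# Rough, paraphrased summary of the four ESSA tiers of evidence discussed
# in the episode (1 = most rigorous). Not an official definition.
ESSA_TIERS = {
    1: "Strong evidence: randomized controlled trial (RCT)",
    2: "Moderate evidence: quasi-experimental study with baseline-equivalent groups",
    3: "Promising evidence: correlational study (e.g., more usage, better outcomes)",
    4: "Demonstrates a rationale: white paper or logic model grounded in research",
}

def describe_tier(tier):
    """Return a short description for an ESSA evidence tier."""
    return ESSA_TIERS[tier]

print(describe_tier(1))  # -> Strong evidence: randomized controlled trial (RCT)
```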

Kara: Yeah. I honestly didn’t realize, as a teacher, that those types of places that offered research evidence existed. And maybe, you know, maybe that’s bad to admit. I don’t know.

Dr. Megan Conrad: No, I think that it is a real struggle. I do a [00:14:00] lot of that in my work as well, of educating people on research: what does research mean? How do we interpret it? Where do we look for it? How do we understand it? You know, when I was a professor, I published a lot of papers. I’m sure that maybe 20 people read them, in my very little circle of academia, people who are doing the same research studies as me.

It can be really hard to find research and then to know what research to trust. So I also think, you know, in addition to professional development on products, I love when teachers get professional development about understanding data literacy. I know that we think about data literacy as something we teach our students.

But, you know, there are a lot of aspects of research and data literacy that I think it would be really beneficial for people to learn more about, get more information about.

Kara: Yeah, for sure. Because, I mean, we do a lot of data collection, and, you know, you read the data, but I wouldn’t say [00:15:00] you dive deep into the data necessarily, unless you know exactly what you’re looking for or need to accomplish something. Or there’s, you know, new curriculum coming. There are instances where you dive deeper, but for the most part, on the daily, you’re just looking for a certain piece of data, and once you’re getting it, you can see, and then you adjust accordingly and you move forward. There’s not really, like, that deep dive all the time.

So that’s really interesting.

Dr. Megan Conrad: Yeah, and, I mean, I know teachers already have a million and one things on their plate. It’s not something that I would expect teachers to be doing on the regular. But it’s why, in my communications with administrators and curriculum coordinators, I also say, you know, I’ve written you this 10-page research report; I want to put it in a one- or two-page brief, and then you can share that with your teachers.

I want to distill the points from this data that are most impactful for teachers to understand. So, for instance, I did a study recently. We have this thing in one of our products called a green light.

So in Reflex, there’s this green light, and you want your students to achieve a green light in their usage. So it’s not about, oh, use it for 15 minutes, 20 minutes, 30 minutes. It’s: your student should get to this green light. And this is something that we’ve programmed in to say that a student has reached a certain level of fidelity in their usage of the program for the day.

So we’ll tell teachers in professional development: go for the green light, have your students get the green light. But does that message always get across? I’m not so sure. So what I did recently with one district is I looked at students and their green-light usage. And I compared students who were identical at baseline, and then compared the students who got the green light almost every day in their usage versus students who didn’t.

So they were still using it often, but they weren’t going to that measure that we want them to go to. And I showed huge statistical differences [00:17:00] between the students who used it with fidelity and those who didn’t. That is a data point that I want to get to teachers. I want them to see: this is why we’re telling you.

We’re not just telling you to tell you. We’re not, you know, we’re not telling you because we want to sell more product. We’re telling you because we’ve designed the program to work in a very certain way, and we want to make sure you’re using it with fidelity. So then making sure that I am telling teachers that in a way that is accessible, and, you know, that they get that information regularly, I think is really, really important.
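The green-light fidelity grouping described here might be sketched like this. The 80% threshold, the daily True/False log format, and the function name are hypothetical illustrations; Reflex’s actual green-light criterion is internal to the product:

```python
# Hypothetical sketch of grouping students by "green light" fidelity.
# The 0.8 threshold and the True/False daily log are illustrative assumptions,
# not the product's real criterion.

def fidelity_group(daily_green_lights, threshold=0.8):
    """Label a student by how often they reached the green light on usage days."""
    rate = sum(daily_green_lights) / len(daily_green_lights)
    return "with fidelity" if rate >= threshold else "lower usage"

# True = reached the green light that day; False = used the program but fell short.
student_a = [True, True, True, False, True]    # 4 of 5 days
student_b = [True, False, False, True, False]  # 2 of 5 days

print(fidelity_group(student_a))  # -> with fidelity
print(fidelity_group(student_b))  # -> lower usage
```

An analysis like the one described would then compare outcomes across these two groups, after confirming they started out statistically equivalent.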

Matt Gerberick: So I think maybe you [00:18:00] touched on that with your answer, but you mentioned teachers and the amount of time that they have, and in some districts, you know, especially smaller districts, they don’t have a curriculum coach, or their administrators maybe have other things that they are required to do.

And so the time isn’t there to do the in-depth pieces that would be nice to be able to do. So in your opinion, what specific criteria or indicators should educators look for when assessing the reliability or validity of these efficacy studies? So like the ESSA research and things like that. What are some things, like, if you’ve got 15 minutes, or a half an hour?

I mean, maybe the timing could be longer than that, but what would be the first place to go for determining validity and reliability?

Dr. Megan Conrad: Yeah, that’s a really great question. So I think there are a couple of things. Like you said, looking for the ESSA tier of the study: what was the methodology of the study?

Was it a correlational study? Was it a quasi-experimental [00:19:00] or experimental study? And if you can’t even find those words, they’re probably trying to hide it from you, right? So looking for those kinds of things about how the study was done is important. I also think external validity is really, really important, right?

Meaning, how well does it apply to different groups of students in the real world? What is the population? If you can’t even find the population of the study, I think that’s another one. You want to make sure that the population of the study looks like the population of your students, right? So what are the demographics of the students?

Is it a high-resource or a low-resource area? Is there a lot of demographic variability? Is it a private or public school? Looking at those kinds of indicators to see, like, this looks like my students, or this doesn’t look like my students. It’s not the end-all be-all, but I think it is important for thinking about, you know, is this a study that’s going to tell me how this product is going to work with my students?

You know, so looking at the sample size, that’s another thing that’s really important, right? That’s already built into the ESSA tiers, that it has to [00:20:00] be a minimum sample size of, like, 300. So looking: was this done with 50 students? Was it done with 500 students? Was it done with 10,000 students?

You know, looking at those kinds of indicators of the study, I think, is important.

Kara: I guess on that same note, the importance of that ongoing piece. So, like, how does that kind of work, in your experience with the ongoing monitoring? Like, is it ongoing monitoring of the same students, or is it just continuous ongoing monitoring of students using a certain product?

Or, like, what does that look like? And what is the importance of that ongoing piece?

Dr. Megan Conrad: Yeah, so, I mean, I think, first of all, the tools should have something built in that’s providing real-time data, so you’re monitoring individuals. So, for instance, our Gizmos STEM Cases product [00:21:00] has a heat map that fills out in real time as students are going through it.

Right? So something like this that’s constantly giving you new data on how they’re doing allows for more timely intervention, more timely support. But the thing about ongoing evaluation is that student needs are constantly changing. The environment is constantly changing. We went through COVID; that radically changed how we think about educating and what our students really need, right?

So, you know, I don’t think we could ever say, okay, we did the research study, it worked with our students, let’s just keep using this. It’s important that we are constantly looking at research as this cycle of continuous improvement. And so this isn’t just student data; it could be things like, you know, are we surveying our teachers and asking them, is this still working for you?

What’s working about it? What’s not? I lead focus groups a lot. I’ll come into different districts, and I’ll sit down with a group of teachers who are using the product pretty heavily, and I’ll ask them: [00:22:00] How are you using it? Why are you using it in this way? What are your problem points? Or even reaching out to those teachers who aren’t using the tool as much, even though they have a district-wide license, and finding out: well, why aren’t you using it? What is the problem?

So, you know, knowing that things are constantly going to change in terms of what our students need, whether the standards change or our ideas about the best pedagogical practices change, we’re making sure that we’re continually questioning and looking at this data in different ways. Like I said, it might be data that’s from within the system, like this real-time data monitoring or these data updates that you might get about student progress.

Looking at external student performance that might be related to the use of the tools, right? So just thinking about, you know, why are you using this tool? Is it because you want to see improvements on some standardized test? Is it because you want to see improvements in, I don’t know, students choosing STEM careers?

[00:23:00] Like, what is the point that you want to be evaluating, and does that need to change at other times? We might need to reassess and decide, you know what, I want to see improvements on this metric instead of that one. So that’s what I mean by this ongoing, continual monitoring. And again, this isn’t fully led by teachers; this totally has to be a larger shift. You know, I also think it’s really important for administrators to create an environment where this is seen as the norm and, again, is seen as not evaluative, right? Not as a test of a teacher, not as a test of how well you’re doing, but as a continuous cycle of: we want to do the best for our students. So let’s make sure that things are working for you, that this is working in your classroom, and that it’s providing the help that we want it to.

Matt Gerberick: Yeah. And, you know, I think you mentioned the shift since pre-COVID, and how ed tech was [00:24:00] viewed as, you know, sometimes kind of something that’s nice, and we found how vital it can be. So maybe this is a chance to talk more specifically about the different products and so forth that you have.

So my son’s in sixth grade, and he’s in an advanced or accelerated math class. How could these tools be utilized in a situation like that, versus worksheet packets, quizzes, traditional sit-and-get kinds of things? So how could this be utilized in that kind of an environment?

Dr. Megan Conrad: Yeah. And that’s a great point, right? We’ve seen a large shift from pre-COVID, to the immediate pandemic times, and then to now, in terms of how we’re viewing ed tech. And like you said, before, it was just kind of a nice thing or a fancy thing. Like when I was a professor, you know, everybody would ooh and aah because I was the youngest person in the department and I was using these cool tech tools, right?

Like, [00:25:00] you know, it was just seen as, ooh, look at this, right? And then it was, man, we really need this, because we’re in this setting that is not typical. And now we’re seeing a little bit of a reevaluation period of, okay, what is vital? You know, are my students spending too much time on their computers?

And this is where I think this evaluation piece really comes into play, and why it’s so important as we’re thinking about that. So for our products in particular, we have four products that are K-12 science and math. We have Reflex, which is our math fact fluency program, so that’s for, you know, those younger grades.

We have Frax, which is a fractions program; we usually start students around third or fourth grade on Frax. That’s our newest program. We have Science4Us, which is K through 2 science and literacy put together. And then we have Gizmos, which goes from elementary through high school, and our heaviest users of that program are probably late middle school and high school.

And in [00:26:00] all of these, the benefit of the program is that it hits students where they are. So, you know, especially with Reflex and Frax, it is tailored to how students are performing in the program. You know, there’s a lot of talk about the benefit of one-on-one tutoring, and I think we all have heard that research.

We know that that’s awesome, but most districts do not have the resources to implement one-on-one tutoring all the time. Teachers do not have the ability to, you know, create a different worksheet for every single student in their class and constantly assess where they are and where they need to go.

So the real value of those programs is that they do that for you. It sets those students where they need to be, and it allows you to take a step back and look at: okay, where are students needing more support? Where are the students who are excelling, and how do I support them? Where are the students who are struggling, and how do I support them?

And so it takes a lot of [00:27:00] that burden off of you. Gizmos is a great product in that it is, again, really, really flexible. It allows teachers to tailor the materials and fit them to their learning standards and their curriculum. And again, there’s our heat map in our STEM Cases, which are really cool.

They, you know, let students take on the role of being a scientist and answering certain kinds of questions. And it gives you that real-time data that, again, is so hard for one teacher to produce when they have a full classroom of students. So I do think it’s important to ask the questions of: is this benefiting my students, and not just a cool thing to be using, or, you know, something that I’m doing just for accountability, to check that box off? But I think that all of our products, as supplemental products, really do what ed tech should do, which is supporting teachers, taking some of that effort off of their plate, allowing them to differentiate for students, and [00:28:00] then also giving them that feedback, that data, that piece that they need to see to track students.

Kara: It’s like personalized learning for each of your students.

Dr. Megan Conrad: Yeah, absolutely.

Kara: Yeah, that’s awesome. And I love that it, in itself, evaluates where the student is, as opposed to, I mean, with a worksheet, everybody’s getting the same thing. Like you said, you don’t have time to create 30 different worksheets based on where they are. And so that’s where ed tech can come in, where it can assess where the student is and take them from there with the correct activities.

Matt Gerberick: And then being able to grade all those things.

Dr. Megan Conrad: Oh, yeah, definitely. No, you know, like I said, I understand the concern about, are we relying too much on this? Are kids spending too much time on this? But, you know, like you said, this gives you [00:29:00] the benefits, takes that time off of your plate. And, you know, because I’m doing this research every day, we know that it works. We are seeing amazing, significant gains in these students, even when they’re equivalent at baseline.

I literally just finished an analysis yesterday where I looked at students with completely identical baselines and then looked at the difference between students in terms of their fluency growth. So students were starting at, like, 10 percent fluency at the beginning of the year. Students who used it just a little bit got to, like, an average of 30 or 40. Students who used our program with fidelity were ending at, like, 85, even though they all started at the exact same point. So it is helping the teacher. It is taking that off their plate, giving them what we know is research based, telling us that students will have the best outcomes, and then evidence based in terms of: we see the outcomes. [00:30:00]
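The quasi-experimental comparison described throughout the episode, confirming that groups are equivalent at baseline and then comparing outcome growth by usage level, can be sketched roughly as follows. All scores, group names, and the tolerance check are illustrative placeholders, not ExploreLearning data, and a real ESSA Tier 2 study would use proper statistical tests and controls:

```python
import statistics

# Illustrative sketch of a baseline-equivalence comparison.
# All scores below are made-up placeholders, not real student data.

def baseline_equivalent(group_a, group_b, tolerance=0.02):
    """Crude check that two groups start at roughly the same mean fluency.
    A real Tier 2 study would use a proper statistical test instead."""
    return abs(statistics.mean(group_a) - statistics.mean(group_b)) <= tolerance

# Start-of-year fluency (0.0-1.0) for two usage groups, equivalent at baseline.
high_usage_start = [0.10, 0.11, 0.09, 0.10]
low_usage_start = [0.10, 0.09, 0.11, 0.10]

# End-of-year fluency: high-fidelity users vs. lighter users.
high_usage_end = [0.85, 0.88, 0.82, 0.86]
low_usage_end = [0.32, 0.38, 0.30, 0.35]

# Only compare growth if the groups started out (roughly) the same.
if baseline_equivalent(high_usage_start, low_usage_start):
    growth_high = statistics.mean(high_usage_end) - statistics.mean(high_usage_start)
    growth_low = statistics.mean(low_usage_end) - statistics.mean(low_usage_start)
    print(f"High-fidelity growth: {growth_high:.2f}")
    print(f"Lower-usage growth: {growth_low:.2f}")
```

Because both groups start at the same mean, the difference in growth can more plausibly be attributed to how the program was used rather than to which students used it.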