The quest to understand how the COVID-19 pandemic has affected college-level learning is understandable, especially as colleges and universities seek to address any setbacks students have suffered while many hope to return to more “normal” learning environments this fall.
With that goal in mind, more researchers will likely follow the lead of economists at Auburn University, the University of Southern Mississippi, and American University, who released a working paper this week through the National Bureau of Economic Research in which they use a large-scale data set from a public research university to compare how in-person and online study affected course completion rates and student grades before and after the pandemic.
They find that, after accounting for differences in the characteristics of students and instructors, students in face-to-face classes “outperform their online counterparts when it comes to their grades, their propensity to withdraw from the course and their probability of receiving a passing grade.” The researchers say those findings hold both before and after the pandemic struck in spring 2020.
That finding leads them to title their paper “Does Online Education Work?” Overall, their answer is no.
Shanna Smith Jaggars, associate vice president of research and program evaluation in the Office of Student Success at Ohio State University, described the paper as a rare “rigorous” study of online learning with a large sample, making it a “welcome addition” to the literature. She said the paper’s findings that students with weaker academic backgrounds have more difficulty in virtual classes, and that grades were inflated in the spring 2020 semester, were compelling.
But several experts who study learning across modalities say the study has methodological flaws and overreaching conclusions, which they attribute to the researchers’ limited knowledge of online education and a possible bias against it.
They are particularly troubled by the pandemic-related sections of the study’s results, which “don’t acknowledge that this happened during a pandemic, and these are not ‘normal’ online courses,” said Jeff Seaman, director of Bay View Analytics and a leading educational technology researcher.
Duha T. Altindag, associate professor of economics at Auburn and lead author of the study, said the outbreak of COVID-19 motivated the researchers to revisit the long-standing debate over the effectiveness of online education compared with in-person education, given predictions that the industry’s broad (albeit temporary) pivot would lead to wider adoption of virtual learning in the future. (The co-authors are Elif Filiz, assistant professor of economics at the University of Southern Mississippi, and Erdal Tekin, research associate at American University.)
“Given this perspective,” the authors write, “it is all the more important to have a full understanding of the impact of online education on student learning in general and during the COVID-19 pandemic in particular.”
In an attempt to provide that understanding, the authors compare performance data for about 18,000 students in fully in-person courses versus fully online courses at an unnamed “mid-sized public research university” in the Spring 2019, Fall 2019, and Fall 2020 semesters. Their measures of student “learning outcomes” (course completion and grades) do not capture what students actually learned, if anything, but that is a limitation common across higher education.
Looking only at student performance in the pre-pandemic semesters (Spring and Fall 2019), the researchers found no substantial difference in completion rates, but they report that students in face-to-face classes were five to seven percentage points less likely to earn a high grade (an A or B) than their peers in online courses. That gap in final grades between online and face-to-face students narrowed in spring 2020, however, because even “face-to-face” students found themselves in emergency remote courses.
That is not surprising, the authors note, given that many institutions and individual instructors – recognizing the pandemic’s damaging effects on students’ physical health, mental health and more – adopted more flexible policies on things like grading, homework and attendance.
The researchers speculated that the apparent advantage of online students over their face-to-face peers might stem from external factors, “such as grade inflation caused by lenient grading by instructors teaching online courses or more widespread violation of academic integrity in those courses.” So they applied a series of filters designed to account for heterogeneity – “student and instructor fixed effects” – to the data.
One important effect related to instructors’ grading policies. When they account for differences in those policies, the researchers report, they find that the results flip: “Students in face-to-face classes in fall 2019 were 2.4 percentage points less likely to withdraw from their course and 4.1 percentage points more likely to receive a passing grade.” (The other grade-related results were statistically insignificant.) This leads them to conclude that “instructors teaching online courses may be more lenient in their approach to grading than instructors teaching F2F courses.”
The authors apply a similar filter to see whether the apparently higher grades of students in online courses could be caused by “students engaging in academic integrity violations if laxer supervision by instructors leads to more cheating.” To find out, they use information from the university’s online proctoring service, which leads them to conclude that students in courses (face-to-face and online alike) whose instructors use the online proctoring service for exams earn lower grades than students whose instructors do not proctor exams.
Finally, to assess how students’ academic “quality” affects their performance in online versus face-to-face classes, the researchers compare students in the university’s honors program with its other students. They find that honors students perform equally well regardless of the mode of instruction, while non-honors students perform better face-to-face.
The scores, if not hundreds, of studies examining the comparative performance of online and face-to-face learning have mostly found “no significant difference” in student outcomes. But the subject remains contested enough that any well-designed (or even somewhat flawed) piece of research stimulates discussion and debate.
The NBER study deserves attention, according to Jaggars of Ohio State, because “there are only a handful of rigorous studies on online learning with large sample sizes,” and this is one of them. Jaggars says the study largely reinforces previous findings that more academically prepared students do better in online courses than their less prepared peers, and describes the claim that cheating students may inflate grades as “interesting” but not fully reliable, because the authors had limited data to support it.
Others who reviewed the study offered much harsher assessments.
Seaman, of Bay View Analytics, said the parts of the study related to what happened during the pandemic should be “totally ignored,” because the researchers – comparing data from that period with what happened before – did not take into account the enormous differences.
“We know from a lot of other research (ours included) that most of the professors who taught online during this time had never done so before and had to go online without having time to plan,” Seaman said via email. “We also know that they said they were under considerable stress because of it. The main concern of institutions at that time was student stress. Yet the discussion reads as if these were normal times, and the conclusions could be applied generally.”
Deb Adair, president of Quality Matters, a nonprofit group that focuses on improving and ensuring the quality of online education, said the paper focuses on “instructor and student fixed effects,” but “they are neither operationalized nor fully discussed,” leaving readers in the dark about them.
Adair also said the researchers are making assumptions that reveal either ignorance or prejudice.
For example, in comparing the approaches used by instructors in online and face-to-face classes (including instructors who have taught in both modalities), “their assumption … is that an instructor, or a student, would approach teaching and learning in the same way regardless of whether the course is F2F or online,” and any difference in outcomes “must be due to the modality,” Adair said via email. “What is sorely missing is the strong impact of online course design and of instructor training in online teaching (or the lack thereof) that might explain the differences … the same goes for other things that might explain the differences between an instructor’s student grades in F2F versus online courses, particularly how the institution supports, or does not support, online education.”
And attributing the grading differences between the online and in-person courses to “instructor leniency and breaches of academic integrity in the online course” ignores the intentional, and often well-founded, differences between the two modalities. “Well-designed online courses can have totally different types of assessment than their face-to-face counterparts – replacing midterm/final exams with authentic assessment,” Adair said. “I should also point out that well-designed courses using authentic assessment instead of midterm and final exams won’t need remote proctoring. Cheating, in that sense, is not a problem. Instead, the authors speculate that the absence of remote proctoring means instructors are less vigilant about cheating, and not that they are using better, research-based assessment practices.”
Asked to respond to some of the criticisms of the paper, Altindag, the Auburn economist, conceded that the data from the Spring 2020 pivot to remote learning should be viewed differently from the other data in the paper.
“But it’s not about COVID; it’s about whether online education works, above all else,” he said. “I am convinced that if we dropped the spring data, we would reach the same conclusion. If you took COVID out of this year, I would not change the text.”