Making Instructional Videos Law Students Love


Quimbee ran a video satisfaction survey and found that students prefer illustrated videos to talking-head or PowerPoint-style videos.

Dewald, M. Sellers, N. Quesenberry, D. Swearingen

Quimbee, Inc.

Abstract

Quimbee ran a satisfaction study to examine the impact of visual representations on student satisfaction. Quimbee created three visual representations: talking head (meaning a lecturer in front of a camera), PowerPoint, and multimedia. Each visual representation was combined with a narration track and took the form of a learning video. Student responses to survey questions along four constructs (narration and audio, visual design, comprehension, and holistic) found a strong and often significant preference for multimedia-based visual representations. The results of this study inform the instructional design of legal-education content and suggest that to accomplish instructors’ cognitive and affective goals, appropriate multimedia-based learning visuals should be employed whenever possible.

Introduction    

Learning with multimedia

Multimedia-based instructional videos are quickly becoming one of the most common forms of delivering instruction online. Numerous companies like Khan Academy create videos for science and mathematics; Quimbee creates videos for legal education; and individual teachers across the world have uploaded hundreds of millions of instructional videos to YouTube for purposes of instructing students, especially in blended classrooms. A primary reason is presumably that instructional videos are becoming less expensive to create and can often be produced on any modern computer with a minimal investment in hardware. However, research has not yet caught up to the implementation and use of these types of learning tools.

Research has shown a distinct advantage to learning with multimedia (Mayer, Bove, Bryman, & Mars, 1996). Mayer et al. (1996) investigated the effects of diagrams on learning about lightning and electricity. They found that students performed better on recall and transfer posttests when they read a passage of text that was augmented with simple illustrations compared to a passage of text alone. The advantage of multimedia representations over text-only representations can be attributed to the multimodal nature of information in multimedia (Clark & Mayer, 2016). Although text-only representations take advantage of a verbal representation only, a multimedia representation can also provide a visual representation of the same information, allowing learners to encode the concept more deeply using both representation types. This effect, called the multimedia effect, has yielded better recall of information, as well as deeper, transferable learning.

Few studies exist that examine the impact of visuals on student satisfaction during learning. The objective of this research study was to examine the effect of three visual-representation treatments on the perceived satisfaction of learners: talking head, PowerPoint, and multimedia. Specifically, we wanted to know the impact of the three visual-representation types on satisfaction along four constructs: narration and audio, visual design, comprehension, and holistic.

Methods

Participants and design

The participants of this research study were 260 students enrolled in law school who were Quimbee subscribers. Quimbee displayed a banner at the top of the log-in landing page asking students if they would like to participate in the study. Quimbee offered a $25 gift card to students who finished the entire protocol.

Because this was an online study with no in-person researcher supervising, students controlled the environment in which they participated. They could have participated at home or in other locations, as the survey was formatted to work on mobile devices; this information was not captured or tracked by the research team. In addition, students could start the protocol without finishing it. Therefore, the number of students in each condition was not completely balanced.

The survey used a three-condition repeated-measures design. Students saw three videos in total, one of each visual-representation type: a talking-head video of an instructor speaking to the camera (n = 92), a PowerPoint-based video containing bullet points and text (n = 92), and a multimedia video combining illustrated visuals with text (n = 76). Each video's narration track was based on the talking-head recording. Each of the three videos covered a different topic drawn from three separate areas of law. The order of visual representations and the video topics were counterbalanced and randomized per student. Students could participate only once; participation was tracked by flagging their accounts.
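The per-student randomization and counterbalancing described above can be sketched roughly as follows. This is an illustrative reconstruction in Python, not Quimbee's actual assignment code, and the labels are hypothetical: each student draws one row of a 3×3 Latin square for the representation order and one for the topic order, so every student sees each type and each topic exactly once.

```python
import random

# Hypothetical labels standing in for the study's conditions and topics.
TYPES = ["talking-head", "powerpoint", "multimedia"]
TOPICS = ["first-amendment", "contracts", "torts"]

# Rows of a 3x3 Latin square: each index appears once per row and column.
LATIN_SQUARE = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]

def assign(student_seed):
    """Return the three (type, topic) pairings one student will see, in order."""
    rng = random.Random(student_seed)
    type_row = rng.choice(LATIN_SQUARE)   # order of representation types
    topic_row = rng.choice(LATIN_SQUARE)  # order of topics
    # Zipping two permutations guarantees each type and topic occurs exactly once.
    return [(TYPES[t], TOPICS[p]) for t, p in zip(type_row, topic_row)]
```

With enough students, independent random row draws approximate full counterbalancing of both orders and of the type-topic pairings.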

Materials

Participants were presented with three different sets of materials during the course of this study: instructional videos, a satisfaction survey, and a demographic survey. A custom-designed web application delivered all materials.

Instructional videos. The instructional videos presented three first-year legal concepts: the First Amendment (protected and symbolic speech), contracts (performance), and torts (misdemeanor manslaughter). Each of the instructional videos was embedded on a web page. Controls, such as pause, stop, and fast-forward, were provided, so learners were able to control how they viewed each video. The average video length was 395 seconds (6 minutes, 35 seconds).

Satisfaction survey. The satisfaction survey asked participants to provide their opinions on the video they had just watched. The survey measured student sentiment along four different constructs: narration and audio, visual design, comprehension, and holistic. Each of the questions attempted to measure student satisfaction on those constructs.

Demographic survey. The demographic survey consisted of six questions about the characteristics of the participant (e.g., gender, age, year in law school) and the participant’s preference of learning-video type.

Procedure 

Participation was anonymous, and participants were assigned random numbers at the beginning of the research protocol. After reading the instructions and agreeing to participate, students watched a video that provided a high-level overview and instructions about what they would be doing. After viewing the overview video, students then moved into the experimental phase. Students watched three videos. For each video, students were randomly assigned a visual-representation type and topic. Students saw one of each visual-representation type and one of each topic. Upon completion of the video, students then took a 14-question survey that measured their sentiments about the video just viewed. After completing the survey, students moved to the next visual-representation type and topic, followed by the survey. Upon completion of all three videos and surveys, students then took a holistic survey meant to measure their sentiments about the videos overall. Upon completion of the holistic survey, students were notified that the protocol was complete and that they would receive their compensation within four to six weeks. 

Results

In all analyses, a standard alpha level of .05 was used. The results are grouped into four constructs: narration and audio, visual design, comprehension, and holistic.

Narration and Audio   

Question 1: How do you feel about the way that concepts were explained in this lesson video? The frequency distribution can be seen in Figure 1.

Question 7: How effective were the visuals used in this video at helping you understand the narration? A repeated-measures analysis of variance (Repeated ANOVA) was conducted using the visual-representation type as the independent variable. Dependent measures were the responses to each question. Mauchly’s test of sphericity was violated, so the Huynh-Feldt correction was used in interpreting effects. Visual representation had a significant effect on the perception of the visuals helping students understand the narration (F(1.885, 488.162) = 106.332; p < .001). A post-hoc test using the Bonferroni correction revealed that multimedia visual representations were perceived as more helpful than talking-head videos (p < .001), but not than PowerPoint visual representations (p = .141). PowerPoint visual representations were also perceived as more helpful than talking-head visual representations (p < .001). Table 1 shows the estimated marginal means of helpfulness of understanding. Figure 7 shows the graph of the estimated marginal means of helpfulness of understanding.
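For readers who want to reproduce this style of analysis on their own survey data, the core computation of a one-way repeated-measures ANOVA can be sketched as follows. This is an illustrative reconstruction in Python (NumPy/SciPy), not the authors' actual analysis code, and it omits the sphericity test and Huynh-Feldt correction reported above.

```python
import numpy as np
from scipy import stats

def rm_anova(data):
    """One-way repeated-measures ANOVA.

    data: (n_subjects, k_conditions) array of ratings, one row per student.
    Returns the F statistic and its p-value (sphericity assumed).
    """
    n, k = data.shape
    grand = data.mean()
    # Between-conditions variability (the effect of interest)
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    # Between-subjects variability (removed from the error term)
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    # Residual (condition x subject) variability
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, stats.f.sf(f, df_cond, df_err)
```

Each row of `data` holds one student's ratings across the three visual-representation types. When sphericity is violated, the degrees of freedom would be multiplied by a correction factor (e.g., Huynh-Feldt) before computing the p-value, which is why the reported degrees of freedom above are fractional.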

Question 8: How do you feel about the organization of the content of the lesson video? A Repeated ANOVA was conducted using the visual-representation type as the independent variable. Dependent measures were the responses to each question. Mauchly’s test of sphericity was violated, so the Huynh-Feldt correction was used in interpreting effects. Visual representation had a significant effect on perception of content organization (F(1.973, 510.942) = 14.339; p < .001). A post-hoc test using the Bonferroni correction revealed that multimedia visual representations were perceived as more organized than talking-head videos (p < .001), but not than PowerPoint visual representations (p = 1). PowerPoint visual representations were also seen as significantly more organized than talking-head videos (p < .001). Table 1 shows the estimated marginal means of organization. Figure 8 shows the graph of the estimated marginal means of organization.

Question 9: Was the pace/speed of the explanation: The frequency distribution can be seen in Figure 9. 

Question 10: What was your perception of the quality of the audio heard in this lesson video? A Repeated ANOVA was conducted using the visual-representation type as the independent variable. Dependent measures were the responses to each question. Mauchly’s test of sphericity was not violated. Visual representation had no significant effect on the perception of audio quality (F(2, 518) = .433; p > .05). Table 1 shows the estimated marginal means of audio-quality ratings. Figure 10 shows the graph of the estimated marginal means of audio-quality ratings.

Visual Design

Question 2: How do you feel about the examples that were used in this lesson video? The frequency distribution can be seen in Figure 2.

Question 3: How do you feel about the length of this lesson video? The frequency distribution can be seen in Figure 3.

Question 4: How professional were the visuals contained in this lesson video? For proper analysis as a continuous variable, and due to the low number of responses, the top category option was collapsed into the adjacent lower category. A repeated-measures analysis of variance was conducted using visual-representation type as the independent variable. Dependent measures were the responses to each question. Mauchly’s test of sphericity was not violated. Visual representation had no significant effect on the perceived professionalism of the visuals (F(2, 518) = .381; p > .05). Table 1 shows the estimated marginal means of professionalism, and Figure 4 shows a graph of the estimated marginal means of professionalism.

Question 5: How enjoyable did the visuals used in this lesson video make learning the concept? A Repeated ANOVA was conducted using visual-representation type as the independent variable. Dependent measures were the responses to each question. Mauchly’s test of sphericity was violated, so the Huynh-Feldt correction was used in interpreting effects. There was a significant effect of visual representation on enjoyment of the visuals (F(1.95, 504.91) = 74.469; p < .001). Post-hoc tests using the Bonferroni correction revealed that multimedia visual representations were preferred significantly more than PowerPoint (p < .001) or talking-head (p < .001) visual representations; PowerPoint visual representations were also preferred more than talking-head visual representations (p < .001). Table 1 shows the estimated marginal means of enjoyment. Figure 5 shows the graph of the estimated marginal means of enjoyment.
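The Bonferroni-corrected post-hoc comparisons reported throughout the Results can likewise be approximated with paired t-tests whose raw p-values are multiplied by the number of comparisons. This is a hedged sketch under that standard adjustment, not the authors' code; dedicated statistics packages apply the same correction.

```python
from itertools import combinations

import numpy as np
from scipy import stats

def bonferroni_pairwise(data, labels):
    """Paired t-tests between all condition pairs, Bonferroni-adjusted.

    data: (n_subjects, k_conditions) array of ratings, one row per student.
    labels: condition names, one per column.
    Returns {(label_i, label_j): adjusted p-value}.
    """
    k = data.shape[1]
    m = k * (k - 1) // 2  # number of pairwise comparisons
    adjusted = {}
    for i, j in combinations(range(k), 2):
        t, p = stats.ttest_rel(data[:, i], data[:, j])
        # Bonferroni: multiply each raw p by the comparison count, cap at 1
        adjusted[(labels[i], labels[j])] = min(p * m, 1.0)
    return adjusted
```

With three conditions, each raw p-value is simply tripled before being compared against the .05 alpha level used in this study.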

Comprehension

Question 6: How effective were the visuals used in this lesson video at keeping your attention on learning the content? A Repeated ANOVA was conducted using the visual-representation type as the independent variable. Dependent measures were the responses to each question. Mauchly’s test of sphericity was not violated. There was a significant effect of visual representation on learner attention (F(2, 518) = 87.271; p < .001). A post-hoc test using the Bonferroni correction revealed that multimedia visual representations kept a student’s attention significantly better than PowerPoint visual representations (p < .001) or talking-head representations (p < .001). PowerPoint visual representations also kept a student’s attention significantly better than talking-head visual representations (p < .001). Table 1 shows the estimated marginal means of attention. Figure 6 shows the graph of the estimated marginal means of attention.

Question 11: How effective was the narrating voice heard in the video at keeping you focused on learning the content? A Repeated ANOVA was conducted using the visual-representation type as the independent variable. Dependent measures were the responses to each question. Mauchly’s test of sphericity was not violated. There was a significant effect of visual representation on the perception of the narration keeping students focused on learning (F(2, 518) = 3.126; p = .045). A post-hoc test using the Bonferroni correction revealed that the multimedia visual representation was perceived as significantly better at keeping students focused on learning than talking-head visual representations (p = .037), but not better than PowerPoint visual representations (p > .05). PowerPoint visual representations were not perceived as significantly better than talking-head representations at keeping students’ attention on learning (p > .05). Table 1 shows the estimated marginal means of narration focus. Figure 11 shows the graph of the estimated marginal means of the ratings of the narrating voice and focus.

Question 13: How much do you feel you remember and understand from this lesson video? A Repeated ANOVA was conducted using the visual-representation type as the independent variable. Dependent measures were the responses to each question. Mauchly’s test of sphericity was not violated. There was a significant effect of visual representation on the perception of students remembering and understanding from the lesson video (F(2, 518) = 51.026; p < .001). A post-hoc test using the Bonferroni correction revealed that students viewing the multimedia visual representation felt they remembered and understood significantly more than those viewing the talking-head visual representations (p < .001) or PowerPoint visual representations (p = .004). Students viewing PowerPoint visual representations felt they remembered and understood significantly more than students viewing the talking-head visual representation (p < .001). Table 1 shows the estimated marginal means of remembering and understanding. Figure 13 shows the graph of the estimated marginal means of the ratings of comprehension.

Holistic

Survey Question 4: Of all three videos, from which would you most prefer to learn? The frequency distribution can be seen in Figure 14.

Survey Question 5: Of all three videos, from which would you least prefer to learn? The frequency distribution can be seen in Figure 15.

Question 12: Based on this video only, how likely is it that you would recommend Quimbee to a fellow student or colleague? A Repeated ANOVA was conducted using the visual-representation type as the independent variable. Dependent measures were the responses to each question. Mauchly’s test of sphericity was violated, so the Huynh-Feldt correction was used in interpreting effects. Visual representation had a significant effect on the likelihood of recommending Quimbee to another student (F(1.892, 489.917) = 49.538; p < .001). A post-hoc test using the Bonferroni correction revealed that students viewing the multimedia visual representation were significantly more likely to recommend Quimbee than students viewing the talking-head visual representation (p < .001), and moderately significantly more likely than students viewing the PowerPoint visual representation (p = .059). Students viewing the PowerPoint visual representation were significantly more likely to recommend Quimbee than students viewing the talking-head visual representation (p < .001). Table 1 shows the estimated marginal means of recommendation likelihood. Figure 12 shows the graph of the estimated marginal means of ratings of recommendation.

Discussion and Recommendations

Overall, students preferred the multimedia representations to both the PowerPoint and talking-head representations. Most differences were significant, and those that were only moderately significant or nonsignificant still trended toward a preference for multimedia visual representations. This discussion section first states three main takeaways and recommendations and then follows up with a question-by-question discussion.

Takeaway #1: Quimbee’s students really do not enjoy talking-head videos.

One of the clearest findings of this study is that Quimbee’s students prefer virtually every other type of visual representation to talking-head visual representations. Talking-head videos consistently scored lower on satisfaction measures than either PowerPoint or multimedia visual representations, and nearly always by a significant margin. The talking-head videos were significantly worse at keeping students’ attention (Figures 6 and 11), significantly worse in terms of students feeling they remembered and understood the concepts (Figures 7 and 13), and, perhaps most importantly, significantly less enjoyable than the other visual representations (Figure 5). Additionally, students overwhelmingly chose the talking-head video as the one from which they would least prefer to learn (Figure 15). Students also seemed to have less patience for talking-head videos (Figure 3): the PowerPoint and multimedia videos seemed “just right” in length, but the talking-head videos were perceived as “too long,” even though the durations of all three visual representations were exactly the same.

This finding contradicts the results of Guo et al. (2014), who found that talking-head videos were more engaging. One potential explanation for this contrast is that Guo et al. inferred engagement from behavioral data, whereas this study explicitly asked viewers to state their preferences regarding the visual representation. Perhaps more engagement is not always a good thing: increased engagement could be due to a lack of attention to, or understanding of, concepts on first viewing. More work could be done to examine the situations or characteristics of talking-head visual representations that might lead to more engagement alongside higher satisfaction.

Our recommendation based on these results is to minimize the use of talking-head style visual representations in learning and increase the use of visuals to support the learning content.

Takeaway #2: Having some visuals is superior to having no visuals.

In a similar vein, the inclusion of visuals seemed to give learners an opportunity to better comprehend the narration (Question 7). This could be because the visuals provide some visual structure to the learning that the talking-head representation cannot. If it is difficult or impossible to create visual representations with appropriate images, then providing some form of visual structure to learners is the next-best strategy. PowerPoint visual representations, in the form of bullets or charts, promote relationships and organization, even though there is no true visual representation in the traditional sense. Advance organizers (Ausubel, 1968) and scaffolding of knowledge (citation needed) have been shown to promote the construction of initial mental models. By using appropriate PowerPoint or multimedia representations, students can build knowledge that, in turn, can lead to better learning outcomes in the classroom.

Our recommendation is to create educational videos with some type of visuals. At the very least, a PowerPoint presentation can provide a scaffold of the topic to students as they watch the video. At best, students can learn more deeply with a video that integrates appropriate visuals, as evidenced by our next finding.

Takeaway #3: Students feel they are more attentive and learn more with multimedia visual representations.

Mayer (2012) suggests that the appropriate inclusion of pictures to accompany words can yield significant improvements in recall and comprehension for learners. Though our study did not include a direct measure of recall and comprehension, it did ask how much students felt they learned and remembered from the videos. Question 13 indicates that students who viewed the multimedia representation felt they remembered and learned significantly more than students in either of the other two visual-representation conditions. This result could be partially explained by the fact that students in the multimedia condition felt the video kept their attention significantly better than those in the other two conditions. Similarly, students in the multimedia condition felt learning with their representation was significantly more enjoyable than did those in the other two conditions (Questions 5 & 6).

Additionally, visuals like those found in the multimedia condition augmented the comprehension of the narration significantly more than talking-head visuals did, though not significantly more than PowerPoint visuals (Question 11). Students felt that the visuals helped them stay focused on learning the content.

Our recommendation is to examine ways to include more relevant images into visual representations. Though PowerPoint videos are superior to talking-head videos, they are still preferred less than multimedia videos. Ample literature exists that offers suggestions for the creation of visual representations in this style (Clark & Mayer, 2016). In addition to improved and deeper learning outcomes, the inclusion of appropriate visual representations will yield more satisfied learners.

Question-by-Question Discussion

Question 1: How do you feel about the way that concepts were explained in this lesson video? Students were generally satisfied with the way the concepts were explained across all conditions. However, it appears that students felt concepts were explained better in multimedia-based lesson videos, whereas students viewed talking-head and PowerPoint visual representations as harder to understand.

Question 2: How do you feel about the examples that were used in this lesson video? As found in Question 1, students found the examples in the multimedia-based lesson videos easier to understand. Meanwhile, students viewing the examples in talking-head and PowerPoint representations found them to be more difficult. Another interesting observation is that even though the visual representations were based on the same core material, some students found the examples to be “a little easy to understand” in the multimedia visual representations. This result could suggest that although nearly the same number of students found the examples in PowerPoint and multimedia visual representations to be “just right,” those in the multimedia visual representation condition leaned more toward “easy,” while the students using the PowerPoint visual representations had sentiments that leaned more toward “hard.”

Question 3: How do you feel about the length of this lesson video? There were subtle differences among the three groups here. Students who watched the multimedia visual representations were more likely to feel that the length of the video was “just right.” An examination of the “long” category reveals a slight sentiment that both the PowerPoint and, especially, the talking-head videos felt “long” or “too long” to those students. Because the videos were based on the same scripts, one could infer that the multimedia visual representations make the videos somewhat easier to watch and therefore boost the perception that the video length is neither too long nor too short.

Question 4: How professional were the visuals contained in this lesson video? Because visual representation had no significant effect on the rating of professionalism, students in all conditions found the visuals similarly professional. Because the visuals Quimbee uses in the multimedia representation are illustrations or drawings, this is encouraging to see: students thought that the talking-head representation, which featured a well-dressed narrator, was just as professional as the multimedia representation with illustrations.

Question 5: How enjoyable did the visuals used in this lesson video make learning the concept? Visual representation had a significant effect on enjoyment of the videos. Our test revealed that multimedia visual representations were preferred significantly more than PowerPoint visual representations (p < .001) or talking-head visual representations (p < .001); PowerPoint visual representations were also preferred more than talking-head visual representations (p < .001). This is encouraging: it would be reasonable to assume that students who find an activity more enjoyable will stick with that activity longer than with activities they find less enjoyable. In this case, the visual representation participants rated most enjoyable is also the one best suited for comprehension purposes.

Question 6: How effective were the visuals used in this lesson video at keeping your attention on learning the content? Attention to the learning materials in video-based learning environments is one of the key components of better comprehension in these settings (Boucheix & Lowe, 2010). Videos that can keep a learner’s attention on relevant features improve recall and comprehension. Additionally, with the transient nature of information in multimedia learning environments, it is vital that students pay attention for the duration of the video. Students in this study were significantly more able to keep their attention on the multimedia visual representation when compared to those watching the PowerPoint and talking-head representations. PowerPoint kept the learner’s attention significantly better than talking-head representations. Therefore, the inclusion of some visual structure, as mentioned in the key takeaways, could yield better attention in video-based learning environments.

Question 7: How effective were the visuals used in this video at helping you understand the narration? The responses to this question, in light of the findings on Questions 2 and 6, support the assertion that visuals can increase perception of understanding a topic. Students rated the multimedia and PowerPoint representations significantly better at helping them understand the narration than the talking-head representations. There was a slight mean difference between multimedia and PowerPoint; though this result trended in the direction of multimedia (multimedia M = 3.75 vs. PowerPoint M = 3.57), it was nonsignificant.

Question 8: How do you feel about the organization of the content of the lesson video? There was a significant difference in perception of organization between multimedia and talking-head representations, which is a trend seen throughout this analysis. There was an additional significant difference between PowerPoint and talking-head representations. However, the difference between multimedia and PowerPoint was not significant. Because all three videos used the same narration, it can be inferred that the inclusion of some visuals assisted learners in understanding structure of the content in a way that the talking-head representation could not.

Question 9: Was the pace/speed of the explanation: Virtually all representation types had the same distribution, with most students rating the pace “just right.” Further research could investigate the inclusion of speed controls on the video and their impact on satisfaction and on perceived and actual comprehension measures.

Question 10: What was your perception of the quality of the audio heard in this lesson video? Students found the quality of the audio to be the same across all three videos. This finding makes sense, because the audio for all three visual representations was based on the narration from the talking-head video, and the audio was recorded with professional equipment. This could also be considered a consistency check, as students were strongly consistent in their ratings. Ultimately, there would be no cognitive rationale for differences across the three visual-representation types.

Question 11: How effective was the narrating voice heard in the video at keeping you focused on learning the content? Though the narration track was the same across all representations, students found the voice more effective at keeping them focused when it was paired with a multimedia representation than with a talking-head representation. The difference between talking-head and PowerPoint representations was not significant.

The comparison between the results of this question and those of Question 6 is interesting. In Question 6, students were asked to rate the effect of the visuals on their attention, a concept similar to focus. Those results showed a significant effect favoring the multimedia visual representations. Here, though the mean score for the multimedia representation was higher than for the PowerPoint representation (multimedia M = 3.86 vs. PowerPoint M = 3.745), the difference was nonsignificant. It could then be inferred that appropriately designed visual-content delivery, not the verbal delivery, is the key to keeping learners’ attention.

Question 12: Based on this video only, how likely is it that you would recommend Quimbee to a fellow student or colleague? Students watching a multimedia or PowerPoint video were more likely to recommend Quimbee to another student. There was a moderately significant difference between multimedia and PowerPoint (p = .059). Though not quite significant in the traditional statistical sense, students were more likely to recommend Quimbee after watching multimedia videos than after the other visual-representation types (multimedia M = 3.954 vs. PowerPoint M = 3.812 vs. talking-head M = 3.308).

Question 13: How much do you feel you remember and understand from this lesson video? This question directly asked students how well they felt they remembered and understood the material based on the visual representation they saw. In this case, students viewing a multimedia representation were more likely to feel they remembered and understood content from the lesson video when compared to PowerPoint and talking-head representations. Though there were no direct comprehension measures (i.e., pretest and posttest), students felt significantly better about their learning in multimedia visual representations. This finding aligns well with Mayer’s (2012) assertion that video-based learning environments that utilize appropriate visual representations alongside verbal information can improve retention and yield deeper comprehension than when the visuals are not included or not appropriately designed.

Appendix: Tables and Charts

Table 1. Estimated marginal means per visual representation.

Figure 1. Graph of the distribution of responses for Question 1.

Figure 2. Graph of the distribution of responses for Question 2.

Figure 3. Graph of the distribution of responses for Question 3.

Figure 4. Graph of the mean ratings per visual representation for Question 4.

Figure 5. Graph of the mean ratings per visual representation for Question 5.

Figure 6. Graph of the mean ratings per visual representation for Question 6.

Figure 7. Graph of the mean ratings per visual representation for Question 7.

Figure 8. Graph of the mean ratings per visual representation for Question 8.

Figure 9. Graph of the distribution of responses for Question 9.

Figure 10. Graph of the mean ratings per visual representation for Question 10.

Figure 11. Graph of the mean ratings per visual representation for Question 11.

Figure 12. Graph of the mean ratings per visual representation for Question 12.

Figure 13. Graph of the mean ratings per visual representation for Question 13.

Figure 14. Graph of the distribution of responses for Survey Question 4.

Figure 15. Graph of the distribution of responses for Survey Question 5.

References

Boucheix, J. M., & Lowe, R. K. (2010). An eye tracking comparison of external pointing cues and internal continuous cues in learning with complex animations. Learning and Instruction, 20(2), 123–135.

Clark, R. C., & Mayer, R. E. (2016). e-Learning and the Science of Instruction. John Wiley & Sons.

Danielson, R. W., Sinatra, G. M., & Kendeou, P. (2016). Augmenting the Refutation Text Effect with Analogies and Graphics. Discourse Processes, 53(5-6), 392–414.

Guo, P. J., Kim, J., & Rubin, R. (2014, March). How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the First ACM Conference on Learning @ Scale (pp. 41–50). ACM.

Mayer, R. E. (2012). Multimedia learning. Cambridge University Press.

Mayer, R. E., Bove, W., Bryman, A., & Mars, R. (1996). When less is more: Meaningful learning from visual and verbal summaries of science textbook lessons. Journal of Educational Psychology.

Schroeder, N. L. (2016). A Preliminary Investigation of the Influences of Refutation Text and Instructional Design. Technology.