After attending the 2015 cCWCS workshop on Active Learning in Organic Chemistry, I started flipping my Organic Chemistry 1 & 2 courses based on the Just-in-Time Teaching (JiTT) model. After the success of implementing this strategy, I wanted to further enhance my courses by making supplemental videos, but I was intimidated by making video lectures. Instead, I began creating video keys: first for practice exams, then for the additional problems at the end of in-class guided inquiry exercises, and finally chapter content videos that can replace one day of lecture, allowing for a truly flipped model in which more formal active learning and problem solving can be done during class time. This paper discusses the different methods used to create the videos and reports on student data and feedback.
Introduction
There has recently been considerable interest in increasing active learning in the STEM classroom, including efforts to “flip” the classroom, where the content of the course is delivered outside of class and the practice of applying that content occurs during class time. Chemistry has somewhat lagged behind its counterparts in Physics and Biology on this front, with General and Physical Chemistry leading the efforts. For Organic Chemistry, there is a learning community (both in person at cCWCS ALOC workshops and symposia, and online at www.organicers.com) of faculty interested in and actively implementing active learning in Organic Chemistry (ALOC).
Figure 1. Participants and facilitators at the 2014 ALOC workshop.
In 2015, I attended the cCWCS ALOC mini-workshop in Washington, DC (Figure 1), where I became plugged in to this network and found many resources. I wanted to increase active learning in my classroom and planned to eventually fully flip my class. Video lectures are the most common way of delivering lecture content outside of the classroom (Seery, 2015), and there is even a perception that they are the only way to do so (Eichler, 2016). However, I (like many faculty interested in flipping the classroom) was intimidated by the amount of effort required to make lecture videos (Eichler, 2016), so I did not begin my own flipped-classroom journey by making video lectures.
My initial foray into making videos for Organic Chemistry started with video keys for practice exams and then expanded into other problem-solving tutorials. There have been many published reports of faculty making problem-solving videos for General Chemistry (Benedict, 2012; Richards-Babb, 2014; Ranga, 2017), Analytical Chemistry (Benedict, 2012; He, 2012), Organic Chemistry Lecture (Fautch, 2013), and Organic Chemistry Laboratory (Box, 2017; Seethaler, 2007). Results from these and other studies indicate that these how-to videos are preferred to straight recordings of lectures: students prefer them (Guo, 2014), interact with them more (Ranga, 2017; Guo, 2014), find them more helpful than other course resources such as the textbook and face-to-face exam reviews (Richards-Babb, 2014), and show improved outcomes in the courses where they watched them (Sturm-Beiss, 2013). If you, like me, speak fast and worry that this will translate poorly to video, you should be comforted by the finding of Guo (2014) that students actually engage more with videos in which the instructor speaks more quickly.
One explanation for the success of these tutorial-style videos lies in Mayer’s cognitive theory of multimedia learning, which states that “people learn more deeply from words and pictures than from words alone” (Mayer, 2009). There are three underlying, contributing factors: 1) the mind uses different channels to process audio and visual stimuli; 2) each channel has a limited capacity for the amount of information it can process; and 3) sorting and processing this information is key to learning (Mayer, 2009). Cognitive overload is possible and can hinder student learning (Mayer, 2003). Tutorial-style videos may actually reduce cognitive load for novices when compared to problem solving alone (Seery, 2015; Kalyuga, 2001). Students can pause the videos to process the information, or rewind and rewatch if they do not understand. Students often use these problem-solving videos as a heuristic model, working along with the video example or a similar type of problem, pausing to complete each step.
Technology
There is a variety of applications to choose from, and exploring them all can be overwhelming in terms of both choice and cost. The cCWCS ALOC workshop exposed me to many of these options, including the first software I used, Explain Everything. I now have access to Camtasia, which is considered the gold standard for screen capture, but that software is only available on computers in the Teaching and Learning Center, and I want to be able to create videos from my home or office. As detailed below, I have used a variety of methods to create, edit, host, and caption videos for my courses (Table 1).
Table 1. Technology used to create various videos.
Devices | Problem Solving Style | Recording | Editing | Hosting | Closed Captions
iPad | Explain Everything with stylus on iPad | Explain Everything | Annotate in YouTube (Figure 2) | YouTube | Automatic from YouTube
Touchscreen laptop (Windows 10) | PowerPoint with stylus on touchscreen laptop | Windows GameBar | None | YouTube | Automatic from YouTube
Desktop (Mac) | PowerPoint with mouse on desktop screen | QuickTime | iMovie | YouTube | Automatic from YouTube
Wacom tablet | PowerPoint with stylus on Wacom tablet | QuickTime | iMovie | YouTube | Automatic from YouTube
Desktop (Mac) | PowerPoint with animations + mouse on desktop screen | QuickTime | iMovie | YouTube | Automatic from YouTube
Desktop (Mac) | PowerPoint with animations | QuickTime | iMovie | LMS | None
First, I used an iPad (2nd generation) with the app Explain Everything to create screen-capture videos of myself solving problems on screen while talking through my mental process for approaching the problems. I uploaded PDFs of the problems to Explain Everything using Google Drive, then recorded myself talking while solving the problems by writing with a stylus on the iPad. I did not edit these videos; if I made an error, I stopped and rerecorded sections, which I could then combine. The pros of this method were that it was easy to set up using technology I already had, so the bar for entry was low. The cons of using the iPad were that, because it was old, it was slow to respond and some apps were unavailable or could not be updated, and that writing on the iPad with the stylus required careful arm positioning so that a touch from a hand or arm would not leave stray marks. The cons of using Explain Everything were that there was no in-app method for editing videos, it required a paid subscription, the app crashed frequently without saving, the videos took hours to upload, and some of the writing ended up blurry and/or pixelated after conversion to YouTube (Figure 2).

Next, I obtained a Lenovo Thinkpad X1 YOGA touchscreen laptop running Windows 10 (Figure 3). I used the native Windows GameBar to screen-capture myself writing with the provided stylus on a PowerPoint presentation of the problems while talking. The cons of this method were that the GameBar screen capture was only available in standard laptop mode, not in tablet mode (when the computer is folded the other way), which made it awkward to write on the screen; there was still the issue of stray marks from arm/hand touches; and there was the additional complication of lag time before the writing showed up on the screen.

I did not further pursue methods that entailed writing directly on screens, partly because both the iPad and the Thinkpad stopped working, and partly because I found a better solution using my Mac desktop and QuickTime screen capture. I have used three different methods for presenting the problem-solving steps in this manner. First, I used the pen embedded in the PowerPoint presentation mode (Figure 4) to write on the screen using the mouse. There was a bit of a learning curve to drawing organic structures this way, but I quickly became proficient. Next, I borrowed a Wacom tablet (Figure 5) from our Center for Teaching and Learning to see if this would allow for smoother structure drawing. However, I found writing on a separate surface where you cannot see where the pen tip is located to be frustrating, and my handwriting was actually worse this way than with the mouse on screen. Third, I began creating animations in my PowerPoint slides that I can add in as I speak, which has helped pace the recordings and gives me the flexibility to also add annotations with the pen if needed. This animation/pen method has become my “go to” for making quick video keys, as it now takes me only an hour or two to create a video, edit it, and upload it.
Figure 2. Example of blurry/pixelated writing and YouTube annotation (yellow box).
Figure 3. Lenovo Thinkpad X1 YOGA touchscreen laptop.
Figure 4. The pen in PowerPoint presentation mode can be used to draw on the presentation in a variety of colors.
Figure 5. Wacom tablet and stylus.
At first, I did not edit the recordings, simply stitching together parts of videos or rerecording entire videos if blatant errors were made while recording. I was able to add annotations to the videos in YouTube (until this feature was discontinued in May 2017), which was useful for correcting small mistakes that were missed and could not otherwise be edited out (e.g., writing the wrong order of priority of substituents for CIP rules while speaking the correct order, Figure 2). After this feature was discontinued, I began searching for an editing solution. I found it in iMovie, which came standard with my work Mac desktop computer. Using iMovie has been easy and rather intuitive, with the ability to cut out parts, edit the audio and visual portions separately, and export the videos in a variety of formats, including direct upload to YouTube.
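For readers who would rather script this kind of cutting and stitching than do it in a GUI editor like iMovie, the sketch below shows one way it could be done with Python driving ffmpeg. This is not the workflow described above; it assumes ffmpeg is installed and on the PATH, and all file names and time stamps are hypothetical.

```python
# Hypothetical helper for trimming out a recording slip and re-joining the
# good segments with ffmpeg (assumes ffmpeg is installed and on the PATH).
# A minimal sketch, not the author's workflow; file names and times are made up.
import subprocess
from pathlib import Path

def cut_segment(source: str, start: str, end: str, out: str) -> None:
    """Copy the portion of `source` between `start` and `end` (HH:MM:SS)."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", source, "-ss", start, "-to", end,
         "-c", "copy", out],
        check=True,
    )

def join_segments(parts: list[str], out: str) -> None:
    """Concatenate clips that share the same codec and resolution."""
    listing = Path("parts.txt")
    listing.write_text("".join(f"file '{p}'\n" for p in parts))
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", str(listing),
         "-c", "copy", out],
        check=True,
    )

if __name__ == "__main__":
    # Keep everything before the slip in take 1 and all of take 2, then join.
    cut_segment("take1.mov", "00:00:00", "00:04:10", "part1.mov")
    cut_segment("take2.mov", "00:00:00", "00:06:30", "part2.mov")
    join_segments(["part1.mov", "part2.mov"], "ch10_key_final.mov")
```

Because the segments are stream-copied rather than re-encoded, the cuts land on keyframes, so the trim points may shift by a second or so; re-encoding would give frame-accurate cuts at the cost of a longer export.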
In general, I hosted the videos on my YouTube channel through my Armstrong State University account (sarah.zingales@armstrong.edu). However, when Armstrong State University and Georgia Southern University were consolidated, I was notified that I was going to lose access to my YouTube channel and was advised to create a “brand account” that contained my existing videos (ZingalesChemistry). The management of that brand account could then be transferred to my new Georgia Southern account (szingales@georgiasouthern.edu). Throughout this time, I did not use a script for my speaking and relied on YouTube’s native closed captioning system to create the closed captions (which are required for ADA compliance for educational videos). This required a lot of effort on the back end to correct the chemistry-specific words (Figure 6A). Unfortunately, YouTube disabled my brand account ZingalesChemistry (Figure 6B). I no longer have access to the account or the videos hosted there, but I can still link students to the content. Despite my repeatedly contacting YouTube about this issue, they have not responded to my requests to explain why the account was disabled, and it has not been resolved. Since that time, I have posted videos directly to our LMS (Desire2Learn), which unfortunately does not have a native closed captioning system.
Figure 6. A) YouTube automatic captions for an example video. B) Notification of disabled YouTube account.
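When the hosting platform provides no automatic captioning, one low-tech option is to write the captions as a sidecar WebVTT file and upload it alongside the video. The sketch below generates a minimal .vtt file; the cue times and text are hypothetical, and whether a given LMS accepts an uploaded sidecar caption file is an assumption to verify with your local configuration.

```python
# A minimal sketch for producing a WebVTT caption file by hand when the
# hosting platform has no automatic captioning. Cue times and text below
# are hypothetical placeholders, not captions from any actual video.
cues = [
    ("00:00:00.000", "00:00:06.000",
     "Welcome to the Chapter 10 practice exam key."),
    ("00:00:06.000", "00:00:13.500",
     "First, identify the electrophilic carbon in the alkyl halide."),
]

with open("ch10_key.vtt", "w", encoding="utf-8") as f:
    f.write("WEBVTT\n\n")                       # required file header
    for i, (start, end, text) in enumerate(cues, 1):
        f.write(f"{i}\n{start} --> {end}\n{text}\n\n")  # one cue per block
```

Writing captions this way also sidesteps the chemistry-vocabulary errors of automatic captioning, at the cost of typing the transcript yourself.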
Overall, I have made 23 videos for Organic Chemistry 1 and 2, covering lecture and laboratory topics (Table 2): 20 problem-solving videos and three full lecture videos. The videos were created at three main points. The first was Fall 2016, with the four Organic 1 practice exam videos. The next was Spring 2018, with nine new videos created to align with the ten “big ideas” covered in Organic 1 (visualizing organic structures, acid/base chemistry, functional groups and naming, chairs, stereochemistry, reaction types and arrows, alkene reactions and mechanisms, alkyne reactions and synthesis, SN1/SN2/E1/E2, and radical halogenation). In Fall 2018, I began making videos for Organic 2, including three practice exam keys, three problem set keys, and one full lecture video.
Table 2. Videos. a) data not available due to suspension of YouTube account; b) only used for one semester so no cumulative data; c) no data because video was never assigned to a class, but used for supplement/students who were absent that day/created for hurricane preparedness; d) videos were posted directly to LMS after YouTube account was suspended
Videos | Length | Initial Views | Cumulative Views
4 video keys for practice exams for Organic 1 | 25:34-35:12 (avg 28:55) | >70 each the first semester created (54 students enrolled) | 206, 238, 218, 202
2 quiz keys for Organic 1 | 4:29, 6:39 | NA^a | 9, 20
2 online homework word problem assignment keys (ch 11 word, ch 8 word) | 3:20, 3:53 | NA^a | 9, 3
6 keys for in-class activities (ch 11 practice, drawing stereoisomers, chairs, drawing organic structures, stereochem practice, ch 9 synthesis^d) | 11:17, 7:05, 6:48, 15:44, 14:39, 8:53 | NA^a | 20, 61, 30, 32, 50, NA^b
1 entire chapter lecture for Organic 1 (ch 10 lecture) | 21:35 | NA^a | 51
3 video keys for practice exams for Organic 2 | 20:00, 18:14, 29:55 | 31, 29, 11 | NA^b
3 video keys for problem sets for Organic 2 (PS 1, PS 2, PS 3^d) | 20:28, 11:33, 8:14 | 13, 13, NA^a | NA^b
1 entire chapter lecture for Organic 2 (ch 24 lecture^d) | 18:32 | 10 | NA^b
1 lab lecture for spectroscopy (NMR) | 47:41 | NA^c | 13
Student Responses
After the first semester with only the practice exam videos (Fall 2016), students mentioned the videos multiple times in response to the end-of-year survey question “Aspects Greatest to Learning” and rated them as very useful (Figure 7A).
In Spring 2018, when most of the Organic Chemistry 1 homework and in-class activity keys were created and implemented, along with the final assessment, students were asked two additional questions at the end of the semester: 1) Did you watch the videos? and 2) If yes, what did you think? If no, why not? Of the 36 respondents, 24 had watched the videos and eight had not (four did not respond to these questions). Student responses were mixed, but mostly positive (Figure 7B). Students were also given the Attitude toward the Subject of Chemistry Inventory Version 2 (ASCIv2) (Xu, 2011) on the first day of class and on the day of the final exam. For seven of the eight attitudes surveyed, students reported more negative feelings toward chemistry at the end of the course (Figure 8A). The only attitude that became more positive (a 3.5% increase) was the feeling that chemistry is not challenging. These results could reflect the students’ feelings about the ACS exam rather than about the course as a whole, and in the future I will give the assessment before the final instead of after. Further breaking down the end-of-course responses by whether the students had watched the videos was also very insightful. In every category, the students who watched the videos reported a more positive attitude toward chemistry, with the percent difference heavily skewed toward the negative attitudes for the students who did not watch the videos (Figure 8B).
There was no significant change in the overall ACS score for the class after the addition of the videos in Spring 2018 (a raw score of 34 compared to the average ACS raw score of 33.25 for the previous four semesters). The ACS questions were coded for the same ten “big ideas” (visualizing organic structures, acid/base chemistry, functional groups and naming, chairs, stereochemistry, reaction types and arrows, alkene reactions and mechanisms, alkyne reactions and synthesis, SN1/SN2/E1/E2, and radical halogenation). Six of these big ideas were covered in one of the new videos created in 2018 (underlined in the list above). For five of these (all except radical halogenation), there was a significant increase in the number of correct answers on the ACS compared to previous years (46%, 47%, 66%, 48%, 53%, respectively). The radical halogenation idea was covered in the chapter 10 lecture video and was not part of a problem-solving video like the rest of the questions.
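The paper does not specify how significance was assessed; purely as an illustration, the sketch below shows how a per-topic comparison of correct ACS answers before and after the videos could be run as a two-proportion z-test. The counts are hypothetical placeholders, not the class data.

```python
# Illustrative two-proportion z-test for a per-"big idea" ACS comparison.
# Counts are HYPOTHETICAL placeholders; this is not the author's analysis.
from math import sqrt, erfc

def two_proportion_z(correct_new, n_new, correct_old, n_old):
    """Compare fraction correct after the videos vs. before."""
    p1, p2 = correct_new / n_new, correct_old / n_old
    p_pool = (correct_new + correct_old) / (n_new + n_old)   # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_new + 1 / n_old))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))   # two-sided p-value for a standard normal

# e.g., a stereochemistry item: 30/45 correct after the videos vs 80/180 before
z, p = two_proportion_z(30, 45, 80, 180)
print(f"z = {z:.2f}, p = {p:.3f}")
```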
Figure 7. A) Fall 2016 data from only the four practice exam key videos. Word cloud generated by tagcrowd.com using student responses to the question “aspects greatest to learning,” along with three responses that mentioned the videos. B) Spring 2018 data from the nine additional problem-solving videos. Word cloud generated by tagcrowd.com using student responses to the question “If you watched the videos, what did you think?” along with five responses.
Figure 8. ASCIv2 results. A) Beginning of the course (grey) vs end of the course (blue) responses. B) End of course responses (grey) comparing students who did not watch the videos (red) to students who did watch the videos (green). The students who did watch the videos reported much more positive attitudes toward chemistry.
References
Benedict, L., and Pence, H. E. (2012) Teaching chemistry using student-created videos and photo blogs accessed with smartphones and two-dimensional barcodes. Journal of Chemical Education. 89, 492-496.
Box, M. C., Dunnagan, C. L., Hirsh, L. A. S., Cherry, C. R., Christianson, K. A., Gibson, R. J., Wolfe, M. I., and Gallardo-Williams, M. T. (2017) Qualitative and quantitative evaluation of three types of student-generated videos as instructional support in organic chemistry laboratories. Journal of Chemical Education. 94, 164-170.
Eichler, J. F., and Peeples, J. (2016) Flipped classroom modules for large enrollment general chemistry courses: A low barrier approach to increase active learning and improve student grades. Chemistry Education Research and Practice. 17, 197-208.
Guo, P., Kim, J., and Rubin, R. (2014) How video production affects student engagement: An empirical study of MOOC videos. Proceedings of the First ACM Conference on Learning @ Scale. 41-50.
He, Y., Swenson, S., and Lents, N. (2012) Online video tutorials increase learning of difficult concepts in an undergraduate analytical chemistry course. Journal of Chemical Education. 89, 1128-1132.
Kalyuga, S., Chandler, P., Tuovinen, J., and Sweller, J. (2001) When problem solving is superior to studying worked examples. Journal of Educational Psychology. 93, 579-588.
Mayer, R. E. (2009) Multimedia Learning. 2nd ed., Cambridge University Press, New York, NY.
Mayer, R. E., and Moreno, R. (2003) Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist. 38, 43-52.
Ranga, J. S. (2017) Customized videos on a YouTube channel: A beyond the classroom teaching and learning platform for general chemistry courses. Journal of Chemical Education. 94, 867-872.
Ranga, J. S. (2018) Multipurpose use of explain everything iPad app for teaching chemistry courses. Journal of Chemical Education. 95, 895-898.
Richards-Babb, M., Curtis, R., Smith, V. J., and Xu, M. (2014) Problem solving videos for general chemistry review: Students’ perceptions and use patterns. Journal of Chemical Education. 91, 1796-1803.
Seery, M. K. (2015) Flipped learning in higher education chemistry: Emerging trends and potential directions. Chemistry Education Research and Practice. 16, 758-768.
Seethaler, S. (2007) Organic Chemistry For The YouTube Generation. UCSD News Center https://ucsdnews.ucsd.edu/archive/newsrel/science/12-07sorgovideoSS-.asp
Sturm-Beiss, R. (2013) The efficacy of online exam-review sessions: Reaching both high- and low-performing students. Journal of Online Learning and Teaching. 9, 431.
Xu, X., and Lewis, J. E. (2011) Refinement of a Chemistry Attitude Measure for College Students. Journal of Chemical Education. 88, 561-568.
Comments
Passive consumption?
Over the years I have made videos using Camtasia as well (great software, BTW), and I do have evidence that students are watching them. Most of these videos are for when I have run out of class time to finish a section of my workbook. I tend to give a brief lecture on some material, give an example, and then provide a new example. In the new example I tell students to pause the video and do the problem themselves. Unfortunately, there is no way to know if my students are just watching passively or actually engaging with the problems. When I ask, they sheepishly say they just watch me work the second example. This lets me know they are missing the point of the activity, and that these videos, which take a chunk of time to produce, may not be worth my time. Editing out errors and moving the videos to various delivery systems is a big investment.
My institution has been using Panopto for lecture capture in classrooms, but I never use the service as my classroom is very much an active learning environment. My colleagues have found that students tend to cram videos before exams, and we see an exponential increase in viewing in the few days before an exam (akin to binge watching on Netflix?). Sometimes this is to check whether they have their notes correct from class, but much of the time it is just to rehear the lecture.
You bring up some very interesting points about ADA compliance. More institutions are finding they are having issues with this. You have explored closed captioning, but how do we meet the needs for visually impaired students?
I have also explored using the quizzing tools in Camtasia and Moodle to not let the student continue with a video until they answer a question. This might be an approach that would help make the viewing of videos less passive.
I think videos can be positive as you have shown, but I need to figure out how to ensure students are getting the right help from them.
Thanks for the article,
Ehren
Passive consumption
Hey Ehren, thanks for your comments!
The passive watching of videos is also something I worry about. From talking with students, only about half use the videos as I intend – practicing a problem first, then looking at the video key to see how to solve it if they got the problem wrong or got stuck. The other half just watch. I will definitely be looking into adding quizzes in the future to make the videos more interactive.
As for ADA compliance for visually impaired students, I have struggled with proper “alternative text” for images in my PowerPoint slides. Is simply saying “this is an electrostatic potential map of methoxide with the negative charge on the oxygen” sufficient? Do I need to fully write out the mechanism in words? How can I describe R/S enantiomers without images? I recently read an article that discusses a lot of the issues surrounding ADA compliance and digital content (https://er.educause.edu/articles/2018/9/universal-design-for-learning-and-digital-accessibility-compatible-partners-or-a-conflicted-marriage) that also offers recommendations for how to use both best practices for learning/instruction and make sure they are ADA compliant. Basically, do your best, start slow, use freely available resources, and work with disability services and centers for teaching to find out what they can help you with.
Best,
Sarah
How the Brain Processes Info and Solves Problems
Dr. Zingales –
I appreciated seeing your references to the work of Richard E. Mayer, a cognitive psychologist who is a leader in the study of how the brain most effectively learns from multi-media presentations.
You wrote that for the audio and visual channels the brain uses to process information during problem solving, “each channel has a limited capacity for the amount of information it can process.” Since Dr. Mayer wrote that in 2009, the scientists who study how the brain works have come around to saying it differently, and the difference, I think, has relevance to both teaching organic chemistry and flipping.
The brain solves problems in organic chemistry and everything else, according to the cognitive science consensus, in structures termed “working memory.” Working memory has an unlimited capacity to hold information it can quickly recall from the brain’s well organized long-term memory, but working memory has a very “limited capacity” when processing what they call “novel” information – all that has not been previously well memorized.
Writing in the (open access) Winter 2012 American Educator, Clark, Sweller, and Kirschner summarize:
“The limitations in working memory apply to new, to-be-learned information (that has not been stored in long-term-memory). When dealing with previously learned, organized information stored in long-term-memory, these limitations disappear.” (AE 2012 p. 9)
So, during a video or live lecture or while reading a text, if the information is recallable from previous memorization, it causes no “cognitive load.” If the information is not quickly recallable, the working memory where you are thinking fills up quickly, and once it is full, newly added information tends to bump out previously stored information, leading to confusion during problem solving.
In the article quoted above, this “bumping out” process is explained. In a paper I contributed to a previous ConfChem, at https://confchem.ccce.divched.org/content/2017fallconfchemp8 , there is a section on “cognitive architecture” with additional references on the impact of “bumping out” on student problem solving.
What does this mean for teaching organic problem solving?
Science says when students solve problems or digest content, to avoid overloading working memory, nearly all the knowledge involved must be recallable from prior storage in long-term memory. This argues that whenever a new topic is introduced, to make understanding possible, the instructor should identify all the new fundamentals, give them to the students in the format of tables or flashcards, and quiz on them two or so days later.
This moves the learning students can do on their own to time outside of class. That’s the value of flipping. When the fundamentals are recallable, the student is ready to use them to solve problems. And solving problems using new and recallable information together is how the brain wires its conceptual frameworks.
Which argues the formula for learning organic is “memorize, memorize, apply, apply,” in that order.
Have them do the flashcards, then watch the videos and solve tough problems with instructor guidance.
Having a brain that is very limited when using non-memorized information seems to be the evolutionary price we paid to guarantee all human children learn the incredible complexity of human speech automatically. Speech is learned instinctively, due to our natural-selection programming. The learning of chem and math, being non-instinctive, is tough.
Advocating for the M-word may start an argument among contemporary organic instructors. I don’t like what science says about the necessity of thorough memorization as a foundation for math and science learning. But as science instructors, I don’t think we can ignore the consensus of the experts who study what the human brain can and cannot do during reasoning.
Again, I think Dr. Zingales’s references to the research summaries of Dr. Mayer are a valuable contribution to our thinking about how students learn from our lectures, boardwork, texts, and videos, in class and during study time.
-- rick nelson