
A Multimodal Examination of Visual Problem Solving

Author(s): 

Sarah Hansen1, Felicia Moore2, Peter Gordon3
1 Columbia University Chemistry Dept., New York, New York, United States;
2 Science Education, Teachers College, New York, New York, United States;
3 Biobehavioral Sciences, Teachers College, New York, New York, United States

05/15/15 to 05/21/15
Abstract: 

Understanding how students, particularly struggling students, engage with interactive chemistry visualizations is key to designing effective interventions using these tools. A multimodal approach to understanding how students tackle visual stoichiometry problems can offer insight into the misconceptions and difficulties students have. This mixed-methods study combines eye-tracking, oral responses, drawings, and algorithmic and multiple-choice questions to investigate how a group of college General Chemistry students solve stoichiometry problems in the PhET simulation “Reactants, Products and Leftovers”. Building from the combined quantitative methods of cluster analysis and principal component analysis of the viewing patterns of a larger sample, this deeper, focused investigation into the multimodal artifacts of two struggling students revealed a richer, more complex portrait of the students’ processes. Triangulating student problem solving in this way may help to identify key misconceptions, provide opportunities for targeted support, and ultimately increase performance and decrease student frustration.

Paper: 

Introduction. Stoichiometry problems can be presented and represented in multiple ways, challenging students to apply their conceptual understandings. These concepts can be understood on an empirical level, through the composition of the system, or on a model level, using formulas and stoichiometric numbers.1 Student difficulty in solving these problems has been linked to the student’s representational focus2 and problem-solving approach.3 Explanations sometimes reveal inconsistent usage of correct concepts.4,5 By comparing the oral, symbolic, algorithmic, and gaze-pattern data of each participant solving visual chemistry problems with multiple representations, this study sought to gain insight into the problem-solving approaches used. Chemistry understanding was investigated using multiple representations, building on the triplet nature of chemical knowledge.6 Following Kress,7 we viewed data from a single assessment or modality as incomplete; an orchestration of meaning was sought using multiple data sources.

Setting, Participants, and Data Collection. Forty-one undergraduate chemistry students, enrolled at an urban Northeastern US university, volunteered to participate in this study. IRB approval was obtained before participants were recruited and data were collected. Each participant completed both a pre- and post-assessment. These assessments consisted of an online Chemistry Self Efficacy Questionnaire (CSE) (modified from a published questionnaire)8 and a 15-question multiple-choice assessment of the participant’s knowledge of the Particulate Nature of Matter (PNM). Participants then played an adapted version of the PhET chemistry problem-solving game Reactants, Products, and Leftovers,9 which was scored by the number of correct responses. The PhET game requires students to determine the number of reactant molecules when given the product molecules, or the number of product molecules when given the reactant molecules.

The PNM assessment is a combination of three assessments10,11,12 focusing on definitions and concepts related to chemical change. Following an existing protocol for validation,13 the correct answers for all assessments were determined by consensus among three general chemistry professors. After the PNM assessment, students were asked to reflect on their performance during the assessment and to evaluate three dimensions of that performance: 1) their level of confidence in understanding the questions being asked, 2) their success in solving the problems, and 3) their confidence in solving problems of a similar type. Each question asks the participant to rate their confidence on a five-point Likert-type scale, where 1 is ‘not at all confident’ and 5 is ‘completely confident’. Overall, the confidence assessment has a total score range of 3 to 15. The Representational Competency Assessment (RCA) is an adaptation and extension of an assessment used previously to investigate students’ abilities to draw and interpret diagrams of atoms and molecules.14 The version used here consisted of eight questions with oral, symbolic, and algorithmic answers. After completing the assessment, a retrospective ‘talk-aloud’ protocol provided an opportunity for each participant to describe their problem-solving approach.

Eye-tracking. As each participant played the PhET game,9 eye-tracking data were collected using a Tobii T-60 eye-tracker. This allows the researcher to detect where a participant is looking, and we assume that this corresponds to where their visual attention is directed.15 Eye-tracking alone cannot indicate why a participant looks at a particular area, though deeper analysis is possible by combining multiple modalities of analysis.
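To make the later analysis concrete, the reduction of raw fixations to area-of-interest (AOI) features can be sketched as follows. This is a minimal illustration only: the AOI names follow the six areas described in the Data Analysis section, but the pixel bounds and fixation points below are hypothetical, not the study's actual screen layout or data.

```python
# Hypothetical sketch: converting raw fixation coordinates into the
# percentage-of-fixations-per-AOI features used in the cluster analysis.
# AOI pixel bounds are illustrative, not the simulation's actual layout.

AOIS = {
    "equation_reactants":  (0, 0, 400, 150),    # (x0, y0, x1, y1)
    "equation_products":   (500, 0, 900, 150),
    "submicro_reactants":  (0, 200, 400, 500),
    "submicro_products":   (500, 200, 900, 500),
    "reactant_numbers":    (0, 550, 400, 650),
    "product_numbers":     (500, 550, 900, 650),
}

def aoi_of(x, y):
    """Return the name of the AOI containing a fixation point, or None."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def fixation_percentages(fixations):
    """fixations: list of (x, y) points for one question attempt."""
    counts = {name: 0 for name in AOIS}
    hits = 0
    for x, y in fixations:
        name = aoi_of(x, y)
        if name is not None:
            counts[name] += 1
            hits += 1
    return {name: (100.0 * c / hits if hits else 0.0)
            for name, c in counts.items()}

# Example: two fixations in the reactant-numbers area, one on the
# product numbers, one on the equation reactants.
demo = [(100, 600), (150, 610), (600, 600), (100, 50)]
pct = fixation_percentages(demo)
print(pct["reactant_numbers"])  # 50.0
```

Each question attempt then becomes a six-component percentage vector, which is what the clustering step below operates on.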

 

 

Figure 1: Questions five and six from the PNM assessment were used to separate participants into two groups for further analysis of misconceptions. Of the 21 participants who incorrectly answered one or both questions, 4 missed only one question on the pre-assessment and selected the correct answer on the post-assessment, leaving 17 participants with one or both questions answered incorrectly on the post-assessment. Of these, 11 participants answered one or both questions incorrectly and did not change their answer on the post-assessment, suggesting that these participants maintained their misconception(s) throughout the study.

Data Analysis. Each question attempt lasting longer than 2 seconds was clustered by the percentage of fixations in each of six identified areas of interest (the equation reactants, equation products, submicroscopic reactants, submicroscopic products, reactant numbers, and product numbers). We used principal component analysis to characterize and further investigate each cluster,16 and these viewing-pattern clusters were compared to the themes arising from the qualitative analysis. Misconceptions related to the concept of conservation before and after a chemical reaction were used as a lens17 to guide both the quantitative and qualitative analysis for this study. Answers to two specific questions in the PNM assessment (Figure 1) were used to group participants for further analysis. PhET scores for the two groups (those who correctly answered both questions on the pre-assessment and those who incorrectly answered one or both) were compared, and a grounded-theory, discourse-analysis approach was used to qualitatively investigate the talk-aloud protocols of participants who answered one or both of the questions incorrectly.
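A clustering-plus-PCA pipeline of the kind described above can be sketched as follows. This is not the study's actual analysis code or data: the feature matrix is fabricated (two synthetic viewing-pattern groups), the cluster count is set to two for the toy example, and a hand-rolled k-means and SVD-based PCA stand in for whatever software the authors used.

```python
import numpy as np

# Fabricated data: each row is one question attempt, described by the
# percentage of fixations in six AOIs. One group is "numbers-heavy",
# the other "submicro-heavy" (illustrative means, not study values).
rng = np.random.default_rng(0)
numbers_heavy = rng.normal([5, 5, 5, 5, 45, 35], 3, size=(20, 6))
submicro_heavy = rng.normal([5, 5, 40, 40, 5, 5], 3, size=(20, 6))
X = np.vstack([numbers_heavy, submicro_heavy])

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's-algorithm k-means on the rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each attempt to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # recompute centers, keeping the old one if a cluster empties
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=2)

# PCA via SVD of the mean-centered data, used to characterize clusters:
# project every attempt onto the first two principal components.
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T
```

In practice the cluster count would be chosen from the data (the paper reports four emergent groups for 410 attempts), and the PCA loadings in `Vt` indicate which AOIs drive each component.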

Quantitative Findings. A significant difference was observed (t = 2.975, p = 0.005) in the PhET scores of participants who correctly answered both Q5 and Q6 of the pre-PNM (n = 20) compared to participants who missed one or both of the questions (n = 22). A significant difference (t = 1.7, p = 0.04) between the two groups was also observed when comparing the total post-assessment scores (the sum of the CSE, PNM, CA, and RCA administered after playing the PhET game). During the session, the only feedback participants received was their PhET score; the pre- and post-assessments were scored after the session ended. The two participants (Table 1) with the lowest PhET game scores were selected for further qualitative analysis.
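For readers who want to see the group comparison mechanically, a pooled-variance independent-samples t statistic can be computed as below. The scores are fabricated for illustration; the study's actual data, group sizes, and choice of t-test variant are not reproduced here.

```python
import math

def t_statistic(a, b):
    """Student's t for two independent samples, pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical PhET scores for the two groups (not study data):
correct_both = [18, 20, 17, 19, 21, 18]
missed_one = [14, 13, 15, 16, 12, 14]

t = t_statistic(correct_both, missed_one)
print(round(t, 2))  # 5.8
```

The p-value then follows from the t distribution with n_a + n_b - 2 degrees of freedom (e.g. via `scipy.stats.ttest_ind`).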

Table 1: The assessment scores for the two participants selected for deeper qualitative analysis, both participants correctly answered question 5 but incorrectly answered question 6 (selecting answer option E).  Neither changed their answers between the pre- and post- assessments.

Qualitative Findings. Qualitative analysis of the talk-aloud protocols for the 22 students who incorrectly answered one or both of the two selected pre-assessment questions revealed three key themes: using the concepts of moles and molecules interchangeably, discussing concepts related to conservation, and reflecting on the difficulty of questions when a mistake was found during the talk-aloud protocol.

Profiles of Selected Students. Both Ryan and Heather demonstrated a literal translation between representations, but their frustrations (and initial CA scores) were strikingly different. Both students selected answer option E on PNM Q6, which represents a translation of coefficients into molecular composition. Ryan was a first-year engineering student in the first semester of the intensive General Chemistry Lecture course. He commented that he “used to feel very confident on chemistry, but since college, I feel that I do not get to allocate enough time to chemistry which has resulted in a drop of my performance.” Heather was a post-baccalaureate, pre-medical student in the first semester of the regular General Chemistry Lecture course. When asked if she felt chemistry was relevant to her daily life, this time she selected ‘not at all’ and stated, “It's really like a foreign language to me that I feel I can rarely use.” Her visible frustration when playing the PhET game was consistent with her earlier comments about the language of chemistry being difficult for her to work with. When interpreting problems with multiple representations, she sometimes reported the equation as depicted in the submicro image, resulting in coefficients that corresponded to the number of each molecule or atom (Figure 2), including leftover excess reactants if they were present in the diagram. When reporting the limiting reactant, she selected the reactant with the smallest number of molecules, disregarding the required mole ratio. This direct translation of the submicro drawing into an equation suggests her use of equations was developing, but she was not yet using the concepts behind the equation notation. Although she struggled with the chemistry, she continued to try to solve each problem, and when she finished the session she was positive and smiling.

 

Figure 2: Heather’s answer to question 8 from the post-RCA. Heather’s literal translation of the submicroscopic images included both the products and the leftover excess reactants in the balanced chemical equation. She reported: “And I could see that from the drawing. So, then I knew exactly what the limiting reactant was and how many moles of the excess reactant would be there.”

 

PhET Eye-tracking. Ryan’s eye-tracking gaze patterns show almost exclusive use of the numbers section of the game, only occasionally viewing the submicro representations or the equations (Figure 3). This horizontal shifting within the bottom (numbers) section of the simulation can be contrasted with other participants who shifted between different representations of a single species (resulting in a more vertical viewing pattern).

Figure 3: Ryan’s gaze patterns for two attempts: Level 1 Question 5 (answered correctly) and Level 3 Question 3 (answered incorrectly). Both gaze plots show Ryan’s heavy use of the numbers portion of the simulation with no use of the equations section. Early in the game Ryan did view the submicro section occasionally, but by the end of the game he had shifted to exclusive use of the numbers.

 

Ryan’s scrap paper adds an additional element to this analysis, since his problem solving relied heavily on accounting done on those pages. Although he did not make notations for every question, most of the questions are worked out on his scrap paper (Figure 4). Heather used a counting approach to determine the number of atoms rather than using the mole ratios, shifting her gaze within and between the submicro representations (not using the coefficients). She sometimes ‘searched’ between the representations, her fixations shifting between each of the viewing areas in a circular or ‘back and forth’ motion.

Figure 4: Ryan’s scrap paper for Level 3 Question 3; similar to his answers from the RCA, he writes the equation.

Clustered Viewing Patterns. Viewing patterns for each question attempt from each of the 41 participants (410 separate question attempts) were clustered, and four distinct groups emerged.14 Participants often shifted between viewing patterns: seven of the 41 participants had viewing patterns in all four clustered groups, while only three participants employed the same viewing pattern for all question attempts. Ryan’s viewing segments were clustered with those of other participants who relied heavily on the numbers section of the simulation. Heather’s viewing patterns shifted between three of the viewing-pattern clusters, making use of different representations for different questions. Her visible frustration during the simulation may be related to her shifting visual strategy, as reflected in her circular ‘searching’ viewing patterns and in answers that were changed, reconsidered, then returned to their original value before she submitted her answer.

Discussion and Implications. The three themes that emerged from the discourse analysis (conservation, reflection, and the use of moles versus molecules) provided insight into the PNM assessment responses and were consistent with the conceptual understandings supported by evidence from the multimodal individual student profiles. These themes have deep implications for the teaching and learning of stoichiometry problem solving. An examination of all the incorrect answers entered in the PhET game by both selected students shows a persistent pattern: they do not conserve atoms, as was also observed in their responses to the RCA and PNM assessments. This suggests they may hold a misconception about the conservation of mass (not conserving the number of atoms before and after the reaction). Moreover, they appear to link the balanced equation directly to the submicro representation.

A multimodal student assessment offers opportunities for teachers to identify student misconceptions or conflicting conceptual understandings, and then to support the student in rethinking how they approached solving these problems. Posner et al.18 highlighted the importance of Socratic teaching in supporting students in shifting towards fully engaging with new, scientifically correct concepts. The evidence reported here of students’ opportunities to catch mistakes and rethink problem-solving approaches (during the retrospective talk-aloud) supports the inclusion of discourse in chemistry learning environments, providing students with multiple opportunities to verbalize and reflect on how they interact with visual chemistry representations. The use of multimodal artifacts to support this reflection may be particularly important when student answers are inconsistent and viewing patterns reflect a searching or limited use of multiple representations. Allowing these students to reflect on their viewing patterns and discuss their algorithmic answers may be particularly useful. One option is to present students with gaze plots that contrast with their own and ask the students to identify similarities and differences between the two.

By asking students to talk through their problem-solving process and reflect on how they solved a problem, this study provided opportunities for students to reconsider how they approach these chemistry problems. Students may exhibit aspects of misconceptions while also using correct conceptions; as a result, a deeper view of how students understand and use chemistry ideas is needed to increase understanding and learning. By listening to how students solve problems, discussing their problem-solving approaches, and including gaze patterns in this discussion, educators may be better able to engage with and challenge the misconceptions students hold. Specifically, instruction could be designed to give students opportunities to reflect on their problem-solving approach using both a retrospective talk-aloud protocol and a viewing of their eye-tracking gaze patterns.

 

References.

1.      BouJaoude, S.; Barakat, H. 2000, School Sci. Rev., 81, 91-98.
2.      Arasasingham, R. D.; Taagepera, M.; Potter, F.; Longers, S. 2004, J. Chem. Educ., 81, 1517.
3.      Fach, M., de Boer, T.; Parchmann, I. 2006, Chem. Educ. Res. Pract., 8(1), 13-31.
4.      Posner, G. J.; Strike, K. A.; Hewson, P. W.; Gertzog, W. A. 1982, Sci. Educ., 66(2), 211-227.
5.      Gauchon, L.; Meheut, M. 2007, Chem. Educ. Res. Pract., 8(4), 362-375.
6.      Johnstone, A. H. 1991, J. Comp. Assisted Learning, 7(2), 75-83.
7.      Kress, G. 2001, Multimodal teaching and learning: Theories and the science classroom. Continuum: New York, NY
8.      Garcia, C. A. 2010, PhD Thesis, University of South Florida.   (Retrieved from http://scholarcommons.usf.edu/etd/1638)
9.      PhET. (2013). Retrieved from http://phet.colorado.edu.
10.  Frank, D. V. Focus on physical science; Pearson Education, Prentice Hall: Upper Saddle, NJ, 2001
11.  Mulford, D. (1996). Masters Thesis. Purdue University. (Retrieved from http://www.jce.divched.org/jcedlib/qbank/collection/CQandChP/CQs/Concept...)
12.  Modic, A. L. 2011, Masters Thesis, Montana State University, Bozeman, Montana. (Retrieved from http://scholarworks.montana.edu/xmlui/handle/1/1886)
13.  Yezierski, E. J.; Birk, J. P. 2006, J. Chem. Educ., 83(6), 954-960.
14.  Davidowitz, B.; Chittleborough, G.; Murray, E. 2010, Chem. Educ. Res. Pract., 11, 154-164.
15.  Just, M.; Carpenter, P. 1980, Psychol. Rev., 87, 329–354.
16.  Hansen, S. J. R. H. 2014, PhD Thesis, Columbia University.
17.  Saldaña, J. 2013, An introduction to codes and coding. The Coding Manual for Qualitative Researchers (2nd ed.) Sage Pubs. Ltd.: Thousand Oaks, CA. pp. 40.
18.  Posner, G. J.; Strike, K. A.; Hewson, P. W.; Gertzog, W. A. 1982, Sci. Educ., 66(2), 211-227.

Comments

Emily Moore

Hi Sarah,

Thanks so much for this contribution to ConfChem. I’m always fascinated by eye-tracking studies, and I’m particularly interested in eye-tracking studies of students using PhET simulations.
Here are a few areas you mention in your manuscript that I am curious to hear more about:

1. You mentioned that students used an adapted version of the Reactants, Products and Leftovers simulation. Can you describe more about how it was adapted for your purposes?
2. You also mention “algorithmic” patterns in your introduction. Could you tell me more about what you mean by this? Is this coming from looking at student responses to the PNM assessment, and seeing if they pick solutions that are consistent with an algorithmic approach to solving the problem, or is this something you look for in their verbal descriptions of problem solving?
3. I appreciate that your work brings up the utility of multi-modal assessments, their ability to help elucidate new perspectives on student thinking, and to triangulate some possible underlying causes for student difficulties. One of the possible downsides to collecting so many different sources of information from students is assessment fatigue. Did students work through the given tasks all in one session? If so, how long was each session, or did students have an open-ended amount of time to complete the work? Do you think the length of the various assessments could have impacted their problem solving approaches?

Thanks again for sharing some of your analysis with us.

malkayayon

Sarah, Felicia and Peter,
It is very important to perform studies like this one, to understand how students use interactive visualizations. I like the use of eye tracking. Have you checked the eye tracks of experts? I wonder if different experts have similar eye tracks, and how different they are from a novice's eye tracks.
Maybe in the future the computer will be able to track the eye movements and tell the student to "look at the chemical equation".
Thanks for the paper
Malka

Hello Malka,

Yes, eye-tracking is an excellent tool for gaining a deeper understanding of how students use these visualizations.

Thank you for the expert/novice suggestion. Although this study did not classify any participants as experts, we did have some participants who easily completed all the assessments with 100% success (this was not an issue, as we were interested in problem-solving strategy, not just success). It is interesting to note that each clustered viewing pattern included participants who were both successful and unsuccessful at solving the PhET simulation questions; that is to say, two participants used the same single representation to answer a question but one answered correctly while the other did not. I am interested in delving deeper into the logic of participants who use a single representation to solve the problem, particularly when they are successful; aspects of these strategies were reflected in their RCA talk-aloud responses.

I really like your idea of a responsive eye-tracking system that would suggest the participant consider aspects of the image they may not be viewing, I’m currently investigating a similar intervention (although it is the participant’s reflection, not the computer, that provides this feedback).

Best,

Sarah

Hello Emily,

Thank you for your thoughtful comments. I’ve found the PhET simulations to be an excellent platform for eye-tracking studies, particularly because the code is available to modify. For this study, the Reactants, Products, and Leftovers simulation was modified to remove the ‘sandwich shop’; instead, participants were oriented to the simulation using purposefully selected scenarios with the formation of ammonia and the combustion of methane. The simulation was also modified to remove the random selection of reactions, so that each participant encountered the reactions in the same order.

In addition to gaze-patterns, algorithmic responses to the PhET questions were analyzed for themes related to conservation and you are correct that this theme came from the PNM assessment responses. Revisiting conservation through a separate assessment allowed us to consider aspects of conservation related to each reactant or product separately. We also considered responses given in the RCA relevant to conservation, and found that students' responses in multiple assessments sometimes agreed and sometimes disagreed. As an instructor I find this very interesting and supportive of having students talk through their reasoning and helping them to reflect on their problem solving logic.

Regarding assessment fatigue: participants did have an open-ended amount of time to complete each step (the sessions lasted between 42 and 115 min total, including time for the consent forms and compensation paperwork). Observational notes were kept by the researcher, and any visible frustration was recorded. Frustration was definitely noted with some participants during some of the assessments, particularly with the PhET simulation; I think this is because they received feedback and knew when they were answering incorrectly, while incorrect answers on the other assessments were not corrected. However, no fatigue was noted, and many participants commented on how they found the session to be fun (they particularly liked the smiley faces in the PhET game). I don’t believe the session length impacted problem solving, particularly because the assessment type shifted and each assessment lasted between 1.5 min (a confidence assessment) and 62 min (the PhET simulation). A future direction might be using the assessments individually, with breaks, to see if fatigue would impact performance.

Best,

Sarah

Emily Moore

Thanks for your response. Regarding your adaptations to the sim, I think it's a useful thing to point out that the sims can be adapted for studies. I have conducted a few studies myself where I wish I had "un-randomized" the game questions!

I have also been thinking more about your clustering of viewing patterns, and what these viewing patterns might mean. Is there any prior work in the literature that comes to any conclusions about possible meanings of changes (or lack of changes) in viewing patterns?
You mention that 7 students had viewing patterns in all four cluster groups, and 3 students had viewing patterns in a single cluster group. Did the students with viewing patterns in all four cluster attempts answer questions less successfully than those who had viewing patterns in a single cluster group? I know this is a small N to be looking at, but I’m wondering if it is the case that perhaps more viewing patterns could mean that students are more confused, or it could be that they are being more flexible…sampling more strategies to approach different problems. Alternately, it could be that students who use a single strategy are more “expert” and “know” how to do it, or it could be that they are somewhat rigidly applying the same approach and being more flexible may be helpful for them.
Did you notice any trends in viewing patterns across time for students? Did some sample some different viewing patterns and then settle in to one or two, or did students who used more viewing patterns tend to continue changing their patterns throughout the sim use?

Thanks!

Hello Emily,

Great questions, I'm also interested in how viewing patterns shift over time and with respect to success. This is something I'm analyzing for but it is beyond the scope of this paper.

Best,

Sarah

Bob Belford

Dear Sarah, Felicia and Peter,

I have several questions I would like to ask, and please forgive my lack of expertise. On page 2 you describe the RCA (Representational Competency Assessment). Could you describe this in easy-to-understand terms, and is it multimodal?

I am also interested in learning more about the "eye tracking gaze patterns". How do you visualize these? Are they some sort of cluster plot superimposed on the simulation? Are they bar charts of the 6 areas? Obviously there is a temporal component to these; how do you visualize that? Now I may have gotten confused, but let me rephrase this. First you have a cluster plot of where people look, and then you cluster those plots into cohorts of similar behavior, right? Does that second part take into account when they looked, not just where? If you have a visualization of this data, could you share it with us, especially the "when they look" cluster, not the "where they look" cluster? (I can help you upload an image and associate it with your comment.) In the multimodal analysis, are you able to correlate the "when they look" to the oral discourse? If so, how do you do that?

It seems to me that this type of analysis would be of more value to the designer of a simulation, than the teacher. Can you expound a bit on how a teacher can use this in their practice? How do you design an activity that encourages students to reflect on their gaze patterns?

Thank you for sharing your work with us.
Sincerely,
Bob Belford

Hi Sarah, Felicia and Peter,

I found myself experimenting with the simulation, it seemed to encourage me to take shortcuts instead of using my logical step-by-step procedure for balancing equations. Solving the problems this way I found it very easy to make errors. How do you have your students approach the balancing of the equations, do they have a set procedure to follow?
Thanks,
Brian

Hello Bob,

Thanks for your great questions. The RCA is an assessment that requires students to shift between different types of representations: in some questions they are given a submicroscopic representation and asked to write the balanced equation; other questions give the equation and request a submicroscopic representation and/or a drawing; some questions request a calculated answer; and one question starts with a macroscopic image of the chemicals and requests a verbal description of the reaction. The question representations vary, as do the answer representations and modes. Some questions are calculated, some drawn, others written or described. There is a multimodal aspect to this assessment, but the multimodal analysis comes in combination with the eye-tracking data and other assessments.

Figure 3 is an example of a gaze plot: the fixations over the length of answering the question are represented in the simulation space. The eye-tracking data were segmented and analyzed per question attempt; then both the static gaze plot and the dynamic video of the fixations over the course of the attempt were analyzed. Cluster analysis was carried out on the gaze plots by identifying Areas of Interest (AOIs) and comparing all 800+ video segments by the percentage of fixations in each AOI. The temporal component is not analyzed in the clustering of each segment, but each question attempt was analyzed separately, so there is a temporal component over the course of the game play. When and where they looked was considered for the dynamic video but not for the clustering.

I agree that simulation designers may find this analysis useful but I also find this information helpful when considering how many students may be intersecting with simulations, this offers an opportunity to tailor the discussion to specific strategies the student may have employed or design activities that encourage reflection/discussion within groups of students. Getting students to think about where they are looking and why offers a chance for more purposeful interaction with the simulation.

Best,

Sarah

Yuen-ying Carpenter

Hello Sarah,

I enjoyed reading your paper, and reflecting on the differences between the two student cases you highlighted. In particular, I found it positive to look at Heather’s increased scores on both the PNM and her confidence assessment after this process.

I was hoping you could expand on a few ideas you touched on briefly in the paper:

(1) You mentioned in your response to Emily’s question, above, that you used a modified simulation where students begin by exploring the reactions for formation of ammonia and combustion of methane, before they enter the Game portion of the simulation.

Can you comment on whether the students spent much time exploring these initial cases before they tackled the Game, and whether you observed any differences during this time?

Also, were students allowed to return to these introductory cases during the Game to assist them if they were frustrated, or was the Game treated more like a traditional assessment?

Lastly, I am curious what criteria you used to decide which reactions students would be assessed on, and in what order? You mentioned that the randomization of reactions was removed — were the number of reactants or products also invariant from student to student?

(2) I was also interested to hear about the three discourse themes you highlighted. Can you expand a bit on the reflection theme - specifically, was this something that stood out during their think-aloud problem solving process on the assessments/simulation, or was this specific to the post-assessment reflection?

Can you also expand on what you mean by the theme of moles vs. molecules? Are you referring to students use (or lack of use) of the coefficients in the equation? Or are you referring to the challenge of extending this concept to moles of reactants (rather than molecules) — given that the simulation is using only molecules, students can encounter situations where there are leftover molecules of even the limiting reactant? Or something else entirely? This is certainly an intriguing area for discussion, so I am curious to hear what you observed here.

Thanks again for sharing,

Cheers,
Yuen-ying

Hello Yuen-ying,

Thanks for asking about these areas of the study.

The warm-up was constructed to orient participants to the simulation buttons and terminology, as well as to expose them to situations where the limiting reactant prevented the formation of product. This area of the simulation does not ask questions, so the interviewer requested that the participant enter specific amounts of each species and then asked how many products formed. This was a check to see if the participant was able to enter values and identify reactants versus products. The first scenario asked the participant to add five molecules of nitrogen and two molecules of hydrogen, then report how many products were formed. Next they were asked to add one more hydrogen molecule and report how many products were formed; we wanted to know that each participant had recently seen a situation where the number of reactants did not result in the formation of product. The reactions chosen for this warm-up were common reactions covered in General Chemistry courses and were removed from the simulation. The simulation reactions were chosen because of their coefficient/subscript combinations, as well as literature showing experimental evidence. The coefficients were not held constant between participants; this proved to be useful when a participant viewed the answer, wrote it down, then restarted and tried to enter the previous solutions (I think they forgot I was recording the screen). This did not happen often, but when it did, only initial attempts were analyzed.

These discourse themes apply only to the last two questions in the pre- and post-RCA (the only assessment that included a talk-aloud protocol). The theme of moles versus molecules came up with regard to molecules necessarily being integers while moles can take non-integer values. Although the simulation permitted only integer values (students were analyzing numbers of molecules), this theme was present in both the pre- and post-RCA assessments; students who kept moles as integers often did so before interacting with the simulation.
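The integer/non-integer distinction behind this theme can be sketched with a hypothetical contrast (again assuming the ammonia reaction N2 + 3 H2 → 2 NH3, which is not stated in the thread): molecule counts only react in whole events, while mole amounts can react to a fractional extent.

```python
# Hypothetical contrast, assuming N2 + 3 H2 -> 2 NH3.
def nh3_from_molecules(n2, h2):
    # Whole molecules: only complete reaction events are possible.
    events = min(n2 // 1, h2 // 3)
    return 2 * events

def nh3_from_moles(n2, h2):
    # Mole amounts: the extent of reaction can be fractional.
    extent = min(n2 / 1, h2 / 3)
    return 2 * extent

print(nh3_from_molecules(5, 2))  # -> 0 (2 H2 cannot complete an event)
print(nh3_from_moles(5.0, 2.0))  # -> ~1.33 mol NH3 from 2 mol H2
```

A student who treats moles like molecules (forcing integers) would report 0 mol of product in the second case, which is the kind of reasoning this theme captures.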

Best,

Sarah

SDWoodgate:

Dear Sarah,

I very much enjoyed your paper because (as I believe I commented earlier in response to the other paper) I know that students find the concept of limiting reactant difficult. I also enjoyed the SIM, though I am not sure that I would use the sandwiches part, because the molecules part shows the concept so effectively. I was wondering how this translated to real calculations, so I went poking around on the PhET site and found the really neat Basic Stoichiometry activity that had been posted by Chris Bires. His (or her) exercises do the translation from molecules to moles very effectively. However, that activity, and indeed the SIM, stop short of using the term "limiting reactant," which is the natural extension of this. Is there any reason for that? I notice that you have an excess, but it is the limiting reactant that controls the amount of product formed and is thus the basis for the extension calculation.

On another tack: I, like Mark, have found that the best advice to give students when balancing equations by inspection is first to balance an atom that appears in only one reactant and one product.

Hello!

I used this simulation to study viewing patterns but did not design the program; this would be a good question for the PhET simulation design team.

Best,

Sarah

Hi Sarah, Felicia and Peter,

I found myself experimenting with the simulation, and it seemed to encourage me to take shortcuts instead of using my logical step-by-step procedure for balancing equations. Solving the problems this way, I found it very easy to make errors. How do you have your students approach the balancing of the equations? Do they have a set procedure to follow?
Thanks,
Brian

Hello Brian,

For this study we intentionally did not provide set directions; we simply asked the participants to work through the game, as the goal was to see what strategies they employed. When using these simulations in an instructional environment, I find them useful for sparking reflection and class discussion. When I have my college students use a PhET simulation, individually or in pairs, I like to have them explain their reasoning and reflect on their process. I've also used these simulations in a large lecture setting by setting up situations in the simulation and asking the class questions; again, the key is to have them explain their reasoning. When working with K-12 science teachers and HS students, I've found the lessons posted on the PhET site quite helpful.

Thanks for your question,

Sarah