
Specifications Grading in the Flipped Organic Classroom

Author(s): 

Joshua Ring, Lenoir-Rhyne University

11/03/16 to 11/05/16
Abstract: 

Specifications Grading, developed by Linda Nilson, is a system of course-long student assessment based on the division of learning objectives into clearly defined skill tests or assignments.  Each skill is evaluated at a mastery level, with opportunities for students to learn from their mistakes and then be re-evaluated on skill tests or resubmit assignments.  In this paper, the author explores the background that led him to adopt Specs Grading in his Organic Chemistry course, and then details its implementation as well as student results and feedback.

Paper: 

            I love asking questions; I think I got that from my Mom.  Throughout my education, from kindergarten through graduate school, I have always felt empowered whenever my questions were fielded in good faith by teachers, professors, and teaching assistants; I have also felt empowered whenever teachers, professors, or teaching assistants would ask me (or the class) questions, and allow the time and framework for me (or us) to figure out the answers on my (our) own. 

The first part of this paper is a narrative of the evolution of my teaching style in Organic Chemistry, arranged around a sequence of questions that I’ve asked myself over the last decade.  These questions, and my good-faith attempts to answer them, have led to my adaptation of Specifications (Specs) Grading into my flipped Organic Chemistry classroom.  Specs Grading, a system for student evaluation developed by Dr. Linda Nilson, is quite different from the classic exam-exam-exam-final method of assessing student learning (Nilson, 2014).  Therefore, I believe it is important to describe how, when I first discovered the seemingly radical system of Specs Grading, it instead felt like a natural answer to a culmination of pedagogical frustrations and questions that I had harbored all along.

The second part of this paper is a description of Specs Grading and its implementation into the first semester of my Organic Chemistry sequence, including qualitative and quantitative results from that semester. 

I have found Specs Grading to be part of the Organic Chemistry course that is authentic to me.  I hope that some of the reflective questions and possible answers that I share herein will inspire the reader to contemplate many facets of her or his own class, and share ideas to help refine our courses as a community.  I believe that as chemistry educators, we share the best job on earth, and I seek your feedback that we may hone our craft together.

 

Part I: The Road to Specifications Grading

 

While the purpose of this paper is to present Specs Grading, and not to argue for a flipped classroom, I don’t believe that I would or could have implemented Specs Grading without first flipping (both for practical reasons and for the growth of my confidence to question the paradigms under which I was educated).  Furthermore, I would not have embraced the flipped classroom without first being challenged to question the use of inquiry-guided learning (IGL) in my “lecture” course.  Therefore, I believe that a brief discussion of my discovery and use of those pedagogical approaches provides appropriate scaffolding for a discussion of Specs Grading.

 

How on earth can we expect students to correctly figure out Organic Chemistry on their own?

 

Since I began teaching full-time in 2007, a number of my previously-held notions of “what a (science) course should look like” have been, and continue to be, shaken.  This process began with long arguments with Lenoir-Rhyne’s (LR’s) then-Director of Institutional Research Dr. Ginger Bishop about Inquiry-Guided Learning (IGL) in the classroom. My main argument was that surely we can’t expect students to figure out information that took trained chemists centuries to discover! 

Even after joining an LR IGL faculty group with the goal of implementing in-class IGL in 2010, I could only see how this method could be applied to laboratory courses, where students had time to explore content that had already been covered in class.  The lecture component, in my opinion, had to be carried out in the traditional fashion. How on earth can we expect students to correctly figure out Organic Chemistry on their own? 

While I still believe that much of the information contained in an Organic Chemistry course is best delivered in some type of lecture form, I’m humbled to admit that Dr. Bishop was right.  I committed one class period in the spring of 2011 to allowing students, in groups, to discern a mechanism that we had not “covered”, but one that was reasonably within their grasp as second-semester organic students.  Twenty minutes into the class period, to my surprise, they had proposed two different, but reasonable, mechanisms.  With time to spare, I shared the two mechanisms with the class, and pushed them into the deep end of the inquiry pool by asking them to construct experiments that could provide evidence about which of the mechanisms occurs.  Again, they astounded me with their enthusiasm and ideas.  But I shouldn’t have been surprised, for I know (and previously expressed in the first paragraph of this paper) that a class period like that would have enthralled me at their age.  Clearly, students can inquire their way to advanced knowledge, given enough basic information as a guide.

Many science faculty have published amazing works on inquiry in the science classroom; I especially encourage those interested to read Organic Chemistry: A Guided Inquiry by Andre Straumanis (Straumanis, 2012).

 

How can a student be expected to tackle difficult example problems when they have just seen a concept for the first time?

 

            Inquiry takes time, and I wasn’t sure how to create more time and space for more in-class inquiry.  Furthermore, while my course delivery has always included time for students to work problems during class, the problems offered to students during the same hour in which they first saw the concepts/skills were, by necessity, quite simple (a consistent complaint on my course evaluations was “the exam problems are always harder than the problems in class”; but how could a student be expected to tackle difficult example problems when they had just seen the concept for the first time?).  In the summer of 2013, I attended the cCWCS (Chemistry Collaborations, Workshops, and Communities of Scholars) conference on Active Learning in Organic Chemistry, and was exposed to the Flipped Classroom for the first time.

In a flipped classroom, lectures are recorded and delivered before class, and the class period is spent on problems.  I could immediately envision the advantages: I could still give the students that baseline scaffolding of knowledge, yet allow time for inquiry, especially with applications to biochemistry and pharmaceutical science.  I jumped in face-first, and spent the summer of 2013 (and the rest of the 2013-2014 academic year) recording the entire year of lectures on Explain Everything and uploading them to YouTube.  The newfound freedom and flexibility in class time afforded the opportunity to offer challenging problems and allow students time to work through (and productively struggle with) these problems with my support.  I was also able to devote time to inquiry-guided lessons, generally allowing students to discover how the concepts and skills from organic chemistry could be applied to biochemical systems.  The flipped classroom allowed and encouraged IGL in a way that I never expected.

 

How could you not see what I’m asking you to be able to do?

 

            Organic Chemistry has always had a relatively high failure rate, and students often enter the course with little more than the expectation of a difficult weed-out course for potential M.D.s.  And like many courses, its true place in science education is easily missed.  I know that maturity and perspective can often only come with age.  However, I often asked myself how to communicate the importance and excitement that accompany an understanding of why and how organic molecules act and react, and how Organic Chemistry can tie together physics, chemistry, and biology in a way that few other courses can.

I had a student three years ago memorably tell me “I love the way that police officer training courses are set up: we’re given a learning objective, and then taught how to do it, practice it, and then are tested on it.”  I pointed out that I had listed the course learning objectives on the syllabus!  But to be fair, I hadn’t spent much time being explicit with students about transitions from one objective or skill to another; and the course was mostly structured around the textbook’s chapters. 

Me (grumpily): Do I really have to spell this all out for them?

Me (sheepishly): If it would help the students learn and improve perspective, is it that big of a deal?

Starting the week after that conversation, I began every class with a list of learning objectives for that class period, usually only one or two. The students were thrilled.

Me: Seriously?  How could you not see what I’m asking you to be able to do?

Yet for most students, the “learning objectives” were reduced to “the stuff on exam 2”.  And I hadn’t helped them change that viewpoint: exams in my course were still given every month or so, roughly three chapters at a time, instead of being structured around the learning objectives that I had just started explicitly sharing. 

 

 

 

What if students know exactly what benchmarks they need to hit in order to pass (or to earn an A)? 

 

When students would come to my office hours to review before exams, I could help them lump the seemingly disparate pile of skills and content into a concise list of skills that I thought was obvious, but that usually left them saying “really, that’s it?”  It struck me that if students weren’t seeing the context naturally, then why wasn’t I arranging the course around objectives to help them do so?  Certainly the feeling of being overwhelmed affects performance on exams (furthermore, I can remember entering exams sure that I understood the information presented in class, only to find that a several-page examination on it appeared so foreign that it may as well have been from a different class).  What if students know exactly what benchmarks they need to hit in order to pass (or to earn an A)?  At the same time, I was asking myself in the laboratory: why are students allowed to pass from general chemistry into organic lab without being absolutely, positively sure how to calculate percent yield?

I resolved to restructure the course around learning objectives instead of chapters; I was introduced to the concept of Backwards Design (Wiggins & McTighe, 2005), in which courses are designed around the end goals (e.g., students need to be able to quickly and correctly interconvert Lewis Dot Structures and Line-Angle Formulas).  I was also determined to assess students on fewer learning objectives at a time; intentional clarity could only help student performance, I assumed.

Yet the question remained: how could I accurately assess student knowledge in a course like that?  I researched some of the ways in which faculty assess students using Standards-Based Grading, but while I know and respect many colleagues who use that system, I struggled to combine my assessment goals with it.

 

Clearer objectives, higher standards, AND simplified grading?  Where have you been all my life?

           

In January 2015, Robert Talbert came to LR to speak about the flipped mathematics classroom (I invite you to check out his Casting Out Nines blog at http://rtalbert.org/blog/).  While it was great to hear his experiences and opinions about flipping, what truly caught my attention was an off-the-cuff remark he made to me after a session about how his math courses were organized around pass-fail learning outcome tests.  To me, this sounded like an amazing system: giving students a list of outcomes (mostly skills, some factual knowledge, some application), and letting their grades be dictated by how many of those skills were mastered.  Furthermore, there was no more partial credit; mastery means doing things right.  My mind immediately drifted to the nightmare of assigning fair partial credit, considering the myriad so-close-but-so-wrong answers that students can muster for even the simplest nomenclature problem.  Clearer objectives, higher standards, AND simplified grading?  Where have you been all my life?

Several months later, LR’s Director of the Center for Teaching and Learning, Devon Fisher, turned me on to a book published in 2014 by Dr. Linda Nilson about a system of grading called “Specifications Grading”, and over the course of a week of jury duty, I devoured the book and began adapting it to my own course (I later found out that Dr. Talbert had adapted his grading system from the same book).  In brief, Dr. Nilson’s system is arranged around clearly established objectives that are graded pass/fail (but with the passing grade set at a mastery level, instead of the commonly assumed level of basic proficiency); overall course grades are dictated by how many objectives are mastered.  Furthermore, Specs Grading includes the opportunity to retake (for tests) or redo (for assignments) failed objectives.

 

Part II: Experimental Methods for Adapting Specifications Grading into my Organic Chemistry Class:

 

            In fall 2015, I presented to the students a new approach to grading in Organic Chemistry: the first semester of the course was arranged around 22 outcomes.  Six were “Essential Outcomes” (EOs), outcomes that I considered fundamental to everything else in the first and second semesters of the course; the students had to demonstrate mastery of each of the 6 in order to have a chance to pass the course.  The remaining sixteen were “General Outcomes” (GOs), and each student’s course grade would be determined by how many of these outcomes were mastered.  Grades were assigned as follows:

 

A:         Pass 6 EOs + 15-16 GOs

A- :       Pass 6 EOs + 14 GOs

B+:       Pass 6 EOs + 13 GOs

B:         Pass 6 EOs + 11-12 GOs

B-:        Pass 6 EOs + 10 GOs

C+:       Pass 6 EOs + 9 GOs

C:         Pass 6 EOs + 7-8 GOs

C-:        Pass 6 EOs + 6 GOs

D+:       Pass 6 EOs + 5 GOs

D:         Pass 6 EOs + 3-4 GOs

D-:       Pass 6 EOs + 2 GOs

F:         Pass fewer than 6 EOs and/or fewer than 2 GOs

 

Outcome quizzes/tests, which the students began calling “quests”, were given one at a time, and were graded pass/fail with no partial credit.  Students were generally given 5 questions and needed to answer 4 of them perfectly in order to provide evidence of mastery (i.e., pass).
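
For readers who want the bookkeeping spelled out, the short Python sketch below encodes the two rules just described: the 4-of-5 threshold for passing a quest, and the EO/GO table that converts mastered outcomes into a course letter grade.  It is only an illustration of the logic as stated in this paper, not the gradebook I actually use, and the function and variable names are hypothetical.

# Illustrative sketch only (hypothetical names); it encodes the grading rules
# described above, not the actual gradebook used for the course.

# Minimum number of General Outcomes (GOs) mastered for each letter grade,
# assuming all 6 Essential Outcomes (EOs) have been passed.
GO_GRADE_SCALE = [
    (15, "A"), (14, "A-"), (13, "B+"), (11, "B"), (10, "B-"),
    (9, "C+"), (7, "C"), (6, "C-"), (5, "D+"), (3, "D"), (2, "D-"),
]

def quest_passed(correct_answers, required=4):
    """A quest demonstrates mastery only if enough answers are completely correct (typically 4 of 5)."""
    return correct_answers >= required

def course_grade(eos_passed, gos_passed):
    """Letter grade from the number of EOs and GOs mastered; all 6 EOs are required to pass."""
    if eos_passed < 6:
        return "F"
    for minimum_gos, letter in GO_GRADE_SCALE:
        if gos_passed >= minimum_gos:
            return letter
    return "F"  # all EOs passed, but fewer than 2 GOs mastered

# Example: all 6 EOs plus 12 GOs mastered earns a B; 4 of 5 correct answers passes a quest.
print(course_grade(6, 12))   # -> B
print(quest_passed(4))       # -> True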

            The outcomes were arranged and ordered as follows.  As in most textbooks, the first ten outcomes focused mainly on the nature of molecules by themselves, or on simple non-reaction relationships to other molecules:

 

EO1: Drawing Lewis Dot Structures

EO2: Interconverting Lewis Dot Structures, Condensed Formulas, and Line-Angle Structures

EO3: Using Basic Nomenclature

GO1: Identification of Hybridization, Bond Angles, and Bond Types

GO2: Application of Intermolecular Forces

GO3: Indicating Stability of Molecular Conformations

GO4: Drawing Structural Isomers

GO5: Identifying and Designating Chirality

GO6: Identifying Stereoisomers

GO7: Using Advanced Nomenclature

 

            The second half of the class began with, in my opinion, the most crucial block of skills in Organic Chemistry: understanding the stability of ions (EO4, including the introduction of resonance with intramolecular mechanism arrows) in order to begin understanding why reactions occur, predicting relative acidity and basicity based on structure (GO8, an application of ion stability, and also a simple introduction to intermolecular mechanism arrows), drawing intermolecular mechanism arrows in nucleophile-electrophile reactions (EO5), and identifying the reactive sites in molecules in order to predict reactions based on structure (EO6).  These four skills were followed by the remaining general outcomes, which focused on specifics of substitution, elimination, and addition reactions:

 

EO4: Identifying and Explaining Charge Stability

GO8: Predicting Relative Acidity and Basicity

EO5: Drawing Elementary Reaction Steps (Acid-Base, SN2, E2)

EO6: Understanding and Predicting Electron Motion

GO9: Understanding Multistep Substitution and Elimination

GO10: Predicting Reaction Mechanisms and Products (SN2/E2/SN1/E1)

GO11: Predicting Products of Advanced SN2 Reactions

GO12: Identifying Reactants and Reagents for Substitution and Elimination Reactions

GO13: Understanding Electrophilic Addition Reactions

GO14: Predicting Products of Advanced Addition Reactions

GO15: Identifying Reactants and Reagents for Addition Reactions

GO16: Biochemical Application of Organic Chemistry

 

Three class periods (essentially the traditional exam periods) were devoted to quest retakes.  Students who had failed any outcome evaluation would have another chance to learn the skill and pass a new test on that outcome; these exam periods essentially consisted of exams of varying length and content, tailor-made to fit each student’s previously exposed weaknesses.  Furthermore, students who had passed every test the first time it was offered during class had the day off.

At the beginning of the two-hour final exam period, students were given a one-hour cumulative final exam.  Final exam grades could impact the course grade slightly, with a maximum effect of approximately one grade either up or down.  Depending on the student’s grade entering the final exam, they were given a sliding scale showing the range of exam grades and their effects on the final course grade (e.g., a student entering with a C needed to score a B or A on the final exam to increase their course grade, or an F to decrease it, while scoring a C or D on the final wouldn’t affect their final grade).  Just like the quests during the semester, no partial credit was given on the final exam.

The second hour of the exam period was another allotment of time for quest retakes, exactly the same as the in-semester retake periods.  There was no limit on the number of quest attempts other than the end of the semester, nor was there any penalty for completing them later in the semester.

 

Data and Results:

 

            At the beginning of the semester, student feedback was mixed, as I would expect for a grading system so different from their expectations.  Some students felt like they were being tested too often, but several remarked that they had never “learned so much so quickly” in a class before.  Personally, I think the two comments go hand-in-hand: while students (or perhaps humans in general) are trained to study only when an exam is in the near future, this course always had an exam in the near future, and therefore staying on top of the material was necessary.  Later in the semester, students began complaining that other courses weren’t testing often enough and that those tests involved too much material (which I suppose is positive feedback for this system), and many of the students expressed how comforting it was to know that they had a second chance (or third, etc.) to master an objective.

            The one consistent piece of feedback from my colleagues, a concern I had also considered, was the question of whether many small tests would result in a decrease in long-term retention of information.  I continue to share that concern.  However, because I do not return graded final exams to students, I was able to compare my cumulative final exam grades from the fall of 2014 (traditional model of 3 exams and 1 final exam) to the final exam grades from fall 2015 (Specs Grading).  The first data column of the table below shows results from the fall 2014 final exam, both overall and for the question types that were most comparable across the two final exams.  For the data in the second column, I regraded the fall 2014 exams without allowing any partial credit, which provides a more accurate comparison to the way that the fall 2015 class (and final exam) was graded.

 

                            Fall 2014, including     Fall 2014, regraded       Fall 2015, no partial
                            partial credit (n=46)    without partial credit    credit given (n=35)

Final Exam Average          65.6%                    41.4%                     61.3%

Lewis/LA/CF*                63.7%                    28.5%                     67.8%

Naming                      71.9%                    24.7%                     61.0%

Resonance                   67.8%                    39.6%                     63.5%

Acid-Base Explanations**    68.3%                    48.1%                     50.5%

A + B → ?***                61.9%                    49.4%                     62.0%

A + ? → C****               61.0%                    42.9%                     72.5%

 

*Questions about interconversions of Lewis Structures, Line-Angle Structures, Condensed Formulas

**Questions in which students were asked to compare structures and predict relative acidity/basicity

***Questions in which students were given reactants and reagents, and asked to give the products

****Questions in which students were given reactants and products, and asked to give the reagents

 

When compared on the same no-partial-credit basis, the students in fall 2015 performed markedly better on their cumulative final exam, both overall and in most of the comparable question-type categories.  It is possible that these results are not due to the dramatically different Specs Grading style, but simply to the increased rigor that accompanied a no-partial-credit rule over the course of the entire semester.  However, I would not have implemented this increase in rigor had I not been giving the students multiple chances to demonstrate mastery.

I am also aware that the sample sizes are relatively small, but I intend to continue to collect data, and I hope to one day assimilate these data into a larger set from other classrooms in order to more accurately evaluate Specs Grading in Organic Chemistry.

 

Conclusions:

 

            I believe that the system can work well both for excellent students and for those who struggle.  I hadn’t considered this before implementing the system, but with the ability to learn from their mistakes after a failed first attempt at a test, many of the students who might otherwise flounder and give up instead found themselves seeking help from both me and other students.  The majority of those students ended up with C’s instead of F’s, and I believe that every student in the class earned their grade.  In addition, by only allowing students to pass with completely correct responses, I eliminated the tedium of partial credit in my grading, and held these students to a higher standard than ever before.

For students, failing grades represent the fact that they haven’t mastered the information yet.  The course dedicates a lot of time to the in-class quests, but I believe the time is well spent.  I always spend a few minutes providing the correct answers, and field questions from students about them.  Students who pass the quest receive immediate feedback that they are ready to move on.  More importantly, I encourage those who haven’t passed to consider the attempt a learning opportunity, and remind them that they have another chance to learn and master the outcome at hand.

The most striking benefit of this system for students, I believe, is that every student is provided with both the impetus and opportunity to learn from their mistakes. 

 

Thank you for your time, and I look forward to answering any questions that I can.

Works Cited

Wiggins, G., & McTighe, J. (2005). Understanding by Design, Expanded 2nd Edition. Association for Supervision and Curriculum Development (ASCD).

Nilson, L. (2014). Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time. Stylus Publishing.

Straumanis, A. (2012). Organic Chemistry: A Guided Inquiry for Recitation, Volumes 1 and 2. Brooks/Cole, Cengage Learning.

Comments

Joshua

What a great big undertaking! I've heard specifications grading mentioned a couple of times, but was uncertain how it might be applied in ochem (or orgo for the East Coast folks). 

My questions:

how did you decide what topics were E's and G's? 

I didn't see resonance or MO theory mentioned in your objectives; are these part of other measurables? Or did these get eliminated with "less is more"?

Certainly specifications grading could potentially be used with any text, but I am curious which one you use?

Is laboratory a part of your course? Are you grading it in a specifications manner?

Finally, what do you do in your flipped classroom other than quizzing/testing? Do you do POGIL activities?

Thanks in advance for your answers, and thanks for an intriguing paper.

Kimberley Cousins

Cal State San Bernardino

Joshua Ring

Hi Kimberley,

Thank you very much for your interest and questions; I'll do my best to go through them one at a time.  If you have any follow-up question or need more clarification, please don't hesitate to ask.

1) I tried to tease out which skills were most crucial to both success in later sections of Organic 1, and also for moving into Organic 2.  So the big question I asked myself was "What skills must every one of these students master in order to pass this class?"  I decided to start with three that are key to knowing exactly what a molecule is (and how to effectively communicate that), which are EOs 1-3:

EO1. drawing Lewis Structures (with and without Formal Charges),

EO2. interconverting Lewis Structures/Line-Angle Structures/Condensed Formulas, and

EO3. basic nomenclature (which included alkanes, cycloalkanes, and alcohols).  

Halfway through the class, when we transition into reactions, I wanted to front-load reactions with a deep understanding of why and how, in general, reactions occur.  So my EO4-6 are:

EO4. understanding stability of ions (which includes using intramolecular mechanism arrows to convert an ion into its resonance structures),

EO5. drawing intermolecular mechanistic arrows for elementary reaction steps (for this test, there are questions where I give them reactants and draw arrows and ask them to give me the products, and also questions where I give them reactants and products and ask them for the one-step mechanisms by which they occur), and

EO6. predicting reactive sites (this is, to me, the most important thing for a student to master when starting to tackle reactions: looking at a structure and being able to identify the parts of it that can be nucleophiles, bases, leaving groups, electrophiles, or acids).

I don't pretend to have decided upon the perfect set of essential outcomes, and certainly other professors will teach skills at different times (or add/remove some), but I'd love to hear opinions.  I also believe that different outcomes are probably better with different sets of students; I think that EO5 is very simple, but some students need me to hold their feet to the fire on it, or else they never quite get the arrows right and OChem 2 is a bigger struggle than it needs to be.

2) Resonance is certainly important and I cannot imagine skipping it!  As I mentioned above, that's half of the "stability of ions" essential outcome.  I decided that I'd teach resonance as if it's intramolecular electron-motion mechanisms, and therefore I don't introduce it until we're about to start on reactions and their intermolecular electron-motion mechanisms.  I don't introduce MO theory until the second semester, but I'd like to move aromaticity into the first semester, so I'm all ears for suggestions on that one.

3) I have used Joel Karty's "Organic Chemistry: Principles and Mechanisms" for the last two years.

4) Laboratory is required, but it's officially a separate course, and therefore the lab grade doesn't factor into the class grade.  I do not yet grade it in a specs manner, but I think it would be incredibly useful there, and probably much easier to implement.  Gateway Specs Grading? :)

5) Quizzing/testing and talking through the answers together is 12-15 minutes every 2-3 days... the rest of the class time is dedicated to answering questions from videos, working problems, IGL-style activities, and biochemical/pharmaceutical application examples.  I don't do official "POGIL"-style activities, but we do a good amount of inquiry group work, especially related to new outcomes that are very strongly tied to the previous skill (often I'll do those on test days, so that I can easily point out how closely related this "new stuff" is to the answers they gave on the previous outcome's quiz).  I have biochemical-application problems ready to go almost every day, but we don't end up getting to them more than once a week or so in the first semester.

Take care,

Josh

 

Gregorius

After the students take their first attempt at quiz/test and fail - is there a remediation protocol in place (do you go over what they missed, are they sent to tutorial services), or are the students expected to correct their misconceptions on their own? Are there students who then "game" the system and take the first exam just to see what the quiz/test is like and then study for it the next round?

Greg

Joshua Ring

Hi Greg!  I am fortunate to have relatively few students, so I simply invite them to come to my office hours and go through their failed outcome tests with me (but often they simply realize they didn't study hard enough, and go back and watch videos or work problems instead).  I'd imagine that with many students, having a scheduled review session about an individual outcome with a tutor would be a great way to do it.

As for retakes, I always make new tests for every make-up.  We use last year's quizzes as our practice problems, and it's very clear what they need to be able to do... so a failed first test is just a missed opportunity; I don't believe there's any real advantage to be gained.

-Josh

Joshua,

As you did, after reading Linda Nilson's book, I decided to implement specifications grading this year in our General Chemistry laboratory, hoping to improve the quality of the students' laboratory reports.  We have been doing guided inquiry labs for the last several years, and specs grading seemed to be a natural progression.  My question is: what thoughts have you given to using specifications grading in the organic lab, and how might you incorporate specs grading in that setting?

Joshua Ring

Hi Weigand, 

I plan to integrate Specs Grading into lab, but I haven't done so yet, so let me think about this a bit and I'll try to get you some ideas in the next day or so.

Thanks, Josh

 

 

Thanks for sharing your work with us, Josh.  Specifications grading is so radically different from anything I experienced as a student.  In 25+ years of teaching at the college level, you are the only faculty member I know who has adopted this approach.  Are other faculty members at Lenoir-Rhyne using specs grading?  Maybe in other disciplines?  I haven't yet read Linda Nilson's book; perhaps reading the book in its entirety would make adoption of this method seem closer to my comfort zone.  What inspired you to make this change in your teaching?  What gave you the courage to take this adventurous step?

Jennifer

Joshua Ring

Hi Jennifer!  One of the chemists with whom I work is planning to use Specs Grading in his one-semester non-majors Chemistry course soon, but to the best of my knowledge, nobody else on our campus has adopted it yet.

I think that a large part of my willingness to jump into Specs Grading came from cCWCS, particularly the Active Learning in Organic Chemistry workshop.  Seeing how many vastly different active approaches to instruction exist helped me see instruction in Organic as flexible, and as something that can benefit from a variety of styles... and, as I mentioned in my paper, that workshop is where I was first exposed to flipping.  (So, unlike my students, you receive partial credit... or blame?)

I also had a lot of internal University support to experiment with classroom techniques, and many colleagues who were willing to share ideas.  The die was cast for paradigm-questioning... Dr. Nilson's book landed in my lap at a time when I was trying to figure out how I could rearrange my class around smaller outcomes, and it just seemed like the right fit (or at least, I believe, an approach worth trying/considering).

-Josh

Greg Baxley

Hello,

Thank you for presenting an interesting paper with such an intriguing idea. You noted that your class sizes are pretty small. In a typical semester, I would have 2 classes of 50 or 75 students, plus several labs to teach. I have a few questions if you have time to answer:

What kind of scale-up problems would you anticipate in using your system with two or three times as many students?

Do you hand back the quests to each student, and if so, was it difficult to keep up with writing new quests that were of similar difficulty but different enough so that students couldn't just memorize problems from a previous quest?

Would you be willing to share a syllabus so that I could see how you presented a new and fairly complicated grading scheme to students?

Do you have any ideas of how one could blend your system into the lab, so courses with lecture/lab grades rolled into one course grade could be computed?

Thank you again,

Greg

Joshua Ring

Hi Greg,

Thanks for reading, and for your questions!

The grading has become much simpler for me (without giving any partial credit), but I do more grading with make-up quests.  I probably spend a bit less time grading than I did when I was giving the traditional 4xBigTests, but it comes in smaller chunks.  For me, the trade-off is that I create new quests every time, so I end up spending more time preparing them.  Overall, I wouldn't see any trouble with scale-up when it comes to the Specs Grading part.  However, I do end up devoting more classtime to testing, and while I think it's worthwhile in my flipped classroom, I'd be reluctant to suggest doing it without the extra time afforded by flipping.

I always hand back every quest; I think this is the most impactful part of this grading system (that they can see what they did wrong and try it again soon).  I'm pretty clear with their learning outcomes, so new versions of the quests always contain different structures but similar-styled questions, and therefore aren't too difficult to create*. 

I'm happy to share a syllabus (attached below), but I actually present this grading system to the students on day one with an interactive, discussion-style reflection on their study habits before I distribute the syllabus to them.  I tell them that Organic is a lot like a math class without a lot of numbers, then I use clickers to poll them about what they consider the best way to learn in a math class (the answer they choose by a large majority is "by working problems").  Then I ask them when it's most helpful to have the professor's help (and they choose a combination of "while I'm working problems" and "after working problems", as opposed to "during the lecture")... so I tell them what a flipped classroom means, and that their answers to those questions are a simplistic justification for why we're doing it this way.  Then I ask them when they study the hardest (and "right before a test" wins by a large margin), how they self-assess, and how they learn... in doing so, I'm able to have them paint a picture of their own education that I can steer towards the Specs Grading benefits of having clear skills to learn (with fewer at a time) and the ability to learn from their mistakes and try again.  With a big smile, I share with them that if we only study in the days leading up to a test, and every day is a day leading up to a test, then logically, we don't have to change our procrastinating ways to succeed!

Finally, I'm not quite sure how to work Specs Grading with lab (a previous commenter asked about this as well), but I think it's a great question, and while I have some ideas, I'll think about it some more and share what I've got in the next day or two.  It took quite a bit of time and reflection to integrate this style into class, so please reply to this if you have any thoughts about how to do it in lab (or class)!

*My last outcome is "biological applications of organic chemistry", and newer versions of that are completely different and require a lot of creativity/time to generate.

Syllabus from the course:

Joshua and Greg,

I have started using specs grading in our general chemistry labs this fall semester primarily to help the students produce a well thought out and written lab report. The way I am trying that this semester is to provide them with a fairly detailed rubric on what I want to see in a report for each lab that they conduct.  So within the major headings of Title, Abstract, Introduction, Procedure, Data and Conclusions, I provide suggestions on what should be included in each of those headings and sometimes questions to help guide them as they write the report.  Their reports are either "in spec" or "out of spec" based on the rubric.  This allows me to quickly assess the report and determine if the student met the specs.  If the report doesn't meet specs, I send the student an email noting the deficiencies and then give them the option of editing and resubmitting their report via email.  My grading scheme is similar to one Nilson talks about in her book, for example, 6 of 6 reports within specs earns an "A", 5 of 6 reports within specs earns a "B," etc.  I also have clicker quizzes and require the students to hand in a copy of their lab book of their days work using one of the carbonless lab books of which I get the carbon copy.  Again, I use the scheme of 6 of 6 clicker quizzes, 6 of 6 lab copies, etc.  Essentially, 6 of 6 of each of the reports, clicker quizzes and lab book copies earns them an "A" and variations for B, C, etc.  I am also assigning points for each deliverable, so in case the student ends up with some combination that doesn't match well with my A, B, C scheme, I can still assign a grade.  A total cumulative range of points earns an A, B, etc.  I am at the early stages of implementing specs grading but it seems like the students like it so far, although I have had a few comments that it puts them under quite a bit of pressure.  I am not sure where that is coming from as they have an opportunity to edit the "out of spec" report.  

Willis

Joshua Ring

Hey Willis,

Thanks for sharing, these ideas sound great.  Most of my students' grades for CHE201 are about keeping a good lab notebook, including discussions that demonstrate that they understand the hows and whys of the techniques.  

I think I'd grade technique in the second semester.  I don't know about you all, but the second semester is a lot of repetition in terms of purification, and I think I'd have specs that amount to "Do you know how to recrystallize a solid product or not (without input from anyone but your lab notebook)?", and "Can you create a logical plan for purification of an organic product, knowing the identity of the starting material, products, and likely side-products?"  There should be several opportunities to observe these in the second semester, and I have lab assistants to help watch multiple students at a time.  I usually have a lab final, but perhaps the lab final could just be a retake opportunity... that would give students powerful incentive to learn and pass technique-based specs the first time.

I can imagine that if I created the specs and discussed them on day 1 of organic 1 lab, my students would have a clearer big-picture understanding of what they're doing in lab (as a whole).

Peace,

Josh

 

Joshua, 

You mentioned that three class periods were dedicated to quest retakes and I am wondering how exactly did you work out the logistics for this? Are students essentially allowed to retake the quizzes three times? When do they take the quests for the first time? How much time do you allocate for each quest?

You hint at an evolution of the way students thought about this new approach, and I hope you don't mind me asking this, but how did that translate to student evaluations of your teaching?

Thank you for such an interesting contribution to ConfChem.

Frank

Joshua Ring

Hi Frank, I'm going to do my best to answer your questions, but not in the order you asked them (sorry!)

The average outcome is two days of classtime, and then a 10-minute quiz at the beginning of class on the third day.  For example, at the beginning of the semester, I talked to them about the grading system on day 1 (August 24th).  On day 2 (August 26th), we worked problems on how to draw Lewis Structures for neutral molecules (they were assigned a video to watch before class).  On day 3 (August 29th), we worked problems on how to draw Lewis Structures for molecules with formal charges.  On day 4 (August 31st), they were given a 10-minute quest at the beginning of class, and then I spent about 2-3 minutes working through the problems on the board for them and answering questions... and since that was their first test, reminding them about how the grading system works, and their opportunities for retakes.

The primary retake times are just like exam periods... they're just during a normal class period, and students show up and grab their "test" and have 50 minutes to complete it.  The main difference is that instead of one test in a stack, I have many stacks of retakes (re-quests?), and the students grab whichever ones they haven't passed yet to try them again.  As you can imagine, the early retake periods had few options... I'm giving a retest tomorrow and I'll be bringing 11 little stacks of retakes.  In the Specs Grading book, Dr. Nilson suggested a "token" system where the students could earn opportunities to retake outcomes, but my students have one initial chance and then additional chances during any of the later retake periods, for most outcomes.

I also give the students a few extra out-of-class opportunities to retake the Essential Outcomes (EOs; I don't know if this is necessary, but I'm still tinkering with best practices in Specs Grading).  Primarily, I want them to master those skills right away, and the best way I know to encourage that is to give them quicker opportunities to be retested (which, I think, gives them a reason to study them sooner).  Also, they know that they cannot pass the class without passing every EO, and I want them to know that they'll have ample chances to pass the EOs.

The course evaluation responses I have gotten from the students have been good!  In general, as you can imagine, they liked having smaller tests, but some complained about having too many tests.  Some students lauded the make-up periods, but some complained that there weren't enough chances to make up their tests.  Overall, there were far more positive comments than negative ones... I'm not sure if that answers your question, but I hope it gives you a general idea.

 

Josh,

Following up on Greg's earlier question about remediation: Have you found that sometimes when students struggle with one or two topics early in the semester, they become overwhelmed by trying to learn the earlier topics on top of the new material? In other words, is there sometimes a snowball effect in which early failures lead to later failures because students were working on old material at the expense of new material? 

(I realize that even without mastery "questing" this might happen to students anyway if later material builds on earlier material, but I wondered if it becomes an even bigger problem when they have to do it for things that aren't as much of a building block for future material.)

Joshua Ring

Hi Scott,

Thanks for including your second paragraph... I had found in previous semesters that there were a few students who seemed shocked when they received scores in the 20s on the first exam, and at that point, they were VERY seldom able to recover, grade-wise or content-wise.  Getting behind in Organic has always been a disaster.

One benefit that I've noticed with Specs Grading is that the students now receive summative feedback on the fourth day of class, and know very early in the semester if their study habits are sufficient or not (and whether or not they need to get a tutor or start showing up for office hours).

Another thing that's different is that, by classifying the outcomes as General or Essential, they know which material is most important and, if they're behind, which material to prioritize in their studying.  If I were teaching this course online, I wouldn't even let them move on to any of the General Outcomes until they'd passed the first three Essential Outcomes (I'm not sure how to do that in a synchronous classroom).

For instance, when we start on reactions around midterm, their ability to identify stereochemical relationships (GO4) isn’t nearly as important as the ability to perfectly draw the lone pairs and hydrogen atoms on a line-angle structure (EO2).  For a student who’s behind, I think that’s a really key distinction that they may not have been able to make on their own.

-Josh

Catherine Welder

Josh, do you also teach the Organic II course?  If so, do you plan to implement spec grading into that course as well?  If not, do you see this type of grading scheme as a viable option for Organic II? Cathy

Joshua Ring

Hey Cathy!

I do teach Organic II, and I did implement Specs Grading into it in Spring 2016 (and I will in Spring 2017). 

The second semester was nice in that they understood the grading/testing plan; however, content-wise, the class itself seemed almost too easy for the students (despite me covering the same amount of material as in previous years)... so I'm in the process of tinkering with it significantly. 

Ask me about it next year, if you will!

-Josh

Catherine Welder

Glad to hear you use specs grading in the organic II course.  You bet I'll stay in touch!

rpendarvis

Does it not create a lot of workload to have a lot of students taking tests repeatedly?  It does cover the problem of students missing tests.  Do you have to create and grade a large number of versions to combat cheating?  How do you manage the workload?

Thanks for an interesting way of looking at grading procedures.

Richard

Joshua Ring

Hi Richard, I definitely create many more evaluations than in years past, but with each evaluation being 1 page and giving no partial credit, grading is simpler.  I have experimented with grading homework, grading clicker responses, and mini-quizzes in the past, and I've eliminated all of those things... so the grading has been streamlined, but there is admittedly a lot of it to do.

When I give a quiz for the first time, I have about 40 of them to grade, which takes 15-30 minutes... so I get those done pretty easily.  During retest days, it definitely takes me a while to deal with the sheer volume; but I would estimate that grading all of the make-up quizzes on a retest day is still less time than it took me to grade a big test on a normal exam day (again, because of no partial credit being given).

Overall, I'd estimate that the total assessment workload (between creating and grading) is slightly larger than in the past, but it mostly comes in smaller chunks, and it's quite nice not having to stress so much over assigning fair partial credit to most questions for every student.

-Josh

I may have missed it, but are the evaluations multiple choice or do they require a longer answer?

Joshua Ring

Hi Patricia,

They are very rarely multiple-choice.  I'm attaching a few examples:

EO2: Interconverting Lewis Structures/Condensed Formulas/Line-Angle Structures and Functional Groups (mastery is considered demonstrated with 4 of 5 correct answers)

GO4: Classifying Isomeric Relationships (mastery is considered demonstrated with 3 of 4 correct answers)

GO6: Predicting Relative Acidity and Basicity (mastery is considered demonstrated with 4 of 5 correct answers)

I hope this helps! Perhaps I should have attached some quests to the original paper.  -Josh

Hi Josh,

Thanks for an interesting article.  Typically when I grade quizzes/exams, I try to be as generous with partial credit as possible.  I wanted to ask how difficult it is to grade more complex questions, such as a mechanism or synthetic scheme, as simply right or wrong? 

Thanks, Ashleigh

Joshua Ring

Hi Ashleigh,

I don't grade mechanisms themselves very much; I teach them (there's one outcome that's 5 questions about drawing the arrows or interpreting answers, and 4 answers have to be completely right) and use them, then mostly grade off of results/products.  I spend a whole section on giving the students structures and teaching them to predict their reactive sites (that's EO6), so the rationale/mechanisms are definitely a huge part of the understanding, but I've been treating them largely as the work that gets you to the answer, as opposed to the answers themselves.

That being said, last fall I had an outcome where they'd have to draw the mechanism for an E1/SN1 reaction (with rearrangement), and 80% of their arrows needed to be perfect, which was a bit of a pain, but it worked out.

As for synthetic schemes, I evaluate students on the one-step fill-in-the-reagent synthetic schemes twice in the first semester, once on substitution/elimination reactions and once on addition reactions (so the "A + B --> ?" and "A + ? --> C" skillsets have their own separate evaluations).  But I assume you're talking about multistep synthesis, and I have several retrosynthesis outcomes in the second semester of the course.  Last year those outcomes were evaluated with three questions each, in which I gave them a starting material and a product, and for each question I asked them to devise a method (which usually ends up being 2-4 steps) to get from point A to point Z.  They'd need to get 2 of the 3 right to pass.  As I mentioned in an earlier post to Cathy, I'm not convinced that this is the best way to evaluate them, and I'd consider the second semester to be much more of a work in progress.

Please let me know if you have any suggestions!  -Josh

 

Layne Morsch

Josh,

First, I love what you are doing. I know that I've told you that personally, but it has challenged how many of us look at this endeavor of trying to get students to learn organic chemistry.

I have a couple questions and some comments.

First, in your Charge Stability EO, do you include acid/base reactions, pKa's and analysis of leaving groups as conjugate bases of strong/weak acids?

Second, do you have any sort of TAs or SI leaders? My thought was that if you had a class helper of some kind, they could offer make-up quests one day each week. That would allow students to try to quickly return to mastering a particular EO or GO before having to go through the next several leading up to the make-up day.

As far as aromaticity goes, perhaps you could shift out the last GO on biochemical applications to the 2nd semester. I'm not sure what all you cover there, but typically that seems easier to do once they have seen some carbonyl chemistry. That would leave room to introduce aromaticity. (I moved that to the first semester of my course a couple years ago, but I left Electrophilic Aromatic Substitution reactions for the 2nd semester). 

I'm sure if each one of us made these EOs and GOs for our courses, there would be some differences. The only one I suggest you consider is whether EO1 and EO2 really could be combined into one EO of organic structure drawing. The students should have learned Lewis dot structures as part of general chemistry, and theoretically, that should just be a review at the start of organic. That could allow you to potentially add another EO, like acidity and basicity or aromaticity.

Layne

Joshua Ring

Hi Layne!  Thanks for your feedback and questions.

I treat charge stability (which is EO4) as step 1, then acid-base (which is GO6 and comes immediately afterwards) as step 2... I introduce pKas, but don't ask them to memorize them yet; instead we talk about how to use charge stability to compare and predict relative strengths of acids and bases.  Because (in simple terms that students can understand) the relative strength of an acid is determined by the relative stability of its conjugate base, I frame the Acid-Base outcome as an application of charge stability.  Then we transition right into elementary mechanisms (EO5) so that they can see how the acid-base mechanism is so similar to the SN2 mechanism.  The next outcome (EO6) is predicting reactive sites, and together we use thermodynamics (pKas) to tie in the idea that leaving groups attached to hydrogens indicate strong acids, and the same leaving groups attached to carbons indicate our electrophiles.  I'm sure there are many ways to teach these concepts, but I've found that moving resonance and acid-base reactions out of chapter 1 and into the middle of the semester really helps to build towards reactions, and makes for a nice logical flow.

I don't have any student assistants of any type, but it would be lovely!  I do have tutors that the University hires when enough students request them, and I could probably work with those students in quest design (which would help make sure that my tutors are keeping up, as well).

My biochemical applications GO doesn't actually consume class time; I give it to them in the form of a series of group take-home problem sets.  But introducing aromaticity without its reactions might not require much shuffling.  I'll definitely give it some thought!  I'm planning to move addition to the second semester and bring carbonyl reactions to the first next year, and I'll try to fit in aromaticity at the same time.

Finally, I completely agree about GOs and EOs... designing them was liberating but challenging.  I've made changes from last year to this year, and will do so again next year.  I also think we all have different sets of students, who have different experiences in general chemistry before we even see them... and so we're having to design something that feels comfortable to each of us as well as to our student sets.  I've found that spending 2 days on Lewis Structures/Formal Charges and 2 days on condensed formulas/line-angle formulas/functional-group names is almost too fast for some of my students.  But perhaps communicating that with our gen chem teachers would help?  This might sound crazy, but Specs Grading might work far better programmatically than in just one course.

Anyway, I really hope that some other faculty will think about adopting Specs Grading, and I'd love to be able to continue to discuss/refine ideas about arranging courses to help our students learn to the best of their ability!

Thanks much, Josh

Joshua Ring

I don't know how much longer these comments will be going out via email, but while they are, I want to thank you all so much for all of the questions and ideas about Specs Grading.  If you're interested in adopting Specs Grading (or thinking about how you might set up the outcomes in your class one day), please stay in touch.  I'd love to hear the outcomes that you would choose for Organic I (and II), and certainly hearing different types of backwards-design ideas can only benefit us all!

Peace and all good, Josh

Amazing approach, Josh. I would like to see it published in JCE or CERP at some point.

To put things into a bigger perspective - this is an example of mastery learning. According to Hattie's meta-analysis, it is one of the most effective learning interventions - http://visible-learning.org/hattie-ranking-influences-effect-sizes-learning-achievement/. I am delighted to see a study of mastery learning effectiveness for organic chemistry.

I'd like to see the relationship of EOs and GOs - can one GO be a part of several EOs? Also, if you use ACS exams - what % of the items there are covered by EOs and GOs?

 

 

Joshua Ring

Hi Alexey,

Yes, you're absolutely right, a number of faculty have pointed out to me that Specs grading most definitely involves a larger structure based around mastery learning.

As for the relationships between EOs and GOs, I've tried to set them up in a way that the outcomes have their own separate discrete skills, but that the EOs are largely those that are necessary for the students to master right away in order to proceed into the following sets of GOs.  I built a skill-tree when I was doing the whole backwards-design of the class, and those EOs seemed to be prerequisites for nearly everything in the course (including the second semester).

I've only used the ACS exam once (the whole-year one), and hadn't seen a copy of it until halfway through the second semester.  The vast majority of the ACS questions were covered by GOs (I think my EOs seem to be a bit too simple for their questions), but I don't have an exact percentage to share.

Best,

Josh

 

Joshua:

Sorry for the relative lateness of this comment.  But distractions have kept me elsewhere.  I do hope that this comment does get read by participants in the conference.

I found your paper quite interesting and very relevant.  While "specifications grading" appears to be a good approach to assessment, I believe that for it to work, emphasis must be placed on the proper specification of learning outcomes for the course, so that they clearly identify to the student those skills and knowledge that the student is expected to have acquired by the end of the course.

As a member of the faculty in chemical engineering (now retired) at my institution, I served for several years on the assessment committee for the department, and we were responsible for the preparation of the department for site visits by our accrediting agency, which in 2000 developed new criteria based upon outcomes achieved by our students.  Other accrediting agencies are also now requiring outcomes as part of meeting their criteria.  Generally, the agency is looking for program outcomes (students' skills and knowledge at the completion of their undergraduate program).  Program outcomes are determined by course outcomes for each course in the program.  And learning outcomes within the course are linked to the course outcomes.

The additional advantage to using learning outcomes and relevant assessment tools is that one can assess the course and instruction, as well as assessing students' accomplishments.  This would be a requirement for continued accreditation of our programs.  The "Continuous Program Improvement" process lets one look at how well the class as a whole (or close to all of the students) is achieving given learning outcomes.  And if a problem is identified, it can then be fixed, whether it involves instructional strategies, lack of appropriate prior knowledge, textbook or instructional materials, curriculum, etc.  This is also a valuable tool for aligning the content of courses within a program.

Along these lines, I might make a couple of suggestions for you (and others who adopt this approach).

1. It must be kept in mind that outcomes must be stated in measurable behavioral verbs that can be assessed.  Looking at your list of outcomes, there are three of them that use the verb "Understanding" (EO6, GO9, and GO13).  This is not a measurable behavioral verb, and thus would create problems with assessing your students' achievement of these outcomes.  Instead, for example, you could have:

For EO6, use "Predicting Electron Motion".  Understanding is not necessary here.

For the other two, "Describing" could be used in the statement of the outcome instead of "Understanding".

"Predicting" and "Describing" are valid behavioral verbs for outcomes that can be assessed.

Also, the last outcome, GO16, does not have any behavioral verb, which would make assessment quite difficult.

2. My colleague and I have run an online professional development program using videos, much as you use videos for a flipped classroom.  And he has also taught an engineering course using a flipped classroom approach.  In each case, we had the students assess and give us feedback on the videos the first couple of times we used this approach, to learn from the students whether the content was appropriate for their needs and whether the instruction was clear and relevant to the specified outcome.  You do not mention doing any assessment of the videos.  We have found that the students gave us valuable feedback that allowed us to improve the videos and resulted in improved student learning.

I hope this information proves useful to everyone.

Howard

Joshua Ring

Hi Howard,

Thank you for your response, I really appreciate your suggestions.  And yes, distractions... no need to apologize (as long as you don't hold it against me that it took me a week to respond to your post).

You make great points about the titles; some of them definitely could use some work.  A few have been modified for this year, but all of them have a more defined set of skills that are presented on the LMS to the students in the form of an Outcome Introduction (for each outcome).

For example, EO6 was changed for this semester to "Predicting Reactive Sites", and the introduction page for the students was as follows:

"In the previous module, we examined the ways that electrons CAN move in the various elementary reaction steps (given reactants and either the arrows or products), while defining the different types of reactive sites (nucleophile, electrophile, acid, base, or leaving group).

"In this module, we will look more deeply at how to predict the ways that electrons WILL move in the various elementary reaction steps (without being given either arrows or products).  To do so, we will consider the many factors affecting acid-base, substitution, addition, and elimination reactions, and focus on how ions can become more stable (and the various ways that ions can become neutral).

 

 "Learning Goals:

  • Given a chemical structure, I am able to identify: (i) acidic hydrogens (pKa <20), (ii) strong bases, (iii) strong nucleophiles, (iv) good leaving groups, and (v) strong electrophiles (particularly those suitable for SN2 reactions)"

 

Essentially, I tried to create descriptive titles for each outcome, but within each outcome, gave students 1-3 clear "I can..."/"I am able to..." skills that they need to master to pass each outcome.  

I would definitely report that I learned a lot the first time through about what exact skills I needed them to master, and I was better able to define them to the students... but some of the titles can certainly use some clarification, and I'll implement your suggestions.  Another bonus the second time through the course is that I had so many quests/re-quests from last semester that I was able to provide the students with concrete examples of the types of questions I am requiring them to answer for each outcome.

Peace and all good,

Josh