P652: Immediate answer-until-correct feedback in chemistry testing

Author: Jamie L. Schneider, University of Wisconsin-River Falls, USA

Co-Author: Kristen L. Murphy and Shalini Srinivasan, University of Wisconsin-Milwaukee, USA; Arunendu Chatterjee, University of Wisconsin-River Falls, USA

Date: 8/5/14

Time: 5:15 PM–6:30 PM

Room: LIB

Related Symposium: S33

Instructors often employ individual assessments (tests and quizzes) in multiple-choice formats to evaluate student content knowledge. Delayed feedback mechanisms are commonly used, some of which are non-corrective (scores on exams) and some of which are corrective (marks next to each incorrect question, with access to answer keys). Reform efforts in chemistry testing have largely focused on types of questions (conceptual vs. algorithmic) rather than on feedback mechanisms to improve student learning. Unit exams are often described as summative assessments; however, they have the potential to serve as formative assessments in courses with a circular curricular structure. Stated differently, improvement of skills on early unit exams could improve student performance on subsequent unit exams and on a truly summative cumulative final exam. Our research aims to gather evidence about the effects of incorporating methods of feedback into multiple-choice exams in general chemistry and to offer suggestions for optimizing that feedback to promote student learning. We collected testing data using traditional answer forms and Immediate Feedback Assessment Technique (IF-AT) forms, both of which are paper-based, classroom-accessible multiple-choice exam response options. We will present the first stages of this NSF-supported project (DUE 1140914), which include the development of two algorithmically similar general chemistry tests and initial data on delayed non-corrective versus immediate corrective feedback conditions. Feedback effectiveness will be presented through changes in student performance on repeat testing and changes in the degree of correlation between confidence and performance on repeat testing.
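For illustration only, here is a minimal sketch of how the confidence-performance correlation mentioned above might be computed for first and repeat test administrations. The per-student scores and confidence ratings are fabricated placeholders, and the use of a Pearson correlation is an assumption; the abstract does not specify the authors' statistical method.

```python
# Hypothetical sketch: correlating student scores with self-reported
# confidence on first and repeat administrations of a test. The data and
# the choice of Pearson correlation are illustrative assumptions, not the
# authors' actual analysis.
from scipy.stats import pearsonr

# One tuple per student: (score, self-reported confidence), both on 0-100 scales.
first_test = [(62, 80), (75, 70), (48, 90), (88, 85), (55, 60)]
repeat_test = [(70, 75), (82, 80), (60, 70), (91, 90), (63, 65)]

def confidence_calibration(results):
    """Return Pearson r (and p-value) between scores and confidence ratings."""
    scores = [score for score, _ in results]
    confidence = [conf for _, conf in results]
    return pearsonr(scores, confidence)

r_first, _ = confidence_calibration(first_test)
r_repeat, _ = confidence_calibration(repeat_test)
# A rise in r from first to repeat testing would suggest that feedback
# improved calibration (confidence more closely tracking performance).
print(f"first test r = {r_first:.2f}, repeat test r = {r_repeat:.2f}")
```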

P522: Investigating students’ conceptual boundaries of scale

Author: Jaclyn Trate, University of Wisconsin-Milwaukee, USA

Co-Author: Anja Blecking, Peter Geissinger and Kristen Murphy, University of Wisconsin-Milwaukee, USA

Date: 8/5/14

Time: 3:05 PM–3:25 PM

Room: LTT 103

Related Symposium: S13

The American Association for the Advancement of Science (AAAS) has outlined four themes that define science literacy: systems, models, constancy and change, and scale. More recently, the National Research Council released A Framework for K-12 Science Education, which includes "Scale, Proportion, and Quantity." Our research has already shown that scale literacy is a better predictor of success in a general chemistry course than traditional measures, and integrating scale as a theme in the undergraduate general chemistry curriculum has been accomplished through a variety of methods. Of particular interest was developing a laboratory sequence that not only helped students increase their knowledge of scale concepts but also gave feedback on the conceptual boundaries of scale held by students. One activity adapted from the work of Gail Jones, which specifically targeted these goals, was trialed in a course-wide experiment. In this activity, students created "bins" for sorting objects spanning a wide range of sizes and were then given 20 cards containing the names of objects to sort into their bins. The preliminary data collected from this activity show that students frequently operate within a very narrow range of scale, typically centered around the height of an adult. Additionally, students often lumped all nonvisible items into a single bin, ignoring the many orders of magnitude separating these objects. Finally, when asked to place the items in order within their bins, students struggled to correctly order the nonvisible items. The analysis of this activity and the implications of these findings will be discussed.
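As a rough illustration of the orders-of-magnitude point above, the sketch below bins a few objects by the order of magnitude of their approximate size. The object list and sizes are illustrative assumptions, not the 20 cards used in the study.

```python
# Hypothetical sketch of the scale-binning idea: approximate sizes (in
# meters) for objects like those students might sort, binned by order of
# magnitude. All size values are rough illustrative estimates.
import math

object_sizes_m = {
    "water molecule": 3e-10,
    "red blood cell": 8e-6,
    "ant": 4e-3,
    "adult human": 1.7,
    "Mount Everest": 8.8e3,
    "Earth diameter": 1.3e7,
}

# Bin each object by the floor of log10 of its size.
bins = {}
for name, size in object_sizes_m.items():
    magnitude = math.floor(math.log10(size))
    bins.setdefault(magnitude, []).append(name)

for magnitude in sorted(bins):
    print(f"10^{magnitude} m: {', '.join(bins[magnitude])}")

# Note: the water molecule and the red blood cell, the kind of "nonvisible"
# items students tended to lump together, differ by about four orders of
# magnitude in this illustrative data.
```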

P317: Items that make the test: Different types of assessment items, what they can tell us, and how they have evolved over time

Author: Kristen Murphy, University of Wisconsin-Milwaukee, USA

Co-Author: Thomas Holme, Iowa State University, USA

Date: 8/4/14

Time: 3:40 PM–4:00 PM

Room: LOH 164

Related Symposium: S25

Assessments are regularly used to make judgments about what students know, and these judgments can impact course grades, program progress, or career intentions. Therefore, the individual items that combine to produce these assessments must be valid and reliable. The ACS Examinations Institute has been producing assessment items for nationally normed assessments for over 75 years. The process utilized by the Exams Institute involves multiple components to produce assessment items that provide information about what students may know. These items have evolved over time, and some of the earlier item types will be presented, as well as the newest types of items now in use on electronic exams. In addition to different types of assessment items, a single item type can be designed for a specific content area as either conceptual or algorithmic. Items can also be collectively designed around a central problem or scenario. Implications for test development by individual instructors, as well as for utilizing assessment data to make judgments about what students may know, will also be discussed.

P157: Development and preliminary testing of a persistence instrument: Measuring outcome expectations

Author: Shalini Srinivasan, University of Wisconsin-Milwaukee, USA

Co-Author: Kristen Murphy, University of Wisconsin-Milwaukee, USA

Date: 8/4/14

Time: 11:50 AM–12:10 PM

Room: LTT 103

Related Symposium: S13

Research on Social Cognitive Career Theory (SCCT) has focused on formulating relationships among constructs that affect college student performance and persistence. Exploring these relationships and their impact on persistence is of critical importance in STEM domains, where a crucial need exists to increase and sustain the number of students pursuing STEM degrees. Two key constructs have emerged from studies on SCCT: self-efficacy and outcome expectations. While the latter was incorporated into SCCT as a distinct construct, most studies have placed emphasis on self-efficacy, with minimal attention given to the measurement of outcome expectations. Thus, understanding this construct and developing a psychometrically sound instrument to measure it would result in a more comprehensive and robust model for testing persistence. Outcome expectations (viewed as if-then statements) are judgments of whether a particular course of action will produce a desirable outcome. The items in our instrument were developed using existing measures of cognitive/career outcome expectations and Bandura's three forms of outcome expectations (physical, social, and self-evaluative). These items were tailored to be domain-specific; students were asked to provide a level of agreement with each statement. Exploratory factor analysis was used to analyze the preliminary results. The foci of this talk will be a brief description of the instrument's development and a discussion of our results to date. Ongoing refinements and further administrations of this instrument, in conjunction with our existing chemistry self-efficacy scale, will allow for the validation and testing of a complete persistence instrument for students pursuing STEM degrees.
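The abstract names exploratory factor analysis but not the software or settings used. Below is a minimal sketch using scikit-learn's FactorAnalysis with varimax rotation on simulated Likert-style responses, assuming three latent factors corresponding to Bandura's three forms of outcome expectations; the data are randomly generated and none of this reflects the actual study data or analysis pipeline.

```python
# Minimal EFA sketch on simulated survey responses. scikit-learn's
# FactorAnalysis with varimax rotation is an illustrative stand-in for
# whatever EFA procedure the authors used; all data here are synthetic.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_students, n_items = 200, 9

# Simulate 9 items loading on 3 latent factors (e.g., physical, social,
# and self-evaluative outcome expectations), 3 items per factor.
latent = rng.normal(size=(n_students, 3))
loadings = np.kron(np.eye(3), np.ones((1, 3)))  # block loading pattern, shape (3, 9)
responses = latent @ loadings + 0.5 * rng.normal(size=(n_students, n_items))

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(responses)

# Rows = factors, columns = items; large absolute values indicate which
# items load on which factor.
print(np.round(fa.components_, 2))
```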