Critical Thinking Project
  Project Background Description
By Gary Brown, Bill Condon, Diane Kelly-Riley, and Richard Law

Fostering critical thinking skills in undergraduates across a university's curriculum presents formidable difficulties. Making valid, reliable, and fine-grained assessments of students' progress in achieving these higher order intellectual skills involves another set of obstacles. Finally, providing faculty with the tools necessary to refocus their own teaching to encourage these abilities in students represents yet another challenge. These, however, are precisely the problems Washington State University is addressing through one concerted strategy. Washington State University has received a three-year, $380,000 grant from the U.S. Department of Education FIPSE Comprehensive Program to integrate assessment with instruction in order to increase coherence and promote higher order thinking in a four-year General Education curriculum at a large, Research-I, public university, and to work with our two- and four-year counterparts in the State of Washington. As a result of a Washington State HEC Board-funded pilot study, we have substantial evidence that we can significantly improve student learning, reform teaching, and measure the critical thinking gains of students at Washington State University. This project represents a collaboration among WSU's Campus Writing Programs, General Education Program, and Center for Teaching, Learning, and Technology, and it builds upon WSU's nationally recognized leadership in the assessment of writing and of learning with technology.

When WSU began a General Education reform in the late 1980s, we proposed to achieve these desired goals through General Education curriculum and writing-across-the-curriculum initiatives. While Washington State University has fully integrated writing into all aspects of its undergraduate curriculum, particularly General Education, recent self-studies indicate that the writing-to-learn and learning-to-write strategies have not translated into well-developed higher order thinking abilities, in spite of demonstrable progress in the quality of students' writing.

In 1996, the Center for Teaching, Learning and Technology (CTLT), the General Education Program, and the Writing Programs collaborated to develop a seven-dimension critical thinking rubric, derived from scholarly work and from local practice and expertise, to provide both a process for improving and a means for measuring students' higher order thinking skills during the course of their college careers. Our intent has been to develop a fine-grained diagnostic of student progress as well as a means for faculty to reflect upon and revise their own instructional goals, assessments, and teaching strategies. We use the rubric both as an instructional guide and as an evaluative tool, scoring on a 6-point scale that combines holistic scoring methodology with expert-rater methodology (Haswell & Wyche, 1996; Haswell, 1998). Early studies conducted by CTLT and the Writing Programs indicated an atmosphere ready for implementation of a critical thinking rubric within the WSU curriculum.

The instrument itself identifies seven key dimensions of critical thinking:
  • problem identification
  • the establishment of a clear perspective on the issue
  • recognition of alternative perspectives
  • context identification
  • evidence identification and evaluation
  • recognition of fundamental assumptions, implicit or stated, in the representation of an issue, and
  • assessment of implications and potential conclusions.
A fully developed process or skill set for thinking critically will demonstrate competence with, and integration of, all of these components of formal critical analysis. The instrument was developed from a selection of the literature, including Toulmin (1958), Paul (1990), Facione (1990), and others, as well as from the expertise and experience of educators at WSU. The instrument and methodology have sustained a cumulative inter-rater reliability of 80% in our formal studies.
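By way of illustration only, the following sketch (in Python) shows one plausible way to represent the seven dimensions, to average dimension ratings into a single score on the 6-point scale, and to compute the kind of adjacent-agreement statistic commonly reported as inter-rater reliability. The names and scoring conventions here are assumptions made for the sake of the example, not the project's actual software.

    # Illustrative sketch only: a plausible data model for a seven-dimension,
    # 6-point rubric. Names and conventions are assumptions, not WSU tooling.

    DIMENSIONS = [
        "problem_identification",
        "own_perspective",
        "alternative_perspectives",
        "context",
        "evidence",
        "assumptions",
        "implications",
    ]

    def paper_score(ratings):
        """Average the seven dimension ratings (each 1-6) into one paper score."""
        assert set(ratings) == set(DIMENSIONS)
        return sum(ratings.values()) / len(DIMENSIONS)

    def adjacent_agreement(rater_a, rater_b, tolerance=1.0):
        """Fraction of papers on which two raters fall within `tolerance`
        points of each other; one common way writing-assessment studies
        report cumulative inter-rater reliability."""
        hits = sum(abs(a - b) <= tolerance for a, b in zip(rater_a, rater_b))
        return hits / len(rater_a)

    # Example: two raters scoring the same five papers agree within one point
    # on four of the five, i.e., 80% adjacent agreement.
    print(adjacent_agreement([3.0, 2.5, 4.5, 3.5, 2.0],
                             [3.5, 2.5, 3.0, 3.5, 2.5]))  # -> 0.8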

The 1999 Progress Report on the WSU Writing Portfolio showed that 92% of student writers received passing ratings or higher on junior-level Writing Portfolios, indicating that an overwhelming majority of upper-division students demonstrated writing proficiency as defined by WSU faculty. However, a pilot critical thinking evaluation session conducted in the summer of 1999 on papers from three senior-level courses revealed surprisingly low critical thinking abilities (a mean of 2.3 on a 6-point scale). This phenomenon, in which writing is deemed acceptable in quality despite lacking obvious evidence of analytic skills, was also discerned in other General Education courses. In one workshop session in 1999, twenty-five instructors of the World Civilizations core courses evaluated a freshman paper in two ways: in terms of the grade they would give (they agreed on a B- to B+ range) and in terms of critical thinking (a score of 2 on a 6-point scale). Their informal conclusion was that, as an instructor group, they tended to be satisfied with accurate information retrieval and summary and did not actively elicit evidence of thinking skills in their assignments.

In December 1999, several WSU units working collaboratively on these issues sought funding from the Washington State Higher Education Coordinating Board (HECB). We received $65,000 from the Fund for Innovation in Quality Undergraduate Education to explore the usefulness of the critical thinking rubric developed at Washington State University both to foster students' higher order thinking skills and to reform faculty practice. With these funds, we explored the relationship between WSU's writing assessment instrument, which evaluates student writing at entry and at mid-career, and the critical thinking rubric and the skills we were trying to measure with it. Furthermore, we compared data collected from courses specifically designated to integrate the rubric into their evaluative and instructional methods with data from courses that did not.

These initial studies yielded interesting results. First, we discovered an inverse relationship between our current scoring of student work in our writing assessment program and our assessment of the same work in terms of the critical thinking rubric. Our assessment practice, in other words, tends to elicit and reward surface features of student performance at the expense of our reported highest priority: higher order thinking. Second, we found that integrating the WSU critical thinking instrument and methodology into teaching practices and assignments makes a significant difference in students' higher order thinking abilities over the course of the semester. In the HECB-funded pilot study, we ascertained that students' critical thinking scores:
  • Increase three and a half times as much in a course that overtly integrates the rubric into instructional expectations as in a course that does not.
  • Improve more in one semester in those courses than students not in those courses gain in the two years from entry to the junior year, as established by comparison of entry-level and junior-level performances in WSU's writing assessment data.
As we expanded our pool of faculty participants in the HECB study, we found that some instructors demonstrated a substantial need for support in revising their practices of instruction and evaluation. That is, their habitual teaching approaches did not elicit critical thinking from their students, and it was not easy for them to change to a mode that would. On the positive side, we found that faculty from all areas of the university, from the sciences as well as from the arts, humanities, and social sciences, found the rubric applicable to their definitions of critical thinking and usable in their disciplines. We had anticipated that definitions of critical thinking would be discipline-specific or politically charged. In order to avoid unproductive ideological conflicts, we introduced the rubric as a diagnostic guide for faculty to adapt freely to their own pedagogical methods. Faculty were invited to make revisions and alterations relevant to their specific contexts; evaluation of course papers, however, is conducted using the more general critical thinking rubric.

From these initial studies we concluded the following: as a faculty, we are not systematically eliciting the kinds of higher order thinking skills that we have defined as our desired program and course outcomes. We therefore need to make a shift in our academic culture, so that we focus consciously and collectively upon our agreed-upon goals and use effective means to move our students to the desired levels of achievement. In the WSU critical thinking rubric, we have an instrument capable of helping us achieve that shift in our teaching practices. The rubric has proven useful as a diagnostic tool for faculty in evaluating their own practices and in testing the outcomes of different approaches objectively.

In our comparison of the writing assessment exams and the critical thinking rubric, for instance, we evaluated 60 samples of writing, representing pairs of entry-level Writing Placement Exams and junior-level timed writing portions of the WSU Writing Portfolio, using the critical thinking rubric to gather general baseline data regarding the critical thinking abilities of students at WSU. This population represented students who wrote on topics that required them to analyze a subject, but students in this sample population had no prior exposure to the critical thinking rubric. We found that a surprising inverse correlation existed between the writing assessment rubric and the critical thinking rubric. The higher the Writing Placement Exam score, the lower the critical thinking score at a statistically significant level (r = -.339, p = .015).
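For concreteness, a correlation of this kind can be computed with the standard scipy.stats.pearsonr function, as in the hedged sketch below. The six score pairs are synthetic placeholders, not the study's data; the actual study rated 60 paired samples.

    # Hedged sketch of the correlation analysis described above. The score
    # lists are synthetic placeholders, not the study's 60 paired samples.
    from scipy.stats import pearsonr

    placement = [4, 5, 3, 6, 2, 5]   # Writing Placement Exam ratings (synthetic)
    critical = [3, 2, 4, 2, 4, 3]    # critical thinking ratings, same essays

    r, p = pearsonr(placement, critical)
    print(f"r = {r:.3f}, p = {p:.3f}")  # the study itself reports r = -.339, p = .015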

The same inverse correlation appeared in the ratings of the junior-level timed writings, though the result was not statistically significant (r = -.169, p = .235). Overall, students writing at the entry level received a mean critical thinking score of 2.59 (SD = .738). At the junior level, the mean critical thinking score increased to 3.05 (SD = .791). This indicates that students' critical thinking improves significantly between the freshman and junior years (p = .001), though not to a generally appreciable level. The .458 overall increase reflects significant gains on all dimensions of critical thinking identified in the rubric. Yet the junior-level mean of 3.05 is nonetheless barely half the maximum critical thinking score of 6. In addition, the inverse correlation points out the need for our assessments to extend beyond the mechanics of academic writing and to address more fully and aggressively the critical thinking competencies we desire.
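The entry-to-junior gain just described is the kind of comparison a paired t-test addresses, since each student contributes one entry-level and one junior-level score. The sketch below uses scipy.stats.ttest_rel on synthetic placeholder scores; the study's own figures are the means of 2.59 and 3.05 with p = .001.

    # Hedged sketch of a paired comparison of entry vs. junior critical
    # thinking scores for the same students. Data are synthetic placeholders.
    from scipy.stats import ttest_rel

    entry = [2.5, 3.0, 2.0, 2.5, 3.5, 2.0]    # entry-level CT scores (synthetic)
    junior = [3.0, 3.5, 2.5, 3.0, 3.5, 3.0]   # junior-level CT scores, same students

    t, p = ttest_rel(junior, entry)
    print(f"t = {t:.2f}, p = {p:.3f}")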

A further outcome of the HECB study demonstrated the success of the critical thinking rubric as faculty integrated it into undergraduate classroom expectations. To assess the gains within an individual course attributable to the integration of the critical thinking rubric, papers were rated from two different semesters of Entomology 401, Biological Thought and Invertebrates, representing a single course and instructor: one semester when the rubric was not used (n = 14) and the following semester when the rubric was used (n = 12). The overall mean score in the semester without the rubric, 1.867 (SD = .458), increased significantly to 3.48 (SD = .923, p = .001) in the semester when the rubric was used.

These gains were further supported in studies comparing courses that implemented the rubric with courses that did not. One hundred twenty-three student essays from several lower- and upper-division undergraduate courses were assessed for critical thinking. In the four courses where the rubric was used variously for instruction and evaluation (n = 87), the papers received significantly higher critical thinking ratings than in the four courses in which the rubric was not used (n = 36). The mean score for courses in which the rubric was not used was 2.44 (SD = .595), compared to 3.3 (SD = .599, p = .001) in courses that employed the rubric. A sketch of this style of comparison appears below.
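Both the Entomology 401 comparison and this multi-course comparison contrast two independent groups of papers, the situation an independent-samples t-test addresses. The sketch below uses Welch's variant (scipy.stats.ttest_ind with equal_var=False), which does not assume equal variances; the scores are synthetic placeholders, and the study's own figures are means of 2.44 (n = 36) versus 3.3 (n = 87) with p = .001.

    # Hedged sketch of comparing rubric vs. no-rubric courses as independent
    # samples. Scores are synthetic placeholders, not the rated essays.
    from scipy.stats import ttest_ind

    with_rubric = [3.5, 3.0, 3.5, 4.0, 2.5, 3.0]      # synthetic scores
    without_rubric = [2.5, 2.0, 3.0, 2.5, 2.0, 2.5]   # synthetic scores

    t, p = ttest_ind(with_rubric, without_rubric, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")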

Over the three years of the FIPSE CT project, we will enlist 120 faculty in the General Education core courses, representing a variety of disciplines, to adopt the new assessment instrument, revise their own pedagogies in terms of the program goals and outcomes, and develop innovative combinations of teaching and assessment based on the instrument. In addition, these faculty will give presentations to their campus colleagues regarding their instructional innovations, and they will be encouraged to write up their findings for an edited, book-length collection on successful teaching with these methodologies.

In addition to targeting the core General Education courses—a combination of lower- and upper-division classes that span the disciplines—we will also revise the WSU writing assessment instrument to elicit higher order thinking more overtly as one of its aims. This instrument will be used for all incoming freshmen in the Writing Placement Exam and for undergraduates across the disciplines for the junior-level Writing Portfolio. A cadre of faculty will be trained to think in terms of learning outcomes and equipped with a set of tools for making valid assessments for these exams and for evaluation of critical thinking gains in the General Education courses.

Dissemination efforts will focus on collaboration with two state organizations, the Washington Assessment Group and the Washington Center for the Improvement of Undergraduate Education, to promote student learning, reform teaching, and develop and implement means to measure gains in critical thinking among students at other institutions regionally and nationally.

References

Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction: Research findings and recommendations.

Haswell, R. H. (1998). Multiple inquiry in the validation of writing tests. Assessing Writing, 5(1), 89-108.

Haswell, R. H., & Wyche, S. (1996). A two-tiered rating procedure for placement essays. In T. W. Banta, J. P. Lund, K. E. Black, & F. W. Oblander (Eds.), Assessment in practice: Putting principles to work on college campuses (pp. 204-207). San Francisco: Jossey-Bass.

Paul, R. (1990). Critical thinking: How to prepare students for a rapidly changing world. Santa Rosa, CA: Foundation for Critical Thinking.

Toulmin, S. E. (1958). The uses of argument. New York: Cambridge University Press.