
What direction is assessment taking at WSU, and how has this changed during the past six years? The answer probably depends on the point of view of the person answering. My own opinion is that assessment activities have increasingly become part of the normal, everyday institutional activities at WSU. This may be by design, since assessment funds are distributed centrally: all colleges and academic units receive assessment funding and are therefore expected to engage in assessment activities. We are also beginning to have sufficient assessment data, from our alumni surveys, the freshman surveys, the writing program, and the end-of-program assessments, to be able to identify trends and problems needing attention. It has taken a while to get to this point, but we are now beginning to see the benefits of having assessment data on which to base decisions. There is value in good assessment data accumulated over several years: it can be used to establish trends and to examine how policy changes affect those trends. However, assessment data by itself does not produce change and improvement in instruction; other factors are equally if not more important. There are several instances where WSU is committing resources and establishing policy intended to improve undergraduate teaching without having specific assessment data to justify them (e.g., the Center for Teaching and Learning).


With the establishment of WSU's Center for Teaching and Learning (CTL), there will be an increased emphasis on Classroom Assessment Techniques (CATs), which faculty can use to improve their teaching. This is one of the stated goals of the CTL and will increase awareness of assessment among a greater number of WSU faculty and departments. However, it won't provide much assessment data about programs, since CATs are not designed for that purpose. CATs are most useful for providing immediate feedback to instructors about what is and isn't working in the classroom. This kind of assessment is likely to be more effective at improving instruction than the programmatic assessments we have been conducting.


One of the problems with the direction assessment is taking is that it is becoming an all-encompassing enterprise, so that almost any data-gathering activity comes to be labeled as assessment. The original focus of assessment was solely on undergraduate student outcomes, but now almost anything even remotely connected with student outcomes can be called assessment. In part this has occurred because of calls for increased accountability and the view that assessment data can be used for accountability purposes. My opinion, however, is that using assessment for accountability only discourages universities from experimenting with change, because it then becomes too risky to have negative results. In addition, accountability emphasizes inter-institutional comparisons, which are detrimental because they don't account for fundamental differences in student populations, institutional resources, and programs. I think we have benefited and been fortunate in this state that assessment has been a collaborative effort among all the institutions of higher education. The annual assessment conference, the WAGs newsletter, the Fall colloquies, and participation in the assessment taskforce have helped all of us to improve our assessment programs, and consequently to improve instruction as well. This collaborative aspect is probably one of the most valuable things to come out of assessment and is something worth continuing in the future.


Future assessment work will emphasize the management and analysis of varied student outcomes datasets. With each academic year we are accumulating increasing amounts of data, which by itself is of little value. Analyzing the data and making sense of the numbers takes substantial time and effort; however, the results are often worthwhile and lead to new insights into the learning process. The point is simply that in the years to come the focus of assessment will most likely shift from designing new assessment activities to analyzing and reporting on the accumulating data.


This is not to say that we won't be developing new assessments, because there are clearly some areas (e.g., quantitative skills) where more activities are still needed. Additionally, we are considering more surveys of students, and of faculty as well, to get more regular information on our progress at improving instruction.


The HECB policy of May 1989 identified five categories of assessment that institutions were to use in developing their assessment programs. We have always organized our assessment reports and files around these five categories. The five areas are listed below:


1. Collection of entry baseline information.


2. Intermediate assessment of quantitative and writing skills and other appropriate intermediate assessment as determined by the institution.


3. End-of-program assessment.


4. Post-graduate assessment of the satisfaction of alumni and employers.


5. Periodic program review.


During the last six years, these five areas have served well as an organizing principle for our assessment activities. It is not always clear where to place new assessment activities, such as the time-to-degree activity, but perhaps this kind of categorization is not all that important anyway. The important point of the five categories is that there be some assessment activity focused on each of these areas.
