-
I would like to raise three aspects of the current system that I believe need reforming: the clustering of membership in accreditation agencies based on geography rather than sector; the increasingly costly and burdensome requirements placed on institutions irrespective of their record of achievement; and the implications of the current trend toward requiring overly prescriptive quantitative and comparative measures of student learning outcomes.
"The Uses and Misuses of Accreditation," Princeton President Shirley M. Tilghman, November 9, 2012.
-
The emphasis on form over function extends to the reviews we do of each other's programs in accreditation work. Did the program have outcomes? Were they assessed? Were the results used for something? Everything is checked except whether or not the data are any good and the inferences are reasonably justified.
Most institutions probably have a small number of assessment projects, perhaps in general education, that do get the attention they need to be successful as educational research. But the majority can only pass accreditation reviews through attention blindness induced by a box-checking mentality of correctness.
"A Guide for the Perplexed," David Eubanks, AALHE Intersection, Fall 2017.
-
The last stages of the assessment process combine results and use of results into the sacred closing of the loop. Faculty tend to resist demands for evidence of cut-and-dried "improvements" in student learning. They are not wrong. The language of the typical industrial quality control cycle (analyze data, identify problem, propose solution, re-evaluate data to see if it worked, close the loop) applies very unevenly to academic work.

I am not deriding evidence-based, outcome-oriented decisions. A law school or cosmetology program whose students never got their licenses would need to take a long, hard look at what it was doing. Likewise, controlled studies comparing competing pedagogies inform teaching in academic disciplines, including the humanities. In English composition, for instance, the research strongly suggests that what is called "teaching grammar" (i.e., getting students to memorize usage rules or complete exercises) does little to improve student writing. So, we don't do it. By contrast, much research suggests that getting students to combine sentences does work, especially in the context of an authentic inquiry-based writing task, so we do a lot of that. If we want to see whether an initiative is successful, we set a goal, then look at the data. And yes, data sometimes reveal the unexpected.
But a lot of student learning doesn't fit the "close the loop" model of improvement. The biggest difficulty with teaching is not gathering or analyzing the "data." It is making sure that the data, that is, the actual moments of student performance that do or do not reveal something, are meaningful. And here, one can ask a thousand questions that make the loop model look like a convenient fiction.
For instance, what peer-reviewed evidence justifies current demands for prescriptive statements about what a student is supposed to learn? [...]
"Beyond the Theatre of Compliance?" Madeline Murphy, AALHE Intersection, Fall 2017.