I took a few minutes out of the day to write down my philosophy about the work I do. I hope you find it useful.
For me, faculty development is about three goals. The first of these is the improvement of student learning. There are a variety of contexts for learning (lecture, discussion, lab work, performances, clinical work, etc.), a host of disciplines, and as many ways to teach as there are people. There are a variety of learning theories, including behaviorist, cognitive, social cognitive, constructivist, and connectionist. None of those theories is perfect, and teaching is still more art than science, but we do know a lot more about how people learn than we did a hundred years ago. We can identify goals and learning outcomes for courses, departments, and institutions. We can use modeling, active learning, frequent authentic assessment, scaffolding, and fading to structure courses to support what we understand about student learning. Faculty members find these same techniques helpful as they work to improve their own teaching. Assessment of faculty performance, though, is the domain of the chairs and deans, not the faculty developer. The faculty developer provides access to current knowledge and methodologies of teaching to the entire faculty, and helps those with whom he is working to make incremental, progressive changes to their teaching that reflect who they are and what they are teaching. The changes they make should be based on their own beliefs about teaching.
The second goal of faculty development is to enhance faculty satisfaction. Most faculty members will spend at least half of their professional lives in the classroom, so improving their teaching experiences will improve their satisfaction and hence their retention at the institution. Moreover, faculty members are clever, independent people and if they do not want to do something, they will find a way not to do it. Any change in instruction must be one of which the faculty members approve. There are a lot of factors that go into getting that approval. The innovation must first make sense to the instructor, and it should preferably not add greatly to her workload. Most faculty care about their students and their teaching, and if those two aspects are addressed, they will at least consider adopting the innovation. That decision is eased if the faculty developer has a friendly relationship with them, if their faculty friends are also making the innovation, or if the administration is rewarding the use of the innovation.
The third goal of faculty development is to enhance the college’s reputation. The best way to do that is to celebrate the victories of the faculty and help them celebrate the victories of their students. Workshops by faculty who are championing a particular innovation are a way to celebrate those victories and simultaneously help convince others to do the same. Action research on one’s classes (i.e., SoTL, the scholarship of teaching and learning) should be encouraged if it is simple, effective, and not too time-consuming. Not everyone will have the time or inclination to do this sort of work, but those who want to should be supported, with as many barriers to that work lowered as possible. All of this should be done creatively (to maintain interest) and relatively inexpensively (to ensure sustainability of the program).
Ideally, all three of these goals are part of any faculty development work. Together, they ensure that teaching innovations are effective, that the faculty supports and champions the work, and that the institution can celebrate meaningful improvement.
In preparation for my upcoming workshop on small group discussion protocols at Simmons College, I’m posting two of the workshop handouts here. The first is Pre-Reading that I would like participants to do before the workshop; it will save us some time. The second is Twenty Discussion Protocols, in case people want to access it online.
Nathan Grawe (Carleton College) visited Endicott College today and presented on Quantitative Reasoning. It was the same presentation that can be seen through the Quantitative Reasoning channel on Vimeo. He gave out two handouts. The first was version 6 of the QuIRK rubric and the other was The 10 Foundational Quantitative Reasoning Questions. Functionally, what Dr. Grawe was proposing was Quantitative Reasoning across the Curriculum: disciplines that did not use quantification as a central strategy would teach peripheral quantification in their classes, while classes (like Dr. Grawe’s) that were centered on quantitative reasoning (QR, or quantitative literacy, or numeracy, or statistical literacy, etc.) would add authentic figures to their charts and authentic contexts to their problems. Those teaching quantification as a peripheral strategy would have to ramp up their understanding of quantification and be sure that any QR was embedded in their work and not extraneous. Those teaching QR as a central strategy would need to make their calculations and data more authentic (i.e., messier), which would make grading more time-consuming but hopefully much more rewarding. The presentation was entertaining and the proposal made sense. The rubric felt like it needed work, though, and the sample QR assignments available through SERC did not seem as extensive as one might hope. There are Six Examples of QR across the Curriculum from the Numeracy Infusion Course for Higher Education (NICHE) program and a number of K-16 teaching materials at another tab at the same site, but I would have preferred for these to have been indexed at MERLOT, since that would allow one-stop shopping for learning objects (and RLOs).
This news release is one I put together back in 2003. I couldn’t find a copy of it online, so I’m posting this in case anyone is interested. I found this panel enormously interesting because all of these classes were very small and each of the faculty members had a different approach to their instruction.
* * * * * * * * * * * * * * * *
A panel of IU faculty members gathered on November 8, 2003, to share individual approaches to some common challenges of teaching small classes. Although we often hear about innovative ways of teaching large classes, small specialized classes also require flexibility and creativity on the part of the teacher. The four panelists came from different departments and taught four different languages — Catalan, Swahili, Uzbek, and Russian. After the exchange, the panelists agreed that they learned as much as the audience, and all involved look forward to the next opportunity to share teaching strategies.
Professor Josep Sobrer, who teaches Catalan in the Department of Spanish and Portuguese, finds innovative course resources during his visits to Catalonia. Catalan is a language centered in Barcelona and Catalonia, with only eight million people in the world who speak the language. Though instructional materials can be scarce, Professor Sobrer supplements his classes with a Catalan language instruction program on CD-ROM and with Catalan versions of popular American movies. These resources allow his students to immerse themselves in the everyday applications of the language. Professor Sobrer teaches Catalan whenever a cohort of five to ten students is interested in the language.
Associate Professor Alwiya Omar, who teaches Swahili and other languages in the Linguistics Department & the African Studies Program, supervises peer instructors in more specialized languages such as Luo, Tigrinya, and Moroccan Arabic. When a student wants to learn a language like these, usually for a research trip to Africa, she matches the student with a native speaker on campus and gives brief pedagogical instruction to the native speaker. The peer instructor then customizes the lessons to the contextual and research needs of the student. This intensive program is made possible by Title VI funding, primarily Foreign Language Area Studies (FLAS) fellowships received by the students. Associate Professor Omar also teaches Swahili, with enrollments of up to nineteen students, and supervises regular classes in Twi, Hausa, and Bambara.
Dr. Malik Hodjaev, a visiting lecturer teaching Uzbek for the Department of Central Eurasian Studies, must develop his own course materials for this little-taught language. Before coming to the U.S., Dr. Hodjaev visited colleges and universities in Uzbekistan, trying unsuccessfully to locate resources for teaching Uzbek to Americans. His search was made more difficult because Uzbekistan has changed alphabets five times in the last two hundred years. Uzbek was written in an Arabic script from the 14th century to 1927, when it was replaced by a Latin alphabet. The Latin alphabet was in turn replaced by Cyrillic in 1940. In 1992, the Latin alphabet began to make a comeback, and in 1993 an alphabet based on typographic symbols emerged. In 2005, the Latin alphabet will again be the official alphabet of Uzbekistan, but in the meantime, many teaching resources use only Cyrillic. As a result, Dr. Hodjaev is concurrently developing course materials in both Latin and Cyrillic alphabets for the three levels of Uzbek he teaches.
Dr. Jeffrey Holdeman is the Slavic Language Coordinator in the Department of Slavic Languages and Literatures and is currently teaching Russian. He is working to build student enrollments and retention rates in Slavic language classes by developing a multi-faceted strategy combining program development, analysis of trends in student demographics, extra-curricular activities, instructor-training, advertising, and outreach. Like some of his colleagues, he finds resource materials scarce for some languages, such as Polish and Serbian-Croatian. He is working with the AIs in these languages to compile bibliographies and materials and to present these to colleagues at national conventions. Dr. Holdeman also uses electronic discussion groups, such as SEELangs (http://home.attbi.com/~lists/seelangs), to ask questions, share resources with other instructors, and combat the isolation felt by instructors of less commonly taught languages.
These four faculty members agree that the importance of teacher-student relationships is magnified in small classes. Instructors often need to respond to the strengths and weaknesses of particular students in designing syllabi and pacing the content. For example, student needs largely shape the content of the course in the one-on-one African languages courses. Instructors tailor these syllabi to concentrate on the specialized research contexts in which the languages will be used (reading or speaking, specialized vocabulary, etc.).
Relationships among students and differences in their preparation are also magnified in small classes. For example, a single language class may enroll novice undergraduate learners, heritage learners (students raised in households where the studied language is spoken), and graduate students (who may have advanced learning skills but little content knowledge). Both heritage learners and graduate students will often demand greater content than the novice learners can handle. Dr. Holdeman provides these more advanced learners with further material on the cultural context of the language, allowing them to deepen their understanding of the language without disrupting the level of conversation within class.
There’s an excellent article on classroom observation protocols in the Chronicle of Higher Education (Feb. 14, 2014). The method used by Brett Gilley (University of British Columbia) is the Classroom Observation Protocol for Undergraduate STEM (COPUS), which is essentially a time audit for activities in the classroom. COPUS is described in Smith et al. (2013), The Classroom Observation Protocol for Undergraduate STEM (COPUS). Both that protocol and the Teaching Dimensions Observation Protocol (TDOP) are classroom observation systems designed for science classrooms. TDOP is described in Hora, Oleson & Ferrare (2013), The TDOP User’s Guide, and is most recently mentioned in Hora & Ferrare (2014), Remeasuring Postsecondary Teaching.
This is not a new thing. Most systems like this are based on Ned Flanders (1960), Interaction Analysis in the Classroom: A Manual for Observers. The Clinic to Improve University Teaching at the University of Massachusetts-Amherst used classroom observation in its model for faculty development in higher education in the early 1970s, and the Clinic supplied the model for most faculty development centers in the U.S. for the next twenty-five years or more.
The method that I use is most closely related to anthropological non-participant descriptive observation. I learned it at Indiana University’s Teaching Resource Center, which was largely based on the Clinic model, but I’m not sure exactly where it came from. Jan Nespor (1997) talks about this method in Tangled Up in School: Politics, Space, Bodies, and Signs in the Educational Process, so it is not uncommon. Take a look at Whitehead (2005) Basic Classical Ethnographic Research Methods to learn more.
Even as a non-participant, my presence can change the classroom dynamics, so I have the instructor explain that I’m there at her request because she is interested in improving her teaching. I also dress down for the observation to blend in as well as possible. Once the students know I’m not checking up on the professor, they tend to relax and act more normally. It is descriptive observation because the aim is to write down as much as I can, so I have plenty of evidence to discuss with the instructor later. I try to write down everything the students do, so I can judge whether the instructor is going too fast or if he is talking while students are busy writing. I keep track of how many PowerPoint slides are being used, how they are designed, and how easy they are to read from the back of the room. After about twenty minutes, I change my method of observation and start concentrating on the interaction between teacher and students, and among the students themselves. I draw a map of the room showing the various interactions and circling the students who talk the most. I note where the teacher moves and with whom she is making eye-contact. I note the types of questions asked, and how they are followed up. I typically end up with 10-12 pages of notes for an hour of class, and I make sure to tell the instructor this in advance so he doesn’t worry that everything I’m writing down is something needing improvement.
What is interesting to me about Gilley’s work is that he reports a “High” level of student engagement during a lecture and demonstration session. I might report that for a demonstration session, especially if it models the work the students will soon be doing, but the rest of the time they are just listening. None of the research I’ve seen treats that as high engagement. Lectures are best at addressing lower cognitive goals (memorization, understanding). They are not particularly effective at improving student thinking, which is what I equate with student engagement. Behaviorists like Gagné think of entertainment as a form of engagement, but Eric Mazur and others have pointed out that while students may be stimulated and entertained by a rousing lecture, they really don’t learn very much.
Today at a faculty retreat, we were encouraged to write haiku of wisdom for our graduate students. Having done the haiku effort before, I resisted and instead wrote a sijo, a Korean form of poetry that predated haiku. Here’s my sijo effort:
The trees of Keumgangsan bend in the wind, grow to the sun.
The scholar droops in the night, leans to the bed.
Bed warmth, sun warmth. Both must wait for dissertation’s end.
NPR just posted an article by Tania Lombrozo about Solving the Conundrum of Multiple Choice Tests. Lombrozo summarizes the research by Arnold L. Glass & Neha Simha, entitled Multiple-Choice Questioning is an Efficient Methodology that may be widely Implemented in Academic Courses to Improve Exam Performance. Actually, this research is nothing new. Repetition of practice can lead to learning — we knew that already. MCQs can contribute to this repetition and lead to improved long-term retention. Right. But retention of what? That is the key question. Behaviorists taught us all about improving memory, about stimulating the improvement of memory, and even how long-term memory eventually fades after the removal of the stimulus. Nothing new. But Lombrozo states that, “Students who received multiple choice questions throughout a course performed better than those who didn’t when it came to final exams.” That’s true, but it can easily be misconstrued. MCQs aren’t an irreplaceable part of an education. They are one tactic that can be used, and unfortunately that tactic is not very good at teaching the upper cognitive levels identified in Bloom’s Taxonomy. They work well for remembering, understanding, and even applying heuristics (which means they are helpful for mathematics), but they are difficult to use to check the ability to analyze, evaluate, and create texts (making them not very helpful for the humanities or other textual forms of knowledge). If students practice with MCQ quizzes, they will obviously be more ready to answer MCQs on the tests, particularly if the questions are parallel or perhaps even the same on both quizzes and tests. Glass & Simha demonstrate that MCQ quizzes can also help with short answer responses on the tests, which is useful knowledge. But we already knew that repeated MCQ quizzes could help with retention of content knowledge. Our society is pretty test-happy already. Great.
But what we also need is the ability to analyze and evaluate that information and to solve problems based upon it, or develop new lines of inquiry. MCQs don’t do that very well. We also know that people get better with practice. So why not give them practice at those higher cognitive functions with weekly written quizzes or journal entries? The answer is time. Most teachers know what is effective and what is not, but they generally choose the easier methods even if they are not the most effective, because their lives are already too busy and they have to have some time for their own families.
The earliest article cited by Glass & Simha is A.P. Yonelinas (2002), The nature of recollection and familiarity: A review of 30 years of research, Journal of Memory and Language, 46, pp. 441–517. Unfortunately, our library does not have that journal in its subscribed databases, nor does JSTOR, so I’ll have to interlibrary loan it. But I suspect I will find that it lists several articles that reach similar conclusions to Glass & Simha. I’ll let you know.
In the meantime, we can look at Douglas Courtney’s similar research in the 1940s at Boston University. The article is Courtney et al (Feb. 1946), Multiple Choice Recall versus Oral and Written Recall, The Journal of Educational Research, 39(6), pp. 458-461. Courtney noted that,
“It is often observed that some children who make high scores on reading tests fail to do commensurate work in the classroom. This disparity in performance may be due in part to the fact that the reading test demands a type of recall different from that demanded by classroom discussion or written tests.”
Indeed, Courtney’s research showed that “The scores on the multiple-choice test were markedly higher than on the written recall test.” Moreover,
“It is evident from this study that multiple-choice recall does not insure comparable written recall. A study of individual scores shows many pupils with high recall on multiple choice items but very low recall on written reproduction. The wider standard deviation of the written recall test is also of significance. These findings indicate the desirability of supplementing the usual reading test with measures of written recall in order to get a more accurate picture of the pupil’s ability to deal with materials he has read.”
All of this was before Bloom and the other Univ. of Chicago faculty identified different types of cognitive acts (Bloom’s Taxonomy). It is clear, however, that the declarative knowledge tested by MCQs did not transfer to Bloom’s higher cognitive levels.
Now, I know most of you will ignore me, because you think MCQs will save you time, but actually it takes quite a while to design effective MCQs. Please at least read Thomas M. Haladyna et al. (2002), A Review of Multiple-Choice Item-Writing Guidelines for Classroom Assessment, Applied Measurement in Education, 15(3), pp. 309-334.