Peer Review of Teaching as Part of Faculty Evaluation

Peer Review of Teaching

Cite this guide: Bandy, J. (2015). Peer Review of Teaching. Vanderbilt University Center for Teaching. Retrieved [todaysdate] from https://cft.vanderbilt.edu/guides-sub-pages/peer-review-of-teaching/.

Introduction


  • What is Peer Review
  • Why Peer Review
  • How to Select Peer Reviewers
  • How to Evaluate
  • Possible Limitations of Peer Review
  • Bibliography
In higher education, peer review stands as the primary means for ensuring that scholarship is of the highest quality, and from it flow consequential assessments that shape careers, disciplines, and entire institutions.  While peer review is well established as a means of evaluating research across the disciplines, it is less common in the assessment of teaching.  Yet it is no less useful, since it can improve what Ernest Boyer has called the "scholarship of teaching and learning" by enhancing instructional and faculty development, by bolstering the integrity of personnel decisions, and by enabling more intentional and mutually supportive communities of scholar teachers.  This guide is intended as an introduction to the basics of peer review, including its purposes, challenges, and common practices.  The primary audience for this guide consists of departments, programs, or schools considering implementing peer review, although individual faculty, staff, and students are likely to find what follows interesting as well.

What Is Peer Review?

Peer review is often identified with peer observations, but it is more broadly a method of assessing a portfolio of information about the teaching of an instructor under review.  This portfolio typically includes curricula vitae, student evaluations, self-evaluative statements, peer observations, and other evidence such as syllabi, assignments, student work, and letters solicited from former students.  This said, peer observations will figure prominently in what follows.

It is also worth noting a common distinction between two very different forms of peer review: formative and summative.  Formative evaluation typically is oriented solely toward the improvement of teaching and is part of instructional mentorship and development.  Summative evaluation, in contrast, is done to inform personnel decisions.  To improve the freedom and exploration of individual faculty, formative reviews may be shielded from scrutiny for a period of years until such time that there needs to be accountability to standards of excellence for personnel decisions.  At that point, summative evaluations are more common since they are tied to decisions related to reappointment, promotion, or tenure (Bernstein et al. 2000).  Because the more consequential nature of summative evaluations tends to diminish the formative value of the peer review process, it is important to maintain a clear distinction between these types of evaluation and be transparent with those under review.  It is also common to have different faculty involved in each form of assessment – mentor faculty in the formative evaluation and departmental or program administrators, such as chairs, involved in summative evaluations.

Why Peer Review?

Peer review serves many functions in the process of evaluating faculty, courses, or entire programs.

What's good for research is good for teaching. As in peer reviews of research, it is a vital means of receiving expert assessments of one important part of scholarly practice: teaching.  As with research, peer review ensures that faculty internalize, in the words of Pat Hutchings, scholarly "habits of mind" by identifying goals, posing questions for inquiry, exploring alternatives, taking appropriate risks, and assessing the outcomes with learned colleagues.  When this process of scholarly engagement and deliberate improvement is part of the institutional expectations for teaching, as it is with research, it can function to support a community of scholarship around teaching (Hutchings 1996).

Enables teaching to be a community effort.  Relatedly, too often in higher education teaching is subject to what Pat Hutchings has called "pedagogical solitude," but peer review provides opportunities for us to open our pedagogy up to a community of colleagues who can nurture improvement (Hutchings 1996).

Peer review allows for less exclusive reliance on student evaluations. Student evaluations have become institutionalized in higher education and for the most part provide extremely useful information for the purposes of evaluating faculty, courses, and even entire curricula.  However, students may not always be the best evaluators since they often have limited disciplinary preparation, they can have biases against certain faculty unrelated to teaching effectiveness, and they can be less cognizant of institutional goals or values than faculty. Indeed it is for these reasons that the American Sociological Association, along with other professional societies, has cautioned universities not to overly rely on student evaluations (see here).

Greater faculty experimentation and rigor. Just as importantly, an over-reliance on student evaluations in processes of professional review can cause faculty to become overly concerned about receiving positive student evaluations.  In the worst of moments, this can lead faculty to adopt a consumer model of education, shaping our pedagogy to meet the needs of students over the needs of our disciplines or institutions (Hutchings 1996).  This, in turn, results in faculty becoming overly cautious by refusing to challenge student expectations, by using conventional teaching methods, by becoming less rigorous in their standards, and at worst, by feeling a need to entertain more than educate.  Peer review, when done in formative and summative forms alongside student evaluations, can ensure both faculty and students will have a voice in their evaluation, and that faculty have greater autonomy to innovate and to teach rigorously.  This can give faculty the opportunity to focus more intentionally on what helps students learn best, and therefore more directly focus on the quality of their teaching.

Allows for both formative and summative evaluation. When done well, peer review involves both formative and summative evaluations.  The inclusion of greater formative evaluation allows for more significant faculty and instructional development by encouraging more critical reflection on teaching and by providing a safer, less risky, and more collegial setting for assessment.

Improves faculty approaches to teaching. Daniel Bernstein, Jessica Jonson, and Karen Smith (2000), in their examination of peer review processes, found they positively impact faculty attitudes and approaches toward teaching.  While their study did not reveal a necessary shift in faculty attitudes towards student learning and grading, it did change several important aspects of teaching practice.  First, it dramatically impacted in-class practices, particularly the incorporation of more active and collaborative learning, and less reliance on lecturing.  Second, it improved faculty willingness to ask students to demonstrate higher order intellectual and critical thinking skills.  Third, for some faculty it increased the quality of feedback they gave to their students on assignments, and thus improved student understanding and performance.  And lastly, they enjoyed discussing substantive disciplinary and teaching issues with their colleagues, enhancing the scholarly community in their departments and programs.  Peer review therefore shows an ability to improve faculty joy in teaching by improving the relations among faculty and students, and among faculty themselves.

How to Select Peer Reviewers

Peer review may take many forms, but usually begins with the selection of peer reviewers, drawn most often from within the same department or program as the instructor being reviewed.  The reviewers typically are senior faculty, but sometimes junior faculty as well, who have significant expertise in teaching.  These faculty may be chosen to undertake all peer teaching reviews for the department or program during a specific period, or they may be selected specifically because they share some expertise with the instructor being reviewed.  The person under review also may be granted some choice as to whom one or more of the reviewers may be.  The number of reviewers may vary but usually includes at least two and rarely more than four.

In selecting reviewers, one must be mindful of several criteria.

Institutional Experience. It helps if reviewers are highly familiar with the department or program, school, and institutional goals, and particularly the processes of peer review itself and the criteria that form the basis of the assessment.

Integrity. Peer reviews also function best when reviewers have commitments to integrity, fair-mindedness, privacy, and understanding the reasoning behind the teaching choices of the person under review.

Trust. Peer reviewers, particularly in formative reviews, work collaboratively with the faculty under review to establish a clear process of evaluation and reporting; therefore peer reviewers who can establish trust are particularly effective.

Mentorship. Those under review are particularly vulnerable and often anxious; therefore reviewers who have grace and tact in the process of assessment, can offer feedback with integrity and support, and who can help advise on strategies for faculty development will be most helpful.

Thorough and Practical. Peer reviewers should be able to provide summary reports that clearly and thoroughly represent all phases of the process, and that make recommendations that are specific and practical (Center for Teaching Effectiveness, University of Texas, Austin).

How to Evaluate?

The peer evaluation itself usually focuses on several aspects of teaching through a process that typically has a series of activities.  The following list of peer evaluation activities represents a sequential, reasonably thorough, and maximal model for peer review, but not all are necessary.

Develop Departmental Standards for Teaching. Without a clear set of learning goals for all departmental programs it is difficult to assess teaching with any validity or reliability, and it can leave departments open to biases, inconsistencies, and miscommunications in peer evaluation processes.  One of the greatest benefits of peer review of teaching is that it provides an occasion for departments and programs, if not entire schools and universities, to be more intentional, specific, and clear about quality teaching and learning, and the various means to achieve it.  This may be the work of an entire department or a special teaching committee that researches disciplinary and institutional benchmarks and proposes guidelines for review.

Preliminary Interview. Peer review processes usually begin with a conversation, sometimes framed as an interview, between the peer reviewers and the instructor being reviewed.  The primary purpose of this is to provide the teacher in question an understanding of the process of peer review, and to offer them the opportunity to provide their input on the process.  The conversation also allows the peer reviewers to begin collecting information about the teaching context, especially the courses, of the teacher being reviewed.  This context helps to provide better understandings of the teacher's goals and teaching choices, and may be divided into several dimensions related to the design of their courses (Fink 2005).

Logistical contexts. How many students?  Is the course(s) lower division, upper division, a graduate course, etcetera?  How frequent and long are the class meetings?  Is it a distance-learning class?  What are the physical elements of the learning environment?

Goals. How have the learning goals of the course(s) been shaped by the department, college, university, or discipline?  Are the courses required or electives?  What kinds of intellectual and skill outcomes are the focus of the course(s)?

Characteristics of the learners. What are their ages and other demographic factors that may affect teaching?  What is their prior experience in the subject?  What are their interests and goals?  What are their life situations?

Characteristics of the teacher. What expertise does he or she have in the subject areas?  What are his or her own assessments of his/her strengths and weaknesses?  What models of teaching did he or she encounter as a student?  What theoretical or practical orientations ground his or her approach to teaching and learning?  What from the teaching and learning scholarship has been influential on his/her teaching?  How do these influences take shape in the teaching of the instructor's different courses?

Class Observations. The goal of the class observations is to collect a sample of information about the in-class practices of teaching and learning.  They typically include two to four class visits to gain reliable information.  If the teacher being reviewed teaches multiple courses, as they often do, the process may involve fewer observations per course (e.g., two).

What to observe? The goal is to create a thorough inventory of instructor and student practices that define the teaching and learning environment.  These may vary widely across disciplines and teachers, and can be drawn from a broad array of pedagogies, depending on learning goals.  This said, there are several categories of instructor and student practices to note during the observation(s).

Content knowledge
Use of instructional materials
Class organization
Presentation form and substance
Teacher-Student interactions
Student participation
Assessment practices

How to assess teaching practices? In many institutions, inventories of teaching practices are combined with assumptions about what is conducive to student learning.  It is important for the peer reviewers and the administrators who guide them to be conscious of what they regard as effective teaching and the appropriate evidence for it before committing to an observation process, lest the peer review gather invalid or unreliable data, and lest the process invite peer biases and unexamined pedagogy into the evaluation.  A reasonably representative list of teaching practices, along with more or less explicit value for learning, would include the following:

Content knowledge

– Selection of course content worth knowing and appropriate to the course
– Provided appropriate context and background
– Mastery of course content
– Citation of relevant scholarship
– Presented divergent viewpoints

Clear and effective class organization

– Clear statement of learning goals
– Relationship of lesson to course goals, and past and future lessons
– Logical sequence
– Appropriate pace for student understanding
– Summary

Varied methods for engagement, which may include…

– In-class writing
– Analysis of quotes, video, artifacts
– Group discussions
– Student-led discussions
– Debates
– Case studies
– Concept maps
– Book clubs
– Role plays
– Poster sessions
– Think-aloud problem solving
– Jigsaws
– Field trips
– Learning logs, journals
– Critical incident questionnaire (see Brookfield)

Presentation

– Projected voice
– Varied intonation
– Clarity of explanation
– Eye contact
– Listened effectively
– Defined hard terms, concepts, principles
– Apply of examples
– Varied explanations for difficult material
– Used humor appropriately

Instructor-Student Interactions

– Effective questioning
– Warm and welcoming rapport
– Use of student names
– Encouraging of questions
– Encouraging of discussion
– Engaged student attention
– Answered students effectively
– Responsive to student communications
– Pacing appropriate for student level, activity
– Restating questions, comments
– Suggestion of further questions, resources
– Concern for individual student needs
– Emotional awareness of student interests, needs

Appropriateness of instructional materials

– Content that matches class goals
– Content that is rigorous, challenging
– Content that is appropriate to student experience, knowledge
– Adequate preparation required
– Handouts and other materials are thorough and facilitate learning
– Audio/visual materials effective
– Written assignments

Student engagement

– Student interest
– Enthusiasm
– Participation
– Student-to-student interaction

Support of departmental/program/school instructional efforts

– Appropriate content
– Appropriate pedagogy
– Appropriate practice

In-class, formative assessment practices

– Background knowledge probes, muddiest point exercises, defining features matrix, and other "classroom assessment techniques" described in greater detail here
– Ungraded in-class writing exercises, such as minute papers
– Discussions
– Questioning

Out-of-class, summative assessment practices

– Class participation
– In-class writing exercises, graded
– Presentations
– Examinations
– Projects

Use of observation forms. To make the process more transparent, reliable, and valid, many departments and programs use observation forms, constructed from items like those listed above, to help peer evaluators track and evaluate teaching and learning practices.  These may include nothing more than checklists of activities; they may provide rating scales (e.g., Likert scales) to assist the evaluation; they may have open-ended prompts that provide space for general commentary and analysis; or they may involve some combination of all three.  The most thorough forms guide the observer in what exactly they should notice, and prompt them to provide some synthesis and evaluation of their observations.  Several example forms may be found with a broad online search, but here is a useful example from Wayne State University.
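For departments that manage observation data electronically, the three elements of such a form (a checklist, Likert-scale ratings, and open-ended comments) can be modeled very simply.  The sketch below is purely illustrative and assumes a 1-5 Likert scale; the item names and function names are hypothetical, not part of any standard form:

```python
# Illustrative sketch of a peer observation form combining a checklist,
# Likert-scale ratings (1-5), and open-ended comments.
# All item names below are hypothetical examples.

LIKERT = {1: "poor", 2: "fair", 3: "good", 4: "very good", 5: "excellent"}

def new_form(items):
    """Create a blank observation form for the given checklist items."""
    return {item: {"observed": False, "rating": None, "comments": ""}
            for item in items}

def record(form, item, observed, rating=None, comments=""):
    """Record one observation; ratings, if given, must be on the 1-5 scale."""
    if rating is not None and rating not in LIKERT:
        raise ValueError("rating must be 1-5")
    form[item] = {"observed": observed, "rating": rating, "comments": comments}

def summarize(form):
    """Average the Likert ratings of observed items, for the summary report."""
    ratings = [v["rating"] for v in form.values()
               if v["observed"] and v["rating"] is not None]
    return round(sum(ratings) / len(ratings), 2) if ratings else None

form = new_form(["Clear statement of learning goals", "Use of student names"])
record(form, "Clear statement of learning goals", True, 4, "Goals shown on slide 1")
record(form, "Use of student names", True, 5)
print(summarize(form))  # prints 4.5
```

A real form would of course carry many more items and free-text prompts; the point is only that checklist, scale, and commentary can live side by side in one record per item.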

Evidence of Student Learning.

End-of-course student work. To more thoroughly assess the effectiveness of instruction, peer reviewers may collect evidence of student learning in the form of examinations, written assignments, and other projects from the course of the teacher under review.  Collecting this evidence may be helpful in assessing core competencies expected from the course.

Student work throughout the course. Evidence of student learning may be more thoroughly assessed by collecting examples of student work at various times during a course so as to gain perspective on student growth and development.  To do this requires some planning and lead time to ensure the teacher under review is certain to collect work from students, and gain their consent for sharing it.

Grades. Student grades also may be used as an indicator of student performance, if they are accompanied by contextual information such as a grade distribution, the criteria used to assign those grades, and samples of student work at A, B, C, D, and failing levels.
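Tallying the contextual grade distribution mentioned above is straightforward.  This small sketch uses hypothetical grades (not data from the guide) and reports each letter grade's share of the class:

```python
from collections import Counter

def grade_distribution(grades):
    """Count letter grades and return each grade's share of the class,
    in the fixed order A, B, C, D, F."""
    counts = Counter(grades)
    total = len(grades)
    return {g: counts.get(g, 0) / total for g in "ABCDF"}

# Hypothetical section of 20 students (illustrative only).
grades = ["A"] * 6 + ["B"] * 8 + ["C"] * 4 + ["D"] * 1 + ["F"] * 1
dist = grade_distribution(grades)
print(dist)  # {'A': 0.3, 'B': 0.4, 'C': 0.2, 'D': 0.05, 'F': 0.05}
```

On its own the distribution says little; as the guide notes, it becomes meaningful only alongside the grading criteria and samples of student work at each level.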

Student Evaluations. In addition to reviewing standard end-of-course evaluations, peer reviewers may choose to solicit letters of evaluation from a sample of students, current or alumni, who have had at least one class with the instructor in question, preferably two or more.  Requesting these from graduates who have a more mature perspective on the effectiveness and impact of the teacher under review can be particularly useful.  The request for evaluation letters can be more or less specific in its prompts, but at a minimum typically introduces the importance of the evaluation process for the individual and the institution, and asks them to assess how effective the instructor was as a teacher, what limitations he or she may have, and what impacts he or she made on their educations.

Engagement with Centers for Teaching. If the person under review has attended consultations, workshops, or other programs offered by a campus center for teaching and learning, the evaluation process may consider this as part of the analysis.

Advising Activity. Peer evaluators may wish to make note of the advising activities and load of the teacher in question, along with any special service to the teaching mission of the department, school, or institution.  This may involve some data collection from students the teacher has advised and peers with whom the teacher has collaborated in their teaching service.  For some faculty, this kind of teaching outside typical course structures can be a substantial contribution to the teaching mission of the department.

Professional Publications, Presentations, and Recognitions. Peer reviewers also may wish to collect evidence of the scholarly activities in teaching and learning by the teacher in question, such as professional publications, presentations, or awards for their teaching.

Collaborative Analysis. Together, each of the activities above provides information that can be assembled into an overall picture of the teacher under review.  After meetings between the peer evaluators to review the information collected, any missing information can be sought and unresolved questions can be answered.  It is then incumbent upon the evaluators to discuss the form and substance of a final assessment and to divide the work of writing it.

Overall Recommendation. Typically the written evaluation includes some description of the process, the methods, the data collected, and of course any positive feedback and constructive criticism that is necessary, along with suggested improvements.  This will be the substance of a formative or summative assessment by the peer evaluators, one that may be shared with the relevant administrators and the teacher under review, depending on the process adopted.  If the evaluation is formative, it may accompany a series of suggested improvements for teaching and a plan for instructional or curricular development that could include ongoing mentorship, the use of professional development resources such as the Center for Teaching, and further peer evaluation.  If it is a summative evaluation, the recommendation will be used by departmental and university committees and administrators as the basis for a reappointment, promotion, or tenure decision.

Possible Limitations of Peer Review

Limitations of Peer Observations. While peer review may be a process that allows for a more rigorous evaluation of a teaching portfolio, it is worth noting that peer observations alone are often insufficient data on which to base an entire teacher's assessment.  Peer observations represent only a snapshot of teaching, and thus must be only one component of a teaching portfolio that is subject to peer evaluation, including student evaluations, evidence of student learning, course materials, and self-evaluations, just to name a few.

Bias. Surely, all methods of teaching evaluation risk biases of one form or another.  One common criticism of peer review processes is that they may invite some bias if they involve limited or unprofessional approaches to data collection and analysis.  This may occur for several reasons.  Personal relationships between reviewers and those being reviewed can create either hyper- or hypo-critical approaches to evaluation.  Standards of excellence or their application can be highly subjective and individual teaching styles may vary widely; therefore evaluations can be contentious if standards are not defined in advance through rigorous research and open, collaborative processes.  Power relations in departments or programs also can unduly influence open and thorough evaluation.  Other factors may cause peer evaluator bias as well.  Therefore, to avoid the worst cases of bias, peer review must be established via processes that guarantee the greatest rigor, openness, and transparency.

Collegiality Issues. Under the best of circumstances, peer review can shape a dialogue about teaching that fosters a pedagogical community among educators and can lead to more growth-oriented forms of professional development.  However, when it is implemented in less collaborative and more adversarial forms, or when it involves unavoidable consequences such as promotion or job security, anxieties and frustrations can be triggered for both reviewers and those being reviewed.  Therefore peer review must adhere to the highest standards of transparency, integrity, and care for the sake of those under review.

Time and Effort. Perhaps the most common critique of peer review processes, and the reason they are not more commonly used in the academy, is that they require significant time and effort.  Departmental and campus administrators must define the process, establish standards, train and prepare reviewers, perform peer observations, review portfolios, draft assessments, and have multiple dialogues with those under review.  Each step requires preparation if it is to be fair, transparent, and professional.  Any shortcut may compromise the rigor, care, or goals of the evaluation.  Nevertheless, there are several shortcuts, each with potential costs.

Rely on the expertise of senior colleagues, administrators, and the Center for Teaching. There are typically those on campus who may have sufficient knowledge to assist in defining departmental learning or teaching goals, in determining what data to include in a teaching portfolio, in training peer observers, in drafting assessments, etcetera.  These sources of expertise may be helpful in streamlining the process with little cost to its integrity, as long as their suggestions may be tailored to the needs of the department or program in question.

Use predefined standards for teaching and learning. Rather than spend significant time adjudicating which learning and teaching goals are appropriate, department or program leaders may decide to use existing language in university or departmental missions, course catalogs, accreditation reports, other constituting documents, or the operating principles of the Center for Teaching.  This may grant some efficiency with limited costs to the integrity of the peer review process.  However, vague and imprecise learning goals that sometimes characterize constitutional documents (e.g., "critical thinking") may be of little help in benchmarking a specific set of courses or teaching strategies.  Likewise, departments and programs may have particular teaching challenges that broad standards may not take into consideration.  Both difficulties can leave departments or programs open to unclear standards, unfair or inconsistent judgments, and miscommunications.

Collect data judiciously. One of the more time-consuming tasks of peer review is combing through all facets of a teaching portfolio, particularly if it includes samples of student work.  To save time, some peer review processes rely largely upon peer observation, in addition to student evaluations of teaching, and do not collect teaching portfolios or examples of student work.  Others collect only limited samples of student work, such as grade distributions and examples of student work at A, B, C, and D levels, to evaluate an instructor's assessment and grading strategies.  Other data collection shortcuts may be possible as well.  However, more limited data may allow fewer contextual interpretations of a teaching career, and peer observations alone are just in-class snapshots of instructional performance, not a more encompassing perspective on all phases of teaching.  These may lead a department or program to make less informed and fair judgments.

Use templates for written peer evaluation reports. Final written reports need not be highly expansive analyses, but may represent more of a thorough checklist with brief sections of commentary on challenges and successes that become points of discussion between peer reviewers and the instructor under review.  This form of report can save valuable time, but it also may provide limited feedback to the instructor under review, possibly affording him or her less useful guidance on where to improve his or her teaching.

Only summative evaluation. A department or program may limit peer evaluation to only summative and not formative assessments of teaching.  This would limit opportunities for faculty development, hinder data collection, create more tensions between reviewers and those being evaluated, and thwart the formation of collegial cultures that improve teaching for entire departments and programs. However, many departments and programs have used this shortcut to conduct peer review.

Final Thoughts

Peer review of teaching, when done well, has many benefits in fostering teaching excellence, creating collegial communities of scholar teachers, and more fair and transparent cultures of professional development.  By contrast, the challenges of peer review, while not insignificant, are small by comparison.  Peer review of teaching, as in research, enhances the integrity and innovation of pedagogy and is a practice whose institutionalization is long overdue.

Bibliography

  • Bernstein, Daniel J. 2008. "Peer Review and Evaluation of the Intellectual Work of Teaching." Change. March/April.
  • Bernstein, Daniel J., Jessica Jonson, and Karen Smith. 2000. "An Examination of the Implementation of Peer Review of Teaching." New Directions for Teaching and Learning 83: 73-86.
  • Bernstein, Daniel, A.N. Burnett, A. Goodburn, and P. Savory. 2006. Making Teaching and Learning Visible: Course Portfolios and the Peer Review of Teaching. Anker.
  • Center for Teaching Effectiveness. "Preparing for Peer Observation: A Guidebook." University of Texas, Austin.
  • Chism, Nancy V. 2007. Peer Review of Teaching: A Sourcebook. 2nd Edition. Anker.
  • Glassick, C.E., M.T. Huber, and G.I. Maeroff. 1997. Scholarship Assessed: Evaluation of the Professoriate. Jossey-Bass.
  • Hutchings, Pat. 1995. From Idea to Prototype: The Peer Review of Teaching. Stylus.
  • Hutchings, Pat. 1996. "The Peer Collaboration and Review of Teaching." ACLS Occasional Paper No. 33.
  • Hutchings, Pat. 1996. Making Teaching Community Property: A Menu for Peer Collaboration and Peer Review. Stylus.
  • Hutchings, Pat. 1998. The Course Portfolio. Stylus.
  • Perlman, Baron and Lee I. McCann. 1998. "Peer Review of Teaching: An Overview." Office of Teaching Resources in Psychology, Department of Psychology, Georgia Southern University.
  • Seldin, P. 1997. The Teaching Portfolio. 2nd Edition. Anker.
  • Seldin, P. 1999. Changing Practices in Evaluating Teaching: A Practical Guide to Improved Faculty Performance and Promotion/Tenure Decisions. Jossey-Bass.
  • Shulman, Lee S. 2004. Teaching as Community Property: Essays on Higher Education. Jossey-Bass.

Creative Commons License
This teaching guide is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
