This module covers two chapters that sit at the center of everything you have studied so far. Chapter 7 is about assessment: how you measure what students have learned, how you use those measurements to improve your teaching, and how you communicate results to students, parents, and administrators. Chapter 8 is about classroom questioning: the techniques you use to check understanding, push thinking to higher levels, and create a classroom where students are doing the cognitive work.
If planning (Module 3) is about designing instruction, assessment and questioning are about finding out whether the instruction worked. The two chapters are connected. The questions you ask during a lesson are formative assessments. The tests and projects you assign at the end of a unit are summative assessments. The grading decisions you make shape what students pay attention to and how hard they try. Every piece connects.
You have three assignments in this module, a VoiceThread discussion, and a set of branching scenarios. Start with the reading. Work through the tabs. Come back to this overview page when you are ready to tackle the assignments.
Chapter 7, Section 1. Core concepts, vocabulary, purposes of assessment, areas teachers assess, and links to planning.
Chapter 7, Section 2. Feedback, formative strategies, student motivation, and the formative-summative relationship.
Chapter 7, Sections 3-5. Assessment tools, test construction, item types, performance assessment, rubrics, and grading.
Chapter 8. Research on questioning, four strategies, wait time, prompting, handling responses, and building student questioning skills.
Four branching scenarios where you make assessment and questioning decisions under pressure.
Instructor walkthroughs for Chapters 7 and 8.
Submit all assignments through the Canvas assignment links. This module page is your study guide and content hub. The assignment submission happens in Canvas, not here.
A quick reference. These are the figures behind the ideas in Chapters 7 and 8.
Create two summative assessments for your certification area. Select one from Option A (objective or essay test items) and one from Option B (product-based, performance-based, or inquiry activity). For each assessment, identify the grade level, subject, standard, and provide a description, a visual snapshot, and a grading resource (answer key or rubric). Then write a reflection connecting your design choices to the assessment principles from Chapter 7.
You may complete this assignment in pairs or triads. Each person submits the final product.
Submit in Canvas →

| Criterion | Top (100%) | Mid (~70%) | Low (~35%) |
|---|---|---|---|
| Option A Assessment: grade, subject, standard, description, snapshot, grading resource, references | 40 pts: All components present and fully developed. Assessment aligns with the stated standard. Grading resource (answer key or rubric) matches the assessment. References to textbook or course materials. | 28 pts: Most components present. Minor gaps in alignment between assessment and standard, or grading resource is incomplete. | 14 pts: Multiple components missing. Assessment does not align with the stated standard, or no grading resource provided. |
| Option B Assessment: grade, subject, standard, description, snapshot, grading resource, references | 40 pts: All components present and fully developed. Assessment aligns with the stated standard. Grading resource matches the assessment. References to textbook or course materials. | 28 pts: Most components present. Minor gaps in alignment, or grading resource is incomplete. | 14 pts: Multiple components missing, or assessment does not align with the stated standard. |
| Reflection: guidelines for selecting assessments, personal assessment experiences, impact on teaching career; references to text/resources | 20 pts: Addresses all three reflection prompts with specific connections to Chapter 7 concepts. References course materials. Two or more paragraphs. | 14 pts: Addresses the prompts, but connections to chapter content are vague or one prompt is missing. | 7 pts: Reflection is superficial, missing multiple prompts, or lacks any reference to course materials. |
Step back and reflect on your experiences in SEC 520. Connect your growth to at least one SEC 520 course objective and at least one NC Professional Teaching Standard. The final product is either a written essay (approximately four paragraphs) or a presentation using the platform of your choice (Google Slides, Canva, PowerPoint, Prezi, etc.).
Submit in Canvas →

| Criterion | Top (100%) | Mid (~70%) | Low (~35%) |
|---|---|---|---|
| Instructional Design Resources/Experiences: connects a course objective and an NC Teaching Standard to class experiences; references text/resources | 13 pts: Clearly connects both a course objective and an NC Teaching Standard to specific class experiences. References course materials. One full paragraph. | 9 pts: Connects either the course objective or the teaching standard, but not both. References are present but vague. | 5 pts: Connections are missing or generic. No references to course materials. |
| Favorite Assigned Reading: identifies a reading and connects it to a course objective and an NC Teaching Standard; explains impact on future teaching | 13 pts: Names a specific reading and connects it to both a course objective and a teaching standard. Explains the impact with concrete detail. One full paragraph. | 9 pts: Names a reading, but connections to objectives or standards are incomplete. | 5 pts: Reading is named but no meaningful connections to objectives or standards. |
| Favorite Instructional Design Topic: identifies a topic and connects it to a course objective and an NC Teaching Standard | 12 pts: Names a specific topic with clear connections to both a course objective and a teaching standard. One full paragraph. | 8 pts: Topic is identified but connections are partial or vague. | 4 pts: Topic is named without meaningful connections. |
| Performance Reflection: successes, areas for improvement, and anything additional to share with the instructor | 12 pts: Identifies specific successes and specific areas for improvement. Reflection is honest and detailed. One full paragraph. | 8 pts: Addresses successes or improvement areas, but not both, or descriptions are generic. | 4 pts: Reflection is superficial with no specific examples of growth or areas for improvement. |
Pick a lesson topic in your certification area. Write a questioning plan for one class period that includes specific questions at each of the four strategy levels: convergent, divergent, evaluative, and reflective. For each question, identify the Bloom's level and the strategy type. Explain where you would use wait time 1 versus wait time 2. Then write a scripted prompting sequence showing how you would respond to one hypothetical incorrect student answer.
Your questions must match the content you are teaching. Generic questions that could apply to any subject will not earn full credit. The prompting sequence should follow the model from Chapter 8, Section 8-11.
Submit in Canvas →

| Criterion | Top (100%) | Mid (~70%) | Low (~35%) |
|---|---|---|---|
| Question Set: questions at each strategy level (convergent, divergent, evaluative, reflective) that match the lesson content | 25 pts: Includes questions at all four strategy levels. Each question is content-specific and matches its labeled strategy type. Questions build from lower to higher cognitive levels. | 18 pts: Questions are present for most strategy levels, but one or two are mislabeled, generic, or missing. | 9 pts: Questions cover fewer than three strategy levels, are generic, or do not match the labeled strategy types. |
| Bloom's Level and Strategy Alignment: each question correctly identifies its Bloom's level and strategy type with brief justification | 15 pts: Every question has a correctly identified Bloom's level and strategy type. Brief justification explains the match. | 11 pts: Most questions have Bloom's levels and strategy labels, but some are incorrect or unjustified. | 5 pts: Bloom's levels are missing, incorrect for most items, or no justification is attempted. |
| Wait Time Application: identifies where wait time 1 and wait time 2 apply, with reasoning tied to Chapter 8 | 10 pts: Correctly distinguishes wait time 1 from wait time 2. Identifies specific moments in the question plan where each applies. Reasoning references Chapter 8. | 7 pts: Wait time is discussed, but the distinction between wait time 1 and 2 is unclear or the application is generic. | 4 pts: Wait time is mentioned only in passing or not connected to specific moments in the plan. |
| Prompting Sequence: scripted response to an incorrect answer using prompting techniques from Section 8-11 | 15 pts: Scripted exchange shows 3+ teacher moves that clarify, redirect, and guide the student toward a correct response. Maintains a positive tone. Follows the prompting model from the textbook. | 11 pts: Scripted exchange is present but includes fewer than three teacher moves, or the prompting does not clearly guide toward a correct response. | 5 pts: No scripted exchange, or the response simply tells the student the correct answer without prompting. |
| Content-Standard Alignment: lesson topic connects to a specific standard in the student's certification area | 10 pts: Lesson topic is clearly stated with a specific content standard. All questions connect to the stated topic and standard. | 7 pts: Standard is identified, but questions do not all connect to the stated topic. | 4 pts: No standard identified, or the lesson topic is too vague to evaluate alignment. |
Prompt: Think about a teacher you had in school who was skilled at asking questions. What made their questioning effective? Now connect their approach to the four questioning strategies from Chapter 8 (convergent, divergent, evaluative, reflective). Which strategy or strategies did they use most? How did their questioning connect to the way they assessed student learning (Chapter 7)? Record a 2-3 minute response and reply to at least two peers.
| Criterion | Top (100%) | Mid (~70%) | Low (~35%) |
|---|---|---|---|
| Initial Response: 2-3 minute recording connecting personal experience to Ch 7 and Ch 8 concepts | 30 pts: Describes a specific teacher's questioning approach. Identifies questioning strategies by name and connects them to assessment concepts from Chapter 7. Recording is 2-3 minutes. | 21 pts: Describes questioning, but connections to chapter concepts are vague or only one chapter is addressed. | 11 pts: Response is generic, too short, or does not reference concepts from either chapter. |
| Peer Replies: substantive responses to at least two peers | 20 pts: Replies to two or more peers with substantive comments that extend the discussion or offer a new connection to the chapter content. | 14 pts: Replies to two peers, but responses are brief or do not add new content. | 7 pts: Fewer than two replies, or replies are surface-level agreements without substance. |
Have you read the assignment descriptions and rubrics above? Do you understand what each assignment asks you to do? Write one sentence for each assignment describing the hardest part. If you cannot identify the hard part, re-read the description. Knowing where the challenge lives is the first step in meeting it.
Read the prompt below before you dig into the chapter tabs. Submit your written response in the Exit Ticket: Module 4 assignment in Canvas (25 points). You can return and revise as you work through the chapter.
Section 1 builds the foundation for everything else in this chapter. It defines assessment, distinguishes it from testing and measurement, introduces validity and reliability, and identifies the four purposes of classroom assessment. It also names the four areas teachers assess (knowledge, thinking, skills, attitudes) and connects assessment directly to planning and instruction. If you skip this section, the rest of the chapter will not make sense.
Rick Stiggins reframes the formative-summative split into three uses of assessment, and the third one is what most teachers miss.
Most assignments and tests are assessment OF. Most exit tickets and check-ins are assessment FOR. Assessment AS is rarer because it requires teaching students how to assess themselves. Worth the investment.
"Students can hit any target that they can see and that holds still for them." (Rick Stiggins)

Assessment is a continuous process whose primary purpose is to improve student learning. Teachers observe, gather information, interpret it, and make decisions about whether and how to respond. Every time you watch students work, ask a question, review a quiz, or check an assignment, you are assessing. The process includes formal tools (tests, rubrics, portfolios) and informal ones (observation, questioning, conversation).
The textbook identifies seven reasons teachers assess: (1) provide feedback to students, (2) make informed decisions about students, (3) monitor and document academic performance, (4) aid student motivation by establishing short-term goals, (5) increase retention and transfer by focusing learning, (6) evaluate instructional effectiveness, and (7) establish and maintain a supportive classroom atmosphere.
Before going further, the textbook distinguishes several terms that people often use interchangeably. Assessment is the broadest: it includes any process by which teachers gather information about student learning. A test is one specific type of assessment, usually a set of questions answered during a fixed period. Measurement assigns numbers to assessment results. A norm-referenced standardized test compares each student's score to a norming group. These are related but different processes.
A test is valid if it measures what it is intended to measure. A ruler is valid for measuring length but useless for measuring weight. Validity is relative to purpose: a test can be valid for one purpose but not another. The fundamental question: does this test sample a representative portion of the content being assessed?
A test is reliable if it gives similar results each time it is used under similar conditions. If a group of students could be retested and get approximately the same scores, the test is reliable. Reliability increases with test length: more questions means more information and less uncertainty about each student.
The textbook identifies four purposes for classroom assessment. Each one answers a different question and leads to a different instructional decision.
Determines whether students have the prerequisite knowledge to begin new material. Placement tests help the teacher decide where to start, not whether students can learn.
Identifies specific areas of difficulty: what students know, what they are confused about, and where the gaps are. Diagnostic tests pinpoint the problem so the teacher can address it.
Monitors learning in progress. Provides feedback to both teacher and student during instruction. The primary user of formative assessment data is the teacher, who uses it to adjust instruction in real time.
Evaluates achievement at the end of a unit, course, or period. Summative assessments include end-of-unit tests, final projects, and standardized achievement tests. The result is usually a grade.
Drag each scenario to the correct assessment purpose:
Chapter 4 introduced three domains of learning: cognitive, affective, and psychomotor. Teachers make assessments in each domain. The textbook organizes assessment areas into four categories. Each area requires different assessment methods.
Assessment planning should be an integral part of instructional planning, not a process added on at the conclusion of instruction. When you plan carefully, your objectives, instruction, and assessment all match. The textbook refers to this alignment as the hallmark of backward design: start with the end in mind, then plan how to get there.
Backward design in practice. The objective shapes the instruction; the instruction shapes the assessment. Hover each piece to see how the alignment holds.
"Compare" is the verb. Bloom-Analyze level. The format ("graphic organizer") and the categories ("causes, leaders, outcomes") are named here. Every later decision flows from this sentence.
Each piece of the lesson maps to a piece of the objective: introduction = framework, small-group analysis = practice with one rebellion, gallery walk = comparison preview. No drift, no surprise activities.
Same graphic organizer, same three categories, same comparison move. The rubric weights what the objective named. No student sees a new format on assessment day. That is alignment, not just intent.
Grant Wiggins and Jay McTighe describe backward design as a three-stage process: (1) identify desired results, (2) determine acceptable evidence, and (3) plan learning experiences and instruction. Assessment is stage two: you decide what evidence would prove that students learned before you design the activities. This approach produces stronger alignment between what you teach and what you test.
The textbook warns against using assessment for classroom control or punishment. Using tests to punish students who misbehave, assigning busy work as filler, or using grades as threats can damage motivation and destroy the trust between teacher and student. Punitive testing teaches students that assessment is something done to them, not something that helps them learn. The consequences are serious: students stop trying for excellence because they no longer see the point.
The textbook discusses several contexts that affect how you plan assessments. In flipped classrooms, students read or watch material before class, so assessment shifts toward checking whether students engaged with the material and can apply it. Much of the assessment in flipped classrooms is formative: quick checks at the start of class, application activities during class, and peer assessment during group work.
For students with special needs, the textbook recommends consulting with your school principal and special education co-teacher. In co-teaching models, both teachers share responsibility for assessment, and assessments for students with special needs must align with their IEP goals while still connecting to grade-level standards.
A practical way to plan assessments is to start with the report card you will be expected to prepare. What information does it require? How often? What types of data will you need? Working backward from the reporting requirements helps you plan a calendar of assessments across the grading period. Consider the timing: four times per year is typical for elementary report cards.
Different teaching moments call for different assessment moves. Pick the situation closest to yours.
Section 2 is where assessment shifts from a concept you read about to a strategy you use every day. Formative assessment is a type of classroom assessment devoted to the enhancement of student learning and achievement. It is a process (not a test) that occurs during instruction and is used by both teachers and students. This section defines formative feedback, describes several formative assessment strategies, connects formative assessment to student motivation, and explains the relationship between formative and summative assessment.
Paul Black and Dylan Wiliam published "Inside the Black Box" in 1998 and changed how schools think about assessment. Their meta-analysis showed that formative assessment, used well, produces some of the largest gains in student achievement ever measured. The catch: most teachers do formative assessment in name only.
The four moves that worked, in order of impact: clear learning intentions students can articulate; questioning that exposes thinking, not just answers; descriptive feedback (not grades) that tells students what to do next; and peer and self-assessment that hands the work back to students. The textbook covers all four. Black and Wiliam's research is what made the textbook take them seriously.
"Assessment becomes formative when the evidence is actually used to adapt the teaching work to meet learning needs." (Paul Black and Dylan Wiliam, 1998)

Feedback illustrates the gap between what the student currently knows and understands and what the teacher expects the student to know and understand. In its simplest form, feedback drives two kinds of adjustment: changes to classroom instruction (how the teacher uses feedback) and revisions to learning strategies (how the student uses feedback). In each case, the goal is improving student learning.
For feedback to work, students must ultimately hold the same understanding of the standard as the teacher. Students must be able to assess their own individual work and apply a variety of self-monitoring strategies to revise and enhance their work to meet the standard.
Leahy, Lyon, Thompson, and Wiliam recommend four steps: (1) At the beginning of each unit, help students identify what they need to learn. (2) Teach students to self-assess so they can monitor their own learning. (3) Work with students to set achievement goals so they can spend their learning time productively. (4) Help students develop learning strategies to reach their goals.
"Good effort." "Nice work." "Needs revision." These comments tell students nothing specific. Students cannot use this feedback to improve because they do not know what to change.
"When you write your next report, consider using shorter sentences. Your message is lost with such long sentences." "When solving these equations, use a different line for each step. Then you will be able to monitor the process you have used."
The textbook lists several formative assessment strategies. Each one gives you different data about student learning, and each one is appropriate in different situations.
Questioning is so important that Chapter 8 is devoted to it. There are two types of questions: convergent (one right answer) and divergent (many possible answers). Convergent questions check recall. Divergent questions push students to think, analyze, and evaluate. Teachers who ask mostly convergent questions miss opportunities to develop higher-order thinking. Chapter 8 covers this in depth.
When students assess each other's work, they internalize the criteria. They begin to see their own work through a critical lens. For peer assessment to work, students need clear criteria (a rubric, a checklist, or specific questions to answer about the work). The goal is self-assessment: the ability to judge your own work accurately.
Grading is a task you must understand and do well. This includes developing semester grades, but also grading tests, reports, and projects. One practice with a major impact on student achievement is "Teacher Self-Report Grades": when teachers communicate clear expectations and provide specific feedback through grading, students perform better. Grading should communicate, not just evaluate.
After material is introduced, administer a practice test and then provide the correct solutions to students. A few days later, administer a second practice test, again followed by correct solutions. Research has demonstrated learning and instructional benefits: identification of gaps in knowledge and understanding, better cognitive organization of the content, and instructional feedback to the teacher. Ideally, align the practice test format with the format of the final unit examination.
Providing clear feedback to students about their achievements can increase student motivation to succeed. In particular, clarifying the goal or standard and helping students develop a representation of this standard for themselves helps students take ownership of their success. When students understand what it means to succeed in your classroom, success is no longer a mystery or something held only by the teacher. With shared understanding, students are less likely to blame you or something beyond themselves for not meeting the standard.
These ideas have the greatest potential for helping low-achieving students. In an environment with little feedback, low-achieving students see achievement as a futile guessing game and often stop trying. When students receive formative assessment feedback, they are more likely to try to meet achievement challenges. The connection between formative assessment and student motivation should be at the forefront of any teacher's thinking and planning of classroom assessment.
Summative assessment is a process of "summing up" achievement in some way or conducting a status check on accomplishments at a given point in time. These assessments include end-of-unit or end-of-chapter assessments, end-of-course tests administered by the district, and interim benchmark assessments administered by the state. Formative assessment is designed to provide information to students so they can act to close the gap between where they are and where they need to be relative to the standard.
Purpose: monitor learning, provide feedback, adjust instruction. Audience: teacher and student. Timing: ongoing throughout the lesson or unit. Stakes: low or no stakes. Examples: observation, questioning, exit tickets, peer review, practice tests.
Purpose: evaluate achievement, assign grades, certify mastery. Audience: teacher, student, parents, administration. Timing: end of unit, course, or grading period. Stakes: high (contributes to final grade). Examples: unit tests, final exams, state assessments, portfolios.
Drag each example to the correct assessment type:
Same word in both names. Different purpose, different timing, different consequences for students. Mixing them up is the most common error in assessment design.
Purpose: Improve learning while it is still happening. Adjust instruction.
Timing: During a lesson, unit, or course.
Stakes: Low or none. Used for feedback, not grades.
Audience: Teacher and student. Both use the data to adjust.
Examples: Exit tickets, observation, peer review, quick polls, draft feedback.
Purpose: Measure what students learned by the end. Report a grade.
Timing: End of unit, course, or program.
Stakes: High. Counts toward the final grade.
Audience: Teacher, student, parents, administrators.
Examples: Final exams, term papers, end-of-unit projects, state tests.
Feedback ranks among the highest-impact influences on learning Hattie has measured. Reciprocal teaching, mastery learning, and self-questioning all clear the threshold by a comfortable margin.
The red line marks the 0.40 effect-size threshold. Source: Hattie, Visible Learning (2009 and updates). Values approximate.
Sections 3 through 5 cover the practical side of assessment. Section 3 introduces the major categories of assessment tools: teacher-made assessments, large-scale achievement tests, and student-led conferences. Section 4 is the construction workshop: guidelines for building tests, writing objective items, writing essay items, and assessing performance and products. Section 5 covers grading: the principles, the pitfalls, and how to communicate your system to students and parents. This is the section you will draw on most for your Summative Assessment assignment.
Most classroom assessments involve teacher-made tests, and there are good reasons for this. The teacher has monitored the learning experiences in the class and thus has a much better idea of what needs to be assessed. The teacher is familiar with the students as well as the instruction, which may affect the content and method of assessment.
Three important reminders: First, plan the test as you plan instruction, not after. Knowing what you will assess shapes how you teach. Second, use a variety of methods. Let students demonstrate their skills and understanding in a variety of valid ways: reports, oral and written projects, poetry, videos, music, plays, stories, models, and performances. Third, weave assessment throughout the unit, not just at the end. Students assessed throughout will have a much more complete picture of their understanding.
Richard Stiggins offers four questions to gauge assessment quality: (1) What is the purpose of the assessment? Who will use the results? How? (2) What are the learning targets? Are they clear? Appropriate? (3) Assess how? Built of quality or relevant ingredients? (4) Communicate how? Reported to whom? In what form?
Schools use large-scale achievement tests to assess student performance according to district-wide and statewide curricula, monitor student achievement, and assess student aptitude prior to high school graduation. The primary consideration is congruence: there must be alignment between what is taught and what the test measures.
Questions are written by specialists, reviewed for bias, and field-tested. They are accompanied by extensive technical data on norming, validity, and reliability. Development costs are recovered over large print runs, which keeps cost-per-student low. They provide separate printouts for class records, individual student reports, reports to parents, and many other uses. Scores can be compared to the norming group, and the tests provide normative data on specific skills and objectives.
Many higher-level thinking processes are difficult to assess with a multiple-choice test. Reform efforts have turned away from numerical scores and averages as indicators of success and toward each student's competency in the skills that will be most useful in life. High-stakes tests (those with consequences attached to performance) continue to evolve, and the federal government has added its own high-stakes testing requirements. Teachers often respond by focusing on test preparation and on the specific format in which the tests are constructed, sometimes at the expense of deeper instruction.
In a student-led conference, the student takes major responsibility for discussing and evaluating their current level of achievement relative to the standard. This discussion includes the quality of the student's work, the ways in which the student performed well, and what might be done to enhance their future performance. The student conveys the assessment in written form to the teacher and parents.
Two benefits stand out. First, students learn to take ownership of their learning and are held accountable for it. Second, communication among the three stakeholders in student success (students, parents, and the teacher) is enhanced. Many teachers have known of the power of this technique for some time and have used it successfully for years.
Writing a specific test goes much more smoothly if you follow these six steps:
Objective items are so called because they have a single best or correct answer. There is no (or very little) dispute about the correct response. Objective items come in two types: the selection type, in which a response is chosen from among alternatives given, and the supply type, in which the student supplies a brief response.
True-false, matching, and other selection-type items are the simplest forms of assessment. They are best suited to students who are not test-wise. Keep your true-false items short and unambiguous. Students who guess have a 50% chance of getting a correct response. Three variations: (1) standard T/F, (2) fact vs. opinion format, (3) correction format where students fix false statements.
Matching exercises tend to provide clues to those who are test-wise. Keep your items to approximately eight items and include more options than items to be matched. Ensure there is only one correct answer per match. Use homogeneous content: all items should come from the same category.
Short-answer items require the student to provide a word, phrase, or number. Students are not simply asked to identify a correct choice but rather to retrieve it from memory, a different and arguably more complex process. Science and math teachers are particularly fond of this format because it seems to measure comprehension and problem-solving skills directly.
The multiple-choice item is generally considered the most useful objective test item. It can measure both knowledge and higher-level learning outcomes. Multiple-choice items consist of two parts: a question or problem (the stem) and a list of possible solutions (the alternatives). The correct alternative is the answer. The remaining alternatives are called distractors. Creating effective distractors is one of the most difficult parts of writing multiple-choice items: they should be plausible to the non-knowing student while not confusing the knowing student.
Drag each assessment scenario to the best item type:
The essay item is an excellent way to assess students' higher-thinking processes: comprehension, analysis, and evaluation, as well as skills in organizing and presenting ideas. There are two types of essay items:
Restricted response. The question limits the scope of the response. Students know what to address and approximately how much to write. Example: "Explain two reasons leading to the conflict in which Magellan was killed." The teacher can use specific criteria to evaluate the response.
Extended response. The question allows students to select, organize, and evaluate ideas. Example: "Compare the American Revolution to the US experience in Iraq. Note specifically the concluding phases of both wars." These are harder to score consistently and require a well-designed rubric.
Use holistic scoring for extended responses: assign a single score on a numeric scale (for example, 1 to 5) rather than descriptive words such as "excellent" or "needs improvement." Use analytic scoring for restricted responses: compare each response directly to the scoring rubric and assign points criterion by criterion. A general rule: quantitative scores are more useful than descriptive words. Develop a coding system that conceals students' names; this reduces the tendency to judge a response by the quality of the student's earlier work rather than on its own merits. Read all responses to one question before going on to the next, to reduce the halo effect.
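The analytic approach can be sketched as a simple computation: each rubric criterion carries a point value, and the score is the sum over criteria the response satisfies. A minimal sketch in Python; the criteria and point values below are hypothetical, not from the textbook:

```python
# A minimal sketch of analytic scoring for a restricted-response essay item.
# Criteria and point values are made up for illustration.
rubric = {
    "names two causes of the conflict": 4,
    "explains each cause accurately": 4,
    "organizes ideas clearly": 2,
}

def score_response(criteria_met):
    """Sum the points for each rubric criterion the response satisfies."""
    return sum(points for criterion, points in rubric.items()
               if criterion in criteria_met)

# A response that meets the first and third criteria earns 6 of 10 points:
print(score_response({"names two causes of the conflict",
                      "organizes ideas clearly"}))  # → 6
```

Because every response is scored against the same explicit criteria, two graders (or one grader on two different days) are far more likely to agree.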
Many areas of student achievement are more effectively assessed with a performance-based assessment than with a test question. Language arts classes teach speaking and listening, both of which are assessed most directly by observing student performances. The same is true of science lab procedures, social studies community projects, and reports of observations in health or earth science class. Performance-based items do not have a single best response. Instead, students are required to organize and present the material in their own way within the stated bounds of the task.
One of the first principles of grading is that you cannot have too much data. No matter how many test scores, homework exercises, and class activities you have assessed, you will still feel that you need more information to make summative evaluations. By giving your students many opportunities to show achievement, you provide yourself with more data for a fair professional judgment, and you also provide each student with every possible chance to succeed.
The textbook flags five common grading errors: (1) Using pretest scores in determining grades: such scores should indicate only where to begin instruction. (2) Not adequately informing students of what to expect on a test, which leaves students guessing what is important. (3) Assigning a zero for missing or incomplete work: a zero has a profound effect on an average, and one alternative is to use the median (middle-ranking) score rather than the mean. (4) Using grades for reward or punishment: achievement of learning objectives should be the only consideration in assigning grades. (5) Making grades contingent on improvement: a student who already performs well has little room to show improvement, so mastery of a specified and well-articulated standard should be the main factor.
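The arithmetic behind error (3) is worth seeing once. A quick illustration with hypothetical scores (the numbers are made up, not from the textbook):

```python
# How one zero distorts the average: five solid scores plus a zero
# for a missing assignment. Illustrative numbers only.
from statistics import mean, median

scores = [92, 88, 90, 85, 91, 0]

print(mean(scores))    # → 74.33... : the zero drags a B+ student down to a C
print(median(scores))  # → 89.0     : the median still reflects typical work
```

One missing assignment pulls the mean down roughly fifteen points, while the median barely moves; this is why the textbook offers the median as an alternative indicator.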
Whatever your system, you must explain it clearly, with appropriate handouts and examples, to your students during the first days of class. Students can and should be taught to keep track of their own grades. A letter or note to parents prior to grading that includes a description of the grading system for the class helps them track their child's progress and prevents unnecessary confusion and disagreement at report card time.
Chapter 8 is about the teaching method you will use more than any other: asking questions. Teachers ask between 300 and 400 questions per day. Most of those questions are low-level recall. This chapter teaches you how to ask better questions, how to use different questioning strategies for different cognitive purposes, and how to handle the responses you get. It covers four questioning strategies, wait time, prompting techniques, handling incorrect answers, promoting multiple responses, and developing students' own questioning skills. This is the chapter your Questioning Strategy Blueprint assignment is built on.
Questioning plays a critical role in teaching. Teachers must be knowledgeable in the process of framing questions so they can guide student thought processes in the most skillful and meaningful manner. This means teachers must design questions that will help students attain the specific goals that are the objectives of a particular lesson.
Key findings from the research: questioning tends to be a universal teaching strategy. A broad range of questioning options is open to you. Being systematic in the use and development of questioning tends to improve student learning. By classifying questions according to a particular system, you may determine the cognitive or affective level at which your class is working and make adjustments as needed. Questions should be developed logically and sequentially. Students should be encouraged to ask questions. A written plan with key questions provides lesson structure and direction.
The textbook also emphasizes using statements alongside questions. As an alternative to asking questions, making declarative statements can also stimulate student responses, curiosity, and thinking. There is some evidence that students' verbal responses are of higher quality when they respond to statements rather than to questions alone. For example, a teacher might say, "It really doesn't matter what tense you use when writing." That statement predictably will evoke a wide range of student responses.
The textbook describes four basic questioning strategies. Each one targets a different type of thinking. If you assign particular importance to the different types of questions you ask, then you will need a method for verifying that you are indeed using the desired questioning patterns.
Convergent questions focus on a narrow objective. They encourage student responses that converge, or focus, on a central theme. These questions typically elicit short responses from students and focus on the lower levels of thinking: knowledge or comprehension. They are ideal for checking factual knowledge, building vocabulary, and quick warm-up exercises. Examples: "In what works did Robert Browning use the dramatic monologue as a form for his poems?" "Under what conditions will water boil at less than 100 degrees C?"
Rather than seeking a single focus, a divergent strategy is designed to evoke a wide range of student responses. Divergent questions also elicit longer student responses. They are ideal for building the confidence of students with learning difficulties because divergent questions do not always have right-or-wrong answers. Examples: "What type of social and cultural development might have taken place if Christopher Columbus had landed on Manhattan Island on October 12, 1492?" "What do you think are effective methods for creating a sustainable environment?"
The evaluative strategy builds on the divergent strategy with one added component: evaluation. An evaluative question has a built-in set of evaluative criteria; it might ask why something is good or bad, or why an author's approach is or is not effective. The teacher's role is to help students develop a logical, consistent basis for establishing those criteria. Examples: "Is the world a better or worse place because of computers and the internet?" "What evidence is there that the federal system of interstate highways harmed our city environments?"
The goal of the reflective question is to require your students to develop higher-order thinking: to elicit motives, make inferences, speculate on causes, consider impact, and contemplate outcomes. Rather than asking a simple "why" or "what" question, you are trying to encourage the student to take a position, think through implications, and search for unintended consequences. Five examples: seeking motives, expanding a vision, listing implications, searching for unintended consequences, identifying issues.
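The chapter notes that you need a method for verifying you are actually using the questioning patterns you intend. One low-tech method is to code each question from a recorded lesson and tally the codes. A minimal sketch in Python; the codes and the sample log are hypothetical:

```python
from collections import Counter

# Hypothetical codes (an assumption, not from the textbook):
# C = convergent, D = divergent, E = evaluative, R = reflective.
# A made-up log of questions coded from one recorded lesson.
coded_questions = ["C", "C", "D", "C", "E", "C", "C", "R", "D", "C"]

tally = Counter(coded_questions)
total = len(coded_questions)
for code in ["C", "D", "E", "R"]:
    share = 100 * tally[code] / total
    print(f"{code}: {tally[code]} ({share:.0f}%)")
# → C: 6 (60%), D: 2 (20%), E: 1 (10%), R: 1 (10%)
```

A tally like this makes it obvious when convergent questions dominate a lesson, which is exactly the pattern the research says to watch for.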
Drag each question to the correct questioning strategy:
The basic rule for asking questions is to pose them in three steps: ask the question, pause, and then call on a student. The rationale is psychological: when you ask a question and follow it with a short pause, all students attend to the communication. The pause communicates that any student in the class may be selected to respond, so the attention level of the class remains high.
Wait time 1 is the time between when you ask a question and when you call on a student. It gives students a chance to think about their response, which is especially important for higher-level questions. Students with special needs get time to ponder the question so they can respond appropriately.
Wait time 2 is the pause after the student you have called on has responded. It is equally important: it gives that student additional time to think, and it allows other students to respond as well. If the teacher waits a moment after the initial student response, students will often continue to respond without prompting.
For the teacher: less teacher talking, less repetition of questions, fewer questions per period, more questions with multiple responses, fewer lower-level questions, more application-level questions, less disciplinary action. For the students: longer responses, more student discourse and questions, fewer nonresponding students, more student involvement in lessons, increased complexity of answers and improved reasoning, more responses from slower students, more peer interaction and fewer peer interruptions, less confusion, more confidence, higher achievement.
Mary Budd Rowe spent the early 1970s recording science classrooms with a stopwatch. Average wait time across those classrooms: less than one second between question and call. When she trained teachers to extend it to three seconds, every measurable indicator of student thinking improved. Longer answers, more student-to-student exchange, more risk-taking from students who never spoke before.
The hard part is not knowing about wait time. The hard part is feeling the silence in your own classroom and not filling it. Three seconds feels long. The research is from 1972 and the problem has not changed.
Once you have asked a question and called on a student, the student may not answer the question the way you want. In that case, prompt the student: clarify the question, elicit a fuller response, or elicit additional responses that let you verify whether the student comprehends the material. Always provide positive reinforcement so the student is encouraged to complete an incomplete response or revise an incorrect one.
The textbook provides a model. The teacher prompts the student in a nonthreatening or neutral verbal tone. The episode continues until the student provides all the necessary information for an appropriate closure. Example: "Class, now let us examine the data that we collected regarding our experiment on absorption and radiation. What differences did you observe between the covered and uncovered pans? [pause] Lisa?" / "The water in the covered pan had a temperature of 96 degrees." / "At what point in the experiment was that temperature measured?" / "After the pan had been covered for 10 minutes." / "What was the temperature of the water when you took the first reading?" The teacher keeps going until the student has provided a full, accurate answer.
When a student gives a totally incorrect response, the textbook recommends three steps. First, stay neutral: comments such as "no" or "that is incorrect" should be avoided because they act as negative reinforcers and may reduce the student's desire to participate. Second, a negative response to an incorrect answer carries a high probability of a ripple effect (Kounin 1970): other students' behavior will be negatively affected. Third, move instead to a neutral prompting technique that leads the student toward a correct response.
Teachers typically conduct recitation periods by sequential questioning: they ask one student to respond, then another student to respond, and so on. The textbook recommends using the multiple-response technique instead. You ask a question, pause, and then call on three or four students to respond. You caution students that you will not repeat any student responses, so they must listen carefully.
The multiple-response strategy allows for longer student responses, greater depth in student statements, and greater challenges for all students. It is a logical precursor to student-conducted discussions. Because many students do not yet demonstrate the listening behaviors a discussion requires, multiple-response questions help you condition students to accept more responsibility for listening to one another and to modify their responses based on previous ones.
For nonvolunteers, the textbook offers several strategies: maintain a positive attitude toward nonvolunteering students, ask nonvolunteers questions that they will be likely to answer successfully, give generous positive feedback to encourage future responding, attempt to determine why each nonvolunteer remains shy, occasionally make a game out of questioning (place each student's name on a card and draw cards at random), and prompt promptly. There is nothing wrong with giving each nonvolunteering student a card with a question on it the day before the intended oral recitation period to help them prepare.
Frank Lyman (1981) described this method as having three steps: (1) Think: you ask a question to the whole class and allow them a short time to think about the response. (2) Pair: designate partners (desk mates, buddies) to pair up and discuss the best answers or even the most novel possibilities. In some cases you could even have them write their team responses. (3) Share: you now call on the pairs to share their thinking with the class. Responses can be recorded on the chalkboard. This method is another means by which ELL students or those with special needs can participate meaningfully in the recitation session.
After conducting extensive research, Cynthia T. Richetti and Benjamin B. Tregoe (2001) identified five reasons why students should develop their own questions: it increases motivation to learn, improves comprehension and retention, encourages creativity and innovation, teaches how to think and learn, and provides a basis for problem solving and decision making.
One technique: play a game of Twenty Questions. In Twenty Questions, participants ask questions to identify something. The teacher thinks of some problem, concept, place, or historical figure, and students attempt to discover it through questioning ("Does it have a definite answer?" "Is it a place?"). The teacher can respond with only yes or no answers. Initially you will conduct the session, but as students master the technique, you can let them conduct the entire session. Another technique: have students question the author of the texts they have read for class ("What is the author trying to say?" "What did the author say to make you think that?" "What do you think the author means?").
The textbook warns about teacher idiosyncrasies that interfere with good questioning: repeating the question, repeating students' responses, answering the question yourself, not allowing a student to complete a long response, and not attending to the responding student (looking away while they talk). These behaviors are easy to develop and hard to break. Have a colleague observe your questioning or record yourself to catch these patterns.
Each question type does a different job. Knowing which job you need is the first step in asking the right question.
Convergent questions
Number of correct answers: One.
Cognitive level: Recall, comprehension, basic application.
Best for: Checking understanding before moving on.
Example: "What is the capital of France?"
Divergent questions
Number of correct answers: Many.
Cognitive level: Analysis, synthesis, creative thinking.
Best for: Generating ideas, exploring possibilities.
Example: "How many ways could we solve this?"
Evaluative questions
Number of correct answers: Defensible answers, judged against criteria.
Cognitive level: Evaluation, judgment.
Best for: Making and defending choices.
Example: "Which solution best fits these constraints, and why?"
Reflective questions
Number of correct answers: The student's own.
Cognitive level: Metacognition.
Best for: Self-awareness, building learning habits.
Example: "How did you arrive at that answer?"
Not all questioning is equal. Discussion and peer-driven questioning produce stronger effects than the typical teacher-led questioning that fills most classrooms. Higher-order questions on their own do not clear the threshold; they need a deliberate questioning strategy behind them.
The red line marks the 0.40 effect-size threshold. Source: Hattie, Visible Learning (2009 and updates). Values are approximate.
Reading about assessment and questioning is one level of understanding. Making assessment and questioning decisions when students are in front of you is another. These four scenarios put you in classrooms where something has gone wrong. Your choices determine whether the teacher recovers or digs a deeper hole.
Each scenario covers content from Chapters 7 and 8: formative feedback, test construction, validity, questioning strategies, wait time, and prompting. Work through all four. When you make a wrong choice, read the feedback. The explanation is where the learning happens.
Look at the paths you took. For each wrong turn, go back to the relevant chapter tab and find the section that covers the concept. Write one sentence explaining what you misunderstood and what you understand now. This is the same formative feedback process described in Section 7-7: identify the gap, then close it.
These instructor videos walk through each chapter's key concepts. Copies of the PowerPoint slides are in the Modules section on Canvas. As you watch, write one sentence connecting something in the video to something from your reading. A connection, not a summary. You will use these in your VoiceThread discussion.
Click the card to flip. Mark each one "Got it" or "Review again" to see what to study next.
Match each term to its definition. Click two cards to flip them. Matching pairs stay open.
Match each assessment idea to its meaning.
Match each question type or wait time to its purpose.
Optional. Pick one if you want to push past the textbook on assessment design or questioning practice.
The book that turned formative assessment from a buzzword into classroom practice. Five strategies, each with research and examples. Pairs with Chapter 7, Section 2.
Right Question Institute's protocol for teaching students to generate their own questions. Flips Chapter 8's framework: instead of you asking better questions, students learn to ask them. Free protocol at rightquestion.org.
The National Assessment of Educational Progress publishes its actual test items online. See how professional item writers handle stems, distractors, and scoring rubrics. A useful reference for the Summative Assessment assignment.