SEC 520: Instructional Design, Technology, and Leadership • Watson College of Education, UNCW

Module FOUR:
Assessment & Questioning

2 Chapters • 4+ Videos • 3 Assignments • 1 Discussion

At a Glance

📹 Chapter Walkthroughs Available

The Videos tab has instructor walkthroughs for Chapters 7 and 8.

This module covers two chapters that sit at the center of everything you have studied so far. Chapter 7 is about assessment: how you measure what students have learned, how you use those measurements to improve your teaching, and how you communicate results to students, parents, and administrators. Chapter 8 is about classroom questioning: the techniques you use to check understanding, push thinking to higher levels, and create a classroom where students are doing the cognitive work.

If planning (Module 3) is about designing instruction, assessment and questioning are about finding out whether the instruction worked. The two chapters are connected. The questions you ask during a lesson are formative assessments. The tests and projects you assign at the end of a unit are summative assessments. The grading decisions you make shape what students pay attention to and how hard they try. Every piece connects.

You have three assignments in this module, a VoiceThread discussion, and a set of branching scenarios. Start with the reading. Work through the tabs. Come back to this overview page when you are ready to tackle the assignments.

What's in Each Tab

Assessment Foundations

Chapter 7, Section 1. Core concepts, vocabulary, purposes of assessment, areas teachers assess, and links to planning.

Formative Assessment

Chapter 7, Section 2. Feedback, formative strategies, student motivation, and the formative-summative relationship.

Building Assessments

Chapter 7, Sections 3-5. Assessment tools, test construction, item types, performance assessment, rubrics, and grading.

Questioning

Chapter 8. Research on questioning, four strategies, wait time, prompting, handling responses, and building student questioning skills.

Apply It

Four branching scenarios where you make assessment and questioning decisions under pressure.

Videos

Instructor walkthroughs for Chapters 7 and 8.

Canvas Tip

Submit all assignments through the Canvas assignment links. This module page is your study guide and content hub. The assignment submission happens in Canvas, not here.

Guiding Questions

1. How do you coordinate planning, instruction, and assessment so each one informs the others?
2. What is the difference between formative and summative assessment, and when do you use each?
3. How do you construct test items and performance assessments that measure what you actually taught?
4. Why is questioning a teaching strategy, and how do the four questioning types (convergent, divergent, evaluative, reflective) target different levels of thinking?
5. What does wait time do, and what happens when you skip it?
6. How do you handle incorrect student responses without shutting down participation?

After completing this module, you will be able to:

📚 Describe assessment as a continuous process and identify its four classroom purposes: placement, diagnostic, formative, and summative. (InTASC #6)
📚 Explain the concepts of validity and reliability and apply them when evaluating assessment instruments. (InTASC #6)
📚 Design and use formative assessment strategies to monitor learning and provide feedback. (InTASC #6)
📚 Construct objective test items, essay items, and performance assessments following established guidelines. (InTASC #6)
📚 Apply the principles of grading to communicate student achievement accurately. (InTASC #6)
📚 Use convergent, divergent, evaluative, and reflective questioning strategies at appropriate cognitive levels. (InTASC #4, #8)
📚 Demonstrate wait time, prompting techniques, and strategies for handling incorrect responses. (InTASC #8)
📚 Develop strategies to encourage nonvolunteers and build students' own questioning skills. (InTASC #8)

Required Readings

📖 Chapter 7: Assessment (Orlich et al., Sections 7-1 through 7-19)
Read for: the four purposes of classroom assessment (Section 7-3), formative feedback strategies (Section 7-8), test construction guidelines (Section 7-14), and the principles of grading (Section 7-18). These sections are the backbone of your assignments.

📖 Chapter 8: The Process of Classroom Questioning (Orlich et al., Sections 8-1 through 8-17)
Read for: the four questioning strategies: convergent (8-5), divergent (8-6), evaluative (8-7), and reflective (8-8); also wait time (8-10), prompting (8-11), and handling incorrect responses (8-12). Your Questioning Strategy Blueprint depends on these sections.

🌐 Creating and Using Rubrics (IUPUI Center for Teaching and Learning)
Read for: rubrics are both assessment tools and learning tools. This resource walks through how to build one that does both jobs. Relevant for the Summative Assessment assignment.

Key Theorists


A quick reference. These are the figures behind the ideas in Chapters 7 and 8.

Black & Wiliam: Inside the Black Box (1998). The research that put formative assessment on the map.

John Hattie (1947–): Feedback ranks #10 of 252 influences on student achievement (effect size 0.73).

Benjamin Bloom (1913–1999): Taxonomy applied to question stems. Each cognitive level produces a different kind of question.

Norman Webb: Depth of Knowledge. Four levels that map cleanly to question complexity beyond recall.

Robert Marzano (1946–): Levels of knowledge and instructional strategies. Bridges assessment design and teaching practice.

Mary Budd Rowe (1925–1996): Wait time research (1972). Three seconds of silence after a question doubles response quality.

W. James Popham (1930–): Transformative Assessment. Assessment that drives instructional decisions, not just grades.

Rick Stiggins (1941–): Assessment FOR Learning vs OF Learning. Reframes how teachers think about testing.

Assignments

📝 Assignment 1: Summative Assessment

Create two summative assessments for your certification area. Select one from Option A (objective or essay test items) and one from Option B (product-based, performance-based, or inquiry activity). For each assessment, identify the grade level, subject, standard, and provide a description, a visual snapshot, and a grading resource (answer key or rubric). Then write a reflection connecting your design choices to the assessment principles from Chapter 7.

You may complete this assignment in pairs or triads. Each person submits the final product.

Submit in Canvas →
Rubric: Summative Assessment Assignment (100 points)

Option A Assessment (40 points)
Grade, subject, standard, description, snapshot, grading resource, references.
Top (40): All components present and fully developed. Assessment aligns with the stated standard. Grading resource (answer key or rubric) matches the assessment. References to textbook or course materials.
Mid (28): Most components present. Minor gaps in alignment between assessment and standard, or grading resource is incomplete.
Low (14): Multiple components missing. Assessment does not align with stated standard, or no grading resource provided.

Option B Assessment (40 points)
Grade, subject, standard, description, snapshot, grading resource, references.
Top (40): All components present and fully developed. Assessment aligns with the stated standard. Grading resource matches the assessment. References to textbook or course materials.
Mid (28): Most components present. Minor gaps in alignment or grading resource is incomplete.
Low (14): Multiple components missing or assessment does not align with stated standard.

Reflection (20 points)
Guidelines for selecting assessments, personal assessment experiences, impact on teaching career. References to text/resources.
Top (20): Addresses all three reflection prompts with specific connections to Chapter 7 concepts. References course materials. Two or more paragraphs.
Mid (14): Addresses the prompts but connections to chapter content are vague or one prompt is missing.
Low (7): Reflection is superficial, missing multiple prompts, or lacks any reference to course materials.

Total: 100 points
📝 Assignment 2: Reflective Practitioner

Step back and reflect on your experiences in SEC 520. Connect your growth to at least one SEC 520 course objective and at least one NC Professional Teaching Standard. The final product is either a written essay (approximately four paragraphs) or a presentation using the platform of your choice (Google Slides, Canva, PowerPoint, Prezi, etc.).

Submit in Canvas →
Rubric: Reflective Practitioner Activity (50 points)

Instructional Design Resources/Experiences (13 points)
Connects course objective and NC Teaching Standard to class experiences. References text/resources.
Top (13): Clearly connects both a course objective and an NC Teaching Standard to specific class experiences. References course materials. One full paragraph.
Mid (9): Connects either the course objective or the teaching standard, but not both. References are present but vague.
Low (5): Connections are missing or generic. No references to course materials.

Favorite Assigned Reading (13 points)
Identifies a reading and connects it to a course objective and NC Teaching Standard. Explains impact on future teaching.
Top (13): Names a specific reading and connects it to both a course objective and a teaching standard. Explains the impact with concrete detail. One full paragraph.
Mid (9): Names a reading but connections to objectives or standards are incomplete.
Low (5): Reading is named but no meaningful connections to objectives or standards.

Favorite Instructional Design Topic (12 points)
Identifies a topic and connects it to a course objective and NC Teaching Standard.
Top (12): Names a specific topic with clear connections to both a course objective and a teaching standard. One full paragraph.
Mid (8): Topic is identified but connections are partial or vague.
Low (4): Topic is named without meaningful connections.

Performance Reflection (12 points)
Successes, areas for improvement, and anything additional to share with the instructor.
Top (12): Identifies specific successes and specific areas for improvement. Reflection is honest and detailed. One full paragraph.
Mid (8): Addresses successes or improvement areas, but not both, or descriptions are generic.
Low (4): Reflection is superficial with no specific examples of growth or areas for improvement.

Total: 50 points
📝 Assignment 3: Questioning Strategy Blueprint

Pick a lesson topic in your certification area. Write a questioning plan for one class period that includes specific questions at each of the four strategy levels: convergent, divergent, evaluative, and reflective. For each question, identify the Bloom's level and the strategy type. Explain where you would use wait time 1 versus wait time 2. Then write a scripted prompting sequence showing how you would respond to one hypothetical incorrect student answer.

Your questions must match the content you are teaching. Generic questions that could apply to any subject will not earn full credit. The prompting sequence should follow the model from Chapter 8, Section 8-11.

Submit in Canvas →
Rubric: Questioning Strategy Blueprint (75 points)

Question Set (25 points)
Questions at each strategy level (convergent, divergent, evaluative, reflective) that match the lesson content.
Top (25): Includes questions at all four strategy levels. Each question is content-specific and matches its labeled strategy type. Questions build from lower to higher cognitive levels.
Mid (18): Questions are present for most strategy levels but one or two are mislabeled, generic, or missing.
Low (9): Questions cover fewer than three strategy levels, are generic, or do not match the labeled strategy types.

Bloom's Level and Strategy Alignment (15 points)
Each question correctly identifies its Bloom's level and strategy type with brief justification.
Top (15): Every question has a correctly identified Bloom's level and strategy type. Brief justification explains the match.
Mid (11): Most questions have Bloom's levels and strategy labels, but some are incorrect or unjustified.
Low (5): Bloom's levels are missing, incorrect for most items, or no justification is attempted.

Wait Time Application (10 points)
Identifies where wait time 1 and wait time 2 apply, with reasoning tied to Chapter 8.
Top (10): Correctly distinguishes wait time 1 from wait time 2. Identifies specific moments in the question plan where each applies. Reasoning references Chapter 8.
Mid (7): Wait time is discussed but the distinction between wait time 1 and 2 is unclear or application is generic.
Low (4): Wait time is mentioned only in passing or not connected to specific moments in the plan.

Prompting Sequence (15 points)
Scripted response to an incorrect answer using prompting techniques from Section 8-11.
Top (15): Scripted exchange shows three or more teacher moves that clarify, redirect, and guide the student toward a correct response. Maintains a positive tone. Follows the prompting model from the textbook.
Mid (11): Scripted exchange is present but includes fewer than three teacher moves, or the prompting does not clearly guide toward a correct response.
Low (5): No scripted exchange, or the response is simply telling the student the correct answer without prompting.

Content-Standard Alignment (10 points)
Lesson topic connects to a specific standard in the student's certification area.
Top (10): Lesson topic is clearly stated with a specific content standard. All questions connect to the stated topic and standard.
Mid (7): Standard is identified but questions do not all connect to the stated topic.
Low (4): No standard identified, or the lesson topic is too vague to evaluate alignment.

Total: 75 points
💬 VoiceThread Discussion

Prompt: Think about a teacher you had in school who was skilled at asking questions. What made their questioning effective? Now connect their approach to the four questioning strategies from Chapter 8 (convergent, divergent, evaluative, reflective). Which strategy or strategies did they use most? How did their questioning connect to the way they assessed student learning (Chapter 7)? Record a 2-3 minute response and reply to at least two peers.

Rubric: VoiceThread Discussion (50 points)

Initial Response (30 points)
2-3 minute recording connecting personal experience to Chapter 7 and Chapter 8 concepts.
Top (30): Describes a specific teacher's questioning approach. Identifies questioning strategies by name and connects them to assessment concepts from Chapter 7. Recording is 2-3 minutes.
Mid (21): Describes questioning but connections to chapter concepts are vague or only one chapter is addressed.
Low (11): Response is generic, too short, or does not reference concepts from either chapter.

Peer Replies (20 points)
Substantive responses to at least two peers.
Top (20): Replies to two or more peers with substantive comments that extend the discussion or offer a new connection to the chapter content.
Mid (14): Replies to two peers but responses are brief or do not add new content.
Low (7): Fewer than two replies, or replies are surface-level agreements without substance.

Total: 50 points
Total Module Points: 275
Summative Assessment (100) + Reflective Practitioner (50) + Questioning Blueprint (75) + VoiceThread (50)
Before You Move On

Have you read the assignment descriptions and rubrics above? Do you understand what each assignment asks you to do? Write one sentence for each assignment describing the hardest part. If you cannot identify the hard part, re-read the description. Knowing where the challenge lives is the first step in meeting it.

Exit Ticket: What Do You Already Know?

Read the prompt below before you dig into the chapter tabs. Submit your written response in the Exit Ticket: Module 4 assignment in Canvas (25 points). You can return and revise as you work through the chapter.

Think about the last test you took as a student. Was it a good test? What made it good or bad? Could you tell what the teacher was trying to measure? Now flip the perspective: if you had designed that test, what would you change? Keep your answer in mind as you work through Chapter 7.
Submit in Canvas →
Chapter 7 • Section 1

Basic Contexts and Concepts

Section 1 builds the foundation for everything else in this chapter. It defines assessment, distinguishes it from testing and measurement, introduces validity and reliability, and identifies the four purposes of classroom assessment. It also names the four areas teachers assess (knowledge, thinking, skills, attitudes) and connects assessment directly to planning and instruction. If you skip this section, the rest of the chapter will not make sense.

Students can hit any target that they can see and that holds still for them.

Rick Stiggins
1 Assessment as a Continuous Process (Section 7-1)
What assessment is and why it never stops.

Assessment is a continuous process whose primary purpose is to improve student learning. Teachers observe, gather information, interpret it, and make decisions about whether and how to respond. Every time you watch students work, ask a question, review a quiz, or check an assignment, you are assessing. The process includes formal tools (tests, rubrics, portfolios) and informal ones (observation, questioning, conversation).

Seven Reasons for Classroom Assessment

The textbook identifies seven: (1) provide feedback to students, (2) make informed decisions about students, (3) monitor and document academic performance, (4) aid student motivation by establishing short-term goals, (5) increase retention and transfer by focusing learning, (6) evaluate instructional effectiveness, and (7) establish and maintain a supportive classroom atmosphere.

✍ Check Your Understanding
A teacher gives a quiz every Friday. She records the scores in her gradebook but never looks at which questions students missed. She never changes her instruction based on the results. Is she using assessment as a continuous process? Why or why not?
Answer: She is testing, but she is not assessing in the way the textbook defines it. Assessment is a continuous process that includes gathering information, interpreting it, and making decisions. She gathers (quiz scores) but skips the interpreting and deciding steps. A teacher using assessment as a continuous process would look at which questions students missed, figure out what that tells her about their understanding, and adjust her instruction for the following week. The quiz is a tool. The assessment is what she does with the information the tool produces.
2 Technical and Professional Vocabulary (Section 7-2)
Assessment, test, measurement, validity, and reliability.

Before going further, the textbook distinguishes several terms that people often use interchangeably. Assessment is the broadest: it includes any process by which teachers gather information about student learning. A test is one specific type of assessment, usually a set of questions answered during a fixed period. Measurement assigns numbers to assessment results. A norm-referenced standardized test compares each student's score to a norming group. These are related but different processes.

Validity Does the test measure what it claims to?

A test is valid if it measures what it is intended to measure. A ruler is valid for measuring length but useless for measuring weight. Validity is relative to purpose: a test can be valid for one purpose but not another. The fundamental question: does this test sample a representative portion of the content being assessed?

Reliability Does it give consistent results?

A test is reliable if it gives similar results each time it is used under similar conditions. If a group of students could be retested and get approximately the same scores, the test is reliable. Reliability increases with test length: more questions means more information and less uncertainty about each student.

✍ Check Your Understanding
A math teacher creates a test on fractions. Half the questions require students to read long word problems. Students with weak reading skills score low even though they understand fractions. Is this test valid? Is it reliable? Explain.
Answer: The test has a validity problem. It claims to measure fraction knowledge, but it is also measuring reading ability. Students who understand fractions but struggle with reading will score low, which means the test is not measuring what it claims to measure. Reliability is a separate question: the test might give consistent results (reliable) while still measuring the wrong thing (not valid). A test can be reliable without being valid. It cannot be valid without being reliable.
3 Purposes of Classroom Assessment (Section 7-3)
Four purposes, four different decisions.

The textbook identifies four purposes for classroom assessment. Each one answers a different question and leads to a different instructional decision.

Placement Where do I begin instruction?

Determines whether students have the prerequisite knowledge to begin new material. Placement tests help the teacher decide where to start, not whether students can learn.

Diagnostic What specific problems exist?

Identifies specific areas of difficulty: what students know, what they are confused about, and where the gaps are. Diagnostic tests pinpoint the problem so the teacher can address it.

Formative Is learning happening right now?

Monitors learning in progress. Provides feedback to both teacher and student during instruction. The primary user of formative assessment data is the teacher, who uses it to adjust instruction in real time.

Summative What did students learn overall?

Evaluates achievement at the end of a unit, course, or period. Summative assessments include end-of-unit tests, final projects, and standardized achievement tests. The result is usually a grade.

Drag each scenario to the correct assessment purpose:

Scenarios
A teacher gives a pretest on the first day of a fractions unit to see what students already know.
A teacher circulates during group work and asks probing questions to check understanding.
A reading specialist administers a test to find out why a student struggles with comprehension.
Students take a chapter test on Friday covering everything taught that week.
Purposes
Placement
Diagnostic
Formative
Summative
4 Areas Teachers Assess (Section 7-4)
Knowledge, thinking, skills, and attitudes.

Chapter 4 introduced three domains of learning: cognitive, affective, and psychomotor. Teachers make assessments in each domain. The textbook organizes assessment areas into four categories. Each area requires different assessment methods.

Knowledge & Conceptual Understanding: How students demonstrate understanding. Identify objectives before considering assessment methods. If the objective is recall, test for recall. If the objective is conceptual understanding, have students explain the concept in their own words or create new examples.
Thinking: A domain in which students can improve their performance. It includes problem solving, analysis, and evaluation, and it can be assessed even with multiple-choice items if those items require application rather than recall. Ask: what indicators will I look for to verify that students are thinking?
Skills: Physical, learning, social, thinking, math, problem solving. Various tools can assess them: paper-and-pencil tests for math, demonstrations for physical skills, portfolios for art, checklists for a music class.
Attitudes: Assessing attitudes is useful for building group spirit and interdependence. Attitude inventories, anecdotal records, and checklists provide data without compromising confidentiality or privacy rights.
✍ Check Your Understanding
A PE teacher wants to assess whether students can dribble a basketball with control. She gives them a written multiple-choice test about dribbling rules. What is wrong with this assessment, and what should she do instead?
Answer: The objective is a skill: dribbling with control. The assessment measures knowledge (rules about dribbling). The assessment method does not match the assessment area. Knowing the rules for dribbling does not prove a student can dribble. The teacher should use a performance assessment: observe students dribbling through a course, and use a checklist or rating scale to evaluate ball control, speed, and form. The assessment must match what you are trying to measure.
5 Direct Links to Planning & Instruction (Section 7-5)
Assessment is not the last step. It is woven through the whole process.

Assessment planning should be an integral part of instructional planning, not a process added on at the conclusion of instruction. When you plan carefully, your objectives, instruction, and assessment all match. The textbook refers to this alignment as the hallmark of backward design: start with the end in mind, then plan how to get there.

Worked Example

One Lesson, Aligned End to End

Backward design in practice. The objective shapes the instruction; the instruction shapes the assessment.

The Lesson
Objective: Students will compare three Civil War rebellions in a graphic organizer that includes causes, leaders, and outcomes.
Instruction: Class intro to the causes/leaders/outcomes framework, small-group analysis (one rebellion per group), gallery walk to compare across groups.
Assessment: Individual graphic organizer; rubric weights cause, leader, and outcome accuracy equally across all three rebellions.
1
Objective drives everything

"Compare" is the verb, pitched at Bloom's Analyze level. The format ("graphic organizer") and the categories ("causes, leaders, outcomes") are named here. Every later decision flows from this sentence.

2
Instruction matches the objective

Each piece of the lesson maps to a piece of the objective: introduction = framework, small-group analysis = practice with one rebellion, gallery walk = comparison preview. No drift, no surprise activities.

3
Assessment uses the same format students practiced

Same graphic organizer, same three categories, same comparison move. The rubric weights what the objective named. No student sees a new format on assessment day. That is alignment, not just intent.

Backward Design

Grant Wiggins and Jay McTighe describe backward design as a three-stage process: (1) identify desired results, (2) determine acceptable evidence, and (3) plan learning experiences and instruction. Assessment is stage two: you decide what evidence would prove that students learned before you design the activities. This approach produces stronger alignment between what you teach and what you test.

Challenges to Proper Assessment (Section 7-6)

The textbook warns against using assessment for classroom control or punishment. Using tests to punish students who misbehave, assigning busy work as filler, or using grades as threats can damage motivation and destroy the trust between teacher and student. Punitive testing teaches students that assessment is something done to them, not something that helps them learn. The consequences are serious: students stop trying for excellence because they no longer see the point.

✍ Check Your Understanding
A teacher assigns a pop quiz because students were talking during a lesson. She tells them it will count toward their grade. What is wrong with this approach, and what would a better response look like?
Answer: The quiz is being used as punishment, not assessment. It does not measure what students learned because it was not planned as part of instruction. Students will associate quizzes with punishment, which undermines the purpose of assessment. A better response: address the talking as a classroom management issue (which it is), then use a planned formative assessment to check whether students understood the lesson content. Separate behavior management from assessment. They serve different purposes and should never be combined.
6 Assessment in Context (Section 7-5, continued)
Flipped classrooms, co-teaching, and special needs considerations.

The textbook discusses several contexts that affect how you plan assessments. In flipped classrooms, students read or watch material before class, so assessment shifts toward checking whether students engaged with the material and can apply it. Much of the assessment in flipped classrooms is formative: quick checks at the start of class, application activities during class, and peer assessment during group work.

For students with special needs, the textbook recommends consulting with your school principal and special education co-teacher. In co-teaching models, both teachers share responsibility for assessment, and assessments for students with special needs must align with their IEP goals while still connecting to grade-level standards.

Begin with Report Cards

A practical way to plan assessments is to start with the report card you will be expected to prepare. What information does it require? How often? What types of data will you need? Working backward from the reporting requirements helps you plan a calendar of assessments across the grading period. Consider the timing: four times per year is typical for elementary report cards.

Which Assessment Approach Fits?

Different teaching moments call for different assessment moves. Pick the situation closest to yours:

I need to find out what students already know before I start a unit.
I want to check understanding during a lesson and adjust before the unit ends.
I need to assign a grade based on what students learned in the unit.
Chapter 7 • Section 2

Formative Classroom Assessment

Section 2 is where assessment shifts from a concept you read about to a strategy you use every day. Formative assessment is a type of classroom assessment devoted to the enhancement of student learning and achievement. It is a process (not a test) that occurs during instruction and is used by both teachers and students. This section defines formative feedback, describes several formative assessment strategies, connects formative assessment to student motivation, and explains the relationship between formative and summative assessment.

Assessment becomes formative when the evidence is actually used to adapt the teaching work to meet learning needs.

Paul Black and Dylan Wiliam (1998)
1 Formative Feedback (Section 7-7)
The connection between assessment and feedback.

Feedback makes visible the gap between what the student currently knows and understands and what the teacher expects for that knowledge and understanding. Feedback drives two kinds of adjustment: adjustments to classroom instruction (how the teacher uses feedback) and revisions to learning strategies (how the student uses feedback). In each case, the goal is improving student learning.

For feedback to work, students must ultimately hold the same understanding of the standard as the teacher. Students must be able to assess their own individual work and apply a variety of self-monitoring strategies to revise and enhance their work to meet the standard.

Four Steps to Effective Formative Assessment Feedback

Leahy, Lyon, Thompson, and Wiliam recommend four steps: (1) At the beginning of each unit, help students identify what they need to learn. (2) Teach students to self-assess so they can monitor their own learning. (3) Work with students to set achievement goals so they can spend their learning time productively. (4) Help students develop learning strategies to reach their goals.

Insufficient Feedback Vague: does not help students improve

"Good effort." "Nice work." "Needs revision." These comments tell the student nothing specific. They cannot use this feedback to improve because they do not know what to change.

Useful Feedback Specific: tells the student what to do next

"When you write your next report, consider using shorter sentences. Your message is lost with such long sentences." "When solving these equations, use a different line for each step. Then you will be able to monitor the process you have used."

✍ Check Your Understanding A student turns in an essay. The teacher writes "B+" at the top and "Good job" at the bottom. Using the textbook's definition, is this formative feedback? What would you add or change? A grade and a vague comment are not formative feedback. Formative feedback identifies the gap between where the student is and where they need to be, then gives specific guidance for closing that gap. The teacher could write: "Your thesis is clear and your evidence in paragraph 2 supports it well. In paragraph 3, the connection between your evidence and your argument breaks down. Try restating your thesis before introducing the new source so the reader can follow your reasoning." That tells the student what is working, what is not, and what to do about it.
2 Formative Assessment Strategies (Section 7-8)
Strategies you can use in the classroom to check understanding and adjust instruction.

The textbook lists several formative assessment strategies. Each one gives you different data about student learning, and each one is appropriate in different situations.

Questioning is so important that Chapter 8 is devoted to it. There are two types of questions: convergent (one right answer) and divergent (many possible answers). Convergent questions check recall. Divergent questions push students to think, analyze, and evaluate. Teachers who ask mostly convergent questions miss opportunities to develop higher-order thinking. Chapter 8 covers this in depth.

When students assess each other's work, they internalize the criteria. They begin to see their own work through a critical lens. For peer assessment to work, students need clear criteria (a rubric, a checklist, or specific questions to answer about the work). The goal is self-assessment: the ability to judge your own work accurately.

Grading is a task you must understand and do well. This includes developing semester grades as well as grading tests, reports, and projects. One practice with a major contribution to student achievement is the use of "Teacher Self-Report Grades": when teachers communicate clear expectations and provide specific feedback through grading, students perform better. Grading should communicate, not just evaluate.

After material is introduced to students, a practice test of the material is administered. Correct solutions are provided to students after the practice test is given. A few days later, another practice test is administered with correct solutions provided again. Research has demonstrated learning and instructional benefits including identification of gaps in knowledge and understanding, better cognitive organization of the content, and instructional feedback to the teacher. It is optimal to align the test format with the format used on the final unit examination.

✍ Check Your Understanding A teacher gives students a practice test on Tuesday. She collects the papers, records the scores, but does not return them or discuss the answers. On Friday, students take the graded test. Has the teacher used the practice test as a formative assessment strategy? What did she miss? She used a practice test, but she missed the formative purpose. The power of practice tests is in the feedback loop: students see the correct solutions, identify their errors, and adjust their studying before the graded test. By collecting the papers without returning them or discussing the answers, she broke the feedback loop. Students practiced but did not learn from the practice. The fix: return the practice tests with correct answers, discuss the most common errors as a class, and give students time to study the areas where they struggled.
3 Formative Assessment and Student Motivation (Section 7-9)
How assessment affects whether students keep trying.

Providing clear feedback to students about their achievements can increase student motivation to succeed. In particular, clarifying the goal or standard and helping students develop a representation of this standard for themselves helps students take ownership of their success. When students understand what it means to succeed in your classroom, success is no longer a mystery or something held only by the teacher. With shared understanding, students are less likely to blame you or something beyond themselves for not meeting the standard.

Low-Achieving Students

These ideas have the greatest potential for helping low-achieving students. In an environment with little feedback, low-achieving students see achievement as a futile guessing game and often stop trying. When students receive formative assessment feedback, they are more likely to try to meet achievement challenges. The connection between formative assessment and student motivation should be at the forefront of any teacher's thinking and planning of classroom assessment.

4 The Relationship Between Formative and Summative Assessment (Section 7-10)
Two types of assessment, one integrated system.

Summative assessment is a process of "summing up" achievement in some way or conducting a status check on accomplishments at a given point in time. These assessments include end-of-unit or end-of-chapter assessments, end-of-course tests administered by the district, and interim benchmark assessments administered by the state. Formative assessment is designed to provide information to students so they can act to close the gap between where they are and where they need to be relative to the standard.

Formative During Instruction

Purpose: monitor learning, provide feedback, adjust instruction. Audience: teacher and student. Timing: ongoing throughout the lesson or unit. Stakes: low or no stakes. Examples: observation, questioning, exit tickets, peer review, practice tests.

Summative After Instruction

Purpose: evaluate achievement, assign grades, certify mastery. Audience: teacher, student, parents, administration. Timing: end of unit, course, or grading period. Stakes: high (contributes to final grade). Examples: unit tests, final exams, state assessments, portfolios.

Drag each example to the correct assessment type:

Examples
Teacher asks thumbs up/down to check understanding mid-lesson.
Students take a chapter test covering two weeks of content.
Students complete an exit ticket identifying one concept they found confusing.
A district administers an end-of-course exam that counts toward the final grade.
Assessment Type
Formative
Formative
Summative
Summative
✍ Check Your Understanding Can a single assessment serve both formative and summative purposes? Give an example where this might work and explain what makes it function as both. Yes. A unit test (summative) can also serve a formative purpose if the teacher analyzes which questions students missed, identifies patterns in the errors, and uses that information to reteach or adjust instruction for the next unit. The test is summative because it evaluates achievement at the end of a unit. It becomes formative when the teacher uses the results to make instructional decisions going forward. The key is what you do with the data. If you file the scores and move on, it is only summative. If you study the results and change something, you have added a formative function.
Compare

Formative vs Summative, Side by Side

Same word in both names. Different purpose, different timing, different consequences for students. Mixing them up is the most common error in assessment design.

Formative Assessment
Assessment FOR learning

Purpose: Improve learning while it is still happening. Adjust instruction.

Timing: During a lesson, unit, or course.

Stakes: Low or none. Used for feedback, not grades.

Audience: Teacher and student. Both use the data to adjust.

Examples: Exit tickets, observation, peer review, quick polls, draft feedback.

Summative Assessment
Assessment OF learning

Purpose: Measure what students learned by the end. Report a grade.

Timing: End of unit, course, or program.

Stakes: High. Counts toward the final grade.

Audience: Teacher, student, parents, administrators.

Examples: Final exams, term papers, end-of-unit projects, state tests.

Research

Feedback and Formative Strategies

Feedback ranks among the highest-impact influences on learning Hattie has measured. Reciprocal teaching, mastery learning, and self-questioning all clear the threshold by a comfortable margin.

Hattie effect sizes (approximate):

Reciprocal teaching: 0.74
Feedback: 0.70
Mastery learning: 0.58
Self-questioning: 0.55
Formative evaluation: 0.40

Hattie's hinge point of 0.40 marks the threshold for above-average impact on learning. Source: Hattie, Visible Learning (2009 and updates).

Chapter 7 • Sections 3, 4 & 5

Understanding Assessment Tools, Constructing Assessments, and Grading

Sections 3 through 5 cover the practical side of assessment. Section 3 introduces the major categories of assessment tools: teacher-made assessments, large-scale achievement tests, and student-led conferences. Section 4 is the construction workshop: guidelines for building tests, writing objective items, writing essay items, and assessing performance and products. Section 5 covers grading: the principles, the pitfalls, and how to communicate your system to students and parents. This is the section you will draw on most for your Summative Assessment assignment.

1 Teacher-Made Assessments (Section 7-11)
Why most classroom assessments involve teacher-made tests.

Most classroom assessments involve teacher-made tests, and there are good reasons for this. The teacher has monitored the learning experiences in the class and thus has a much better idea of what needs to be assessed. The teacher is familiar with the students as well as the instruction, which may affect the content and method of assessment.

Three important reminders: First, plan the test as you plan instruction, not after. Knowing what you will assess shapes how you teach. Second, use a variety of methods: let students demonstrate their skills and understanding in a variety of valid ways, including reports, oral and written projects, poetry, videos, music, plays, stories, models, and performances. Third, weave assessment throughout the unit, not just at the end. Students assessed throughout will have a much more complete picture of their own understanding.

Is the Assessment Appropriate?

Richard Stiggins offers four questions to gauge assessment quality: (1) What is the purpose of the assessment? Who will use the results? How? (2) What are the learning targets? Are they clear? Appropriate? (3) Assess how? Built of quality or relevant ingredients? (4) Communicate how? Reported to whom? In what form?

2 Large-Scale Achievement Tests (Section 7-12)
Standardized tests: strengths, limitations, and the reform debate.

Schools use large-scale achievement tests to assess student performance according to district-wide and statewide curricula, monitor student achievement, and assess student aptitude prior to high school graduation. The primary consideration is congruence: there must be alignment between what is taught and what the test measures.

Questions are written by specialists, reviewed for bias, and field-tested. They are accompanied by extensive technical data on norming, validity, and reliability. Development costs are recovered over large print runs, which keeps cost-per-student low. They provide separate printouts for class records, individual student reports, reports to parents, and many other uses. Scores can be compared to the norming group, and they provide norm-referenced data on specific skills and objectives.

Many of the higher-level thinking processes are difficult to assess using a multiple-choice test. Reform efforts have turned away from using numerical scores and averages as indicators of success and moved toward a focus on each student's competency in the skills that will be most useful in life. High-stakes tests (those with consequences for performance) continue to evolve, and the federal government has weighed in with its own set of high-stakes testing requirements. Under that pressure, teachers often focus on test preparation, coach students in the test format, and shape their own tests to mirror it, sometimes at the expense of deeper instruction.

3 Student-Led Conferences (Section 7-13)
Students take ownership of their learning story.

In a student-led conference, the student takes major responsibility for discussing and evaluating their current level of achievement relative to the standard. This discussion includes the quality of the student's work, the ways in which the student performed well, and what might be done to enhance their future performance. The student conveys the assessment in written form to the teacher and parents.

Why Student-Led Conferences Work

Two benefits stand out. First, students learn to take ownership of their learning and are held accountable for it. Second, communication among the three stakeholders in student success (students, parents, and the teacher) is enhanced. Many teachers have known of the power of this technique for some time and have used it successfully for years.

4 General Guidelines for Test Construction (Section 7-14)
Six steps to writing a test that measures what you taught.

Your work in writing a specific test will be greatly facilitated if you follow these six steps:

1. Determine Topics and Proportions: Decide which topics to include and create a proportionate number of items for each topic. If you plan to teach four main ideas and devote similar time to each, then 25% of the questions on your test should be related to each topic.
2. Match the Format: Test the way you taught. If you teach for conceptual understanding, do not test for factual recall. Maintaining this consistency is more difficult than it appears.
3. Balance Time and Questions: Determine a balance between the available testing time and the number of questions to include. The average high school student can complete two true-false items, one multiple-choice item, or one short-answer item per minute of testing time.
4. Use a Planning Matrix: Use a planning matrix (see Chapter 5, Tables 5.1 and 5.2) to help organize your planning. One method is to list main ideas down the left side and headings indicating the anticipated cognitive level across the top.
5. Plan for Early Finishers: Have an activity for students who finish early. Do not wait until test time for this. It is always a good idea to plan ahead.
6. Allow Enough Time: You will get a more valid picture of achievement if you allow plenty of time, even if it means dividing the assessment over several days.
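The arithmetic behind steps 1 and 3 can be sketched in a few lines. This is a minimal illustration with hypothetical item counts; the per-minute rates come from the guideline above (an average high school student completes about two true-false items, one multiple-choice item, or one short-answer item per minute).

```python
# Rough test-planning arithmetic for steps 1 and 3 above.
# All item counts are hypothetical examples, not prescriptions.

topics = ["Topic A", "Topic B", "Topic C", "Topic D"]  # four main ideas, equal time
total_items = 40

# Step 1: proportionate coverage -- equal instructional time means equal item counts.
items_per_topic = total_items // len(topics)
print(items_per_topic)  # 10 items (25%) per topic

# Step 3: estimated testing time for a mixed-format test,
# using minutes-per-item rates from the textbook's guideline.
minutes_per_item = {"true_false": 0.5, "multiple_choice": 1.0, "short_answer": 1.0}
planned = {"true_false": 16, "multiple_choice": 18, "short_answer": 6}

estimated_minutes = sum(planned[fmt] * minutes_per_item[fmt] for fmt in planned)
print(estimated_minutes)  # 32.0 minutes of testing time
```

If the estimate exceeds the class period, step 6 applies: trim items or split the assessment across days rather than rushing students.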
5 Objective Test Items (Section 7-15)
True-false, matching, short-answer, and multiple-choice.

Objective items are so called because they have a single best or correct answer. There is no (or very little) dispute about the correct response. Objective items come in two types: the selection type, in which a response is chosen from among alternatives given, and the supply type, in which the student supplies a brief response.

True-false, matching, and short-answer items are the simplest types of assessment items to write, but they can reward test-wiseness rather than knowledge. Keep your true-false items short and unambiguous; students who guess still have a 50% chance of a correct response. Three variations: (1) standard true-false, (2) fact vs. opinion format, (3) correction format, in which students fix false statements.

Matching exercises tend to provide clues to those who are test-wise. Keep your items to approximately eight items and include more options than items to be matched. Ensure there is only one correct answer per match. Use homogeneous content: all items should come from the same category.

Short-answer items require the student to provide a word, phrase, or number. Students are not simply asked to identify a correct choice but to retrieve it from memory, a different and arguably more demanding cognitive process. Science and math teachers are particularly fond of this format because it seems to directly measure comprehension and problem-solving skills.

The multiple-choice item is generally considered the most useful objective test item. It can measure both knowledge and higher-level learning outcomes. Multiple-choice items consist of two parts: a question or problem (the stem) and a list of possible solutions (the alternatives). The correct alternative is the answer. The remaining alternatives are called distractors. Creating effective distractors is one of the most difficult parts of writing multiple-choice items: they should be plausible to the non-knowing student while not confusing the knowing student.
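The anatomy described above can be made concrete with a small sketch. The item content here is a hypothetical example (adapted from a convergent question used later in this module), shown only to label the parts: stem, alternatives, answer, and distractors.

```python
# A minimal sketch of the anatomy of a multiple-choice item,
# using the chapter's vocabulary. The item content is hypothetical.

item = {
    "stem": "Under what conditions will water boil at less than 100 degrees C?",
    "alternatives": [
        "At pressures below standard atmospheric pressure",  # the answer
        "When salt is dissolved in the water",               # distractor
        "When the water is heated very slowly",              # distractor
        "At high humidity",                                  # distractor
    ],
    "answer": "At pressures below standard atmospheric pressure",
}

# Every alternative that is not the answer is a distractor.
distractors = [a for a in item["alternatives"] if a != item["answer"]]
print(len(distractors))  # 3
```

The hard part is not the structure but the distractors themselves: each must be plausible to a student who has not mastered the content without misleading a student who has.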

Drag each assessment scenario to the best item type:

Scenarios
You want to measure whether students can apply a concept to a new situation with several plausible interpretations.
You need to quickly check whether students can associate explorers with their achievements.
You want students to recall and write a specific formula from memory.
You need a fast check on whether students know basic factual statements.
Item Types
True-False
Matching
Short-Answer
Multiple-Choice
6 Essay Items (Section 7-16)
Restricted response vs. extended response.

The essay item is an excellent way to assess students' higher-thinking processes: comprehension, analysis, and evaluation, as well as skills in organizing and presenting ideas. There are two types of essay items:

Restricted Response Focused and bounded

The question limits the scope of the response. Students know what to address and approximately how much to write. Example: "Explain two reasons leading to the conflict in which Magellan was killed." The teacher can use specific criteria to evaluate the response.

Extended Response Open and complex

The question allows students to select, organize, and evaluate ideas. Example: "Compare the American Revolution to the US experience in Iraq. Note specifically the concluding phases of both wars." These are harder to score consistently and require a well-designed rubric.

Scoring Essay Items

Use holistic scoring for extended responses: assign a point value on a numeric scale rather than descriptive words like "excellent" or "needs improvement." Use analytic scoring for restricted responses: you can directly compare responses to the scoring rubric and assign points. A general rule: quantitative scores are more useful than descriptive words. Develop a coding system that conceals the students' names; this reduces the tendency to evaluate a response based on the quality of the student's earlier work rather than on the response itself. Read all responses to one question before going on to the next, to reduce the halo effect.

7 Assessing Performance and Products (Section 7-17)
Rating scales, checklists, anecdotal records, portfolios, and rubrics.

Many areas of student achievement are more effectively assessed with a performance-based assessment than with a test question. Language arts teaches speech and listening, both of which are assessed most directly by observing student performances. The same is true of science lab procedures, social studies community projects, and reports of observations in health or earth science class. Performance-based items do not have a single best response. Instead, students are required to organize and present the material in their own way within the stated bounds of the task.

Rating Scales: Provide a list of characteristics to be observed and a scale showing the degree to which they are present. Useful for presentations, speeches, and demonstrations. Example: a 5-point scale from "clearly understood" to "inaccurately describes" for a speech.
Checklists: A "yes-no" rating scale. A process can be divided into steps and each one can be checked for its presence. Useful for lab procedures, homework routines, and skill demonstrations.
Anecdotal Records: Recorded observations of student behaviors made during routine class sessions, in the halls, or on the playground. Four keys: (1) do not record too much, (2) be consistent, (3) record positive as well as negative indicators, (4) do not draw inferences from a single incident.
Portfolios: Collections of student work assembled to monitor progress and share with parents and administrators. For assessment use, a portfolio must be a demonstration of student effort and progress toward achieving particular learning objectives. It is a valuable assessment tool when its contents are carefully and purposefully assembled.
Rubrics: Contain two primary components: criteria (the categories of performance being evaluated) and standards (descriptions of the levels of achievement and what is involved in reaching each level). Rubrics are both assessment tools for teachers and learning tools for students.
✍ Check Your Understanding You are a science teacher assessing whether students can follow the steps of the scientific method during a lab. Would you use a rating scale, a checklist, or a rubric? Explain your choice. A checklist is the best fit. The scientific method has specific sequential steps (question, hypothesis, procedure, data collection, analysis, conclusion), and a checklist lets you mark whether each step was completed. A rating scale would be useful if you wanted to evaluate the quality of each step (how well did the student form the hypothesis?), and a rubric would be appropriate for a more complex performance assessment where you want to describe levels of quality across multiple criteria. For a straightforward "did the student follow the steps" assessment, the checklist is the most efficient and appropriate tool.
8 Grading to Improve Student Learning (Sections 7-18 & 7-19)
Principles of grading and communicating your system.

One of the first principles of grading is that you cannot have too much data. No matter how many test scores, homework exercises, and class activities you have assessed, you will still feel that you need more information to make summative evaluations. By giving your students many opportunities to show achievement, you provide yourself with more data for a fair professional judgment, and you also provide each student with every possible chance to succeed.

Avoiding Grading Errors

The textbook flags five common errors: (1) Using pretest scores in determining grades: such scores should indicate only where to begin instruction. (2) Not adequately informing students of what to expect on a test, which leaves students guessing what is important. (3) Assigning a zero for missing or incomplete work: a zero has a profound effect on an average, and one alternative is to use the median (middle-ranking) score as an indicator rather than the average. (4) Using grades for reward or punishment: achievement of learning objectives should be the only consideration in assigning grades. (5) Assigning grades contingent on improvement: a student who already performs well has little room to improve, so mastery of a specified and well-articulated standard, not improvement alone, should be the main factor.

Communicating Your Intentions (Section 7-19)

Whatever your system, you must explain it clearly, with appropriate handouts and examples, to your students during the first days of class. Students can and should be taught to keep track of their own grades. A letter or note to parents prior to grading that includes a description of the grading system for the class helps them track their child's progress and prevents unnecessary confusion and disagreement at report card time.

✍ Check Your Understanding A student misses an assignment and receives a zero. Her average drops from an 82 (B) to a 64 (D). Another student turns in a weak assignment and earns a 40. His average drops from an 82 to a 76 (C). Which student's grade more accurately reflects their learning? What does the textbook suggest as an alternative to assigning zeros? Neither student's grade accurately reflects their learning after the zero or low score is factored in. The zero distorts the average far more than the 40 does because a zero is not just "very low performance": it means "no data." The textbook suggests using the median (middle-ranking) score as an indicator rather than the average, which reduces the distortive effect of a single missing assignment. Another approach: require the student to complete the assignment late for reduced credit rather than accepting a zero. The goal of grading is to communicate what the student has learned. A zero communicates nothing about learning. It communicates that the student did not turn something in.
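The distortion described above is easy to verify. The scores below are hypothetical (the exact drops depend on how many assignments precede the zero), but the pattern matches the textbook's point: a zero crushes the mean while barely moving the median.

```python
# Mean vs. median when a zero enters the gradebook.
# The scores are hypothetical examples.
from statistics import mean, median

earned = [82, 85, 78, 83]      # four assignments averaging 82
with_zero = earned + [0]       # missing assignment recorded as a zero
with_low = earned + [40]       # weak assignment earning a 40

print(round(mean(earned), 1))     # 82.0
print(round(mean(with_zero), 1))  # 65.6 -- the zero costs about 16 points
print(round(mean(with_low), 1))   # 73.6 -- the 40 costs about 8 points
print(median(with_zero))          # 82  -- the median is barely affected
```

The median treats the zero as just one low-ranked data point instead of letting "no data" dominate the summary, which is exactly why the textbook offers it as an alternative.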
Chapter 8 • All Sections

The Process of Classroom Questioning

Chapter 8 is about the teaching method you will use more than any other: asking questions. Teachers ask between 300 and 400 questions per day. Most of those questions are low-level recall. This chapter teaches you how to ask better questions, how to use different questioning strategies for different cognitive purposes, and how to handle the responses you get. It covers four questioning strategies, wait time, prompting techniques, handling incorrect answers, promoting multiple responses, and developing students' own questioning skills. This is the chapter your Questioning Strategy Blueprint assignment is built on.

1 The Importance of Questioning (Sections 8-1 through 8-4)
What the research says and why it matters for your classroom.

Questioning plays a critical role in teaching. Teachers must be knowledgeable in the process of framing questions so they can guide student thought processes in the most skillful and meaningful manner. This means teachers must design questions that will help students attain the specific goals that are the objectives of a particular lesson.

Research Findings on Questioning

Key findings from the research: questioning tends to be a universal teaching strategy. A broad range of questioning options is open to you. Being systematic in the use and development of questioning tends to improve student learning. By classifying questions according to a particular system, you may determine the cognitive or affective level at which your class is working and make adjustments as needed. Questions should be developed logically and sequentially. Students should be encouraged to ask questions. A written plan with key questions provides lesson structure and direction.

The textbook also emphasizes using statements alongside questions. As an alternative to asking questions, making declarative statements can also stimulate student responses, curiosity, and thinking. There is some evidence that students' verbal responses are of higher quality when they respond to statements rather than to questions alone. For example, a teacher might say, "It really doesn't matter what tense you use when writing." That statement predictably will evoke a wide range of student responses.

2 Four Questioning Strategies (Sections 8-5 through 8-8)
Convergent, divergent, evaluative, and reflective.

The textbook describes four basic questioning strategies. Each one targets a different type of thinking. If you assign particular importance to the different types of questions you ask, then you will need a method for verifying that you are indeed using the desired questioning patterns.

Convergent One right answer. Recall and comprehension.

Convergent questions focus on a narrow objective. They encourage student responses that converge, or focus, on a central theme. These questions typically elicit short responses from students and focus on the lower levels of thinking: knowledge or comprehension. They are ideal for checking factual knowledge, building vocabulary, and quick warm-up exercises. Examples: "In what works did Robert Browning use the dramatic monologue as a form for his poems?" "Under what conditions will water boil at less than 100 degrees C?"

Divergent Many possible answers. Analysis and creativity.

Rather than seeking a single focus, a divergent strategy is designed to evoke a wide range of student responses. Divergent questions also elicit longer student responses. They are ideal for building the confidence of students with learning difficulties because divergent questions do not always have right-or-wrong answers. Examples: "What type of social and cultural development might have taken place if Christopher Columbus had landed on Manhattan Island on October 12, 1492?" "What do you think are effective methods for creating a sustainable environment?"

Evaluative Divergent plus judgment. Criteria-based thinking.

The evaluative strategy is based on the divergent strategy, but with one added component: evaluation. An evaluative question has a built-in set of evaluative criteria. For example, an evaluative question might ask why something is good or bad, why an author's approach is or is not effective. The teacher's role is to help students develop a logical basis for establishing evaluative criteria. Students should develop a logical, consistent set of evaluative criteria. Examples: "Is the world a better or worse place because of computers and the internet?" "What evidence is there that the federal system of interstate highways harmed our city environments?"

Reflective Higher-order thinking. Implications, values, meanings.

The goal of the reflective question is to require your students to develop higher-order thinking: to elicit motives, make inferences, speculate on causes, consider impact, and contemplate outcomes. Rather than asking a "why" or "what" question, you encourage the student to think through implications and search for unintended consequences. Five examples: seeking motives, expanding a vision, listing implications, searching for unintended consequences, identifying issues.

Drag each question to the correct questioning strategy:

Questions
"What rights are ensured by the First Amendment?"
"What would happen in a school if it had no computers or internet connection?"
"Is global warming a critical issue? What evidence supports your position?"
"What assumptions did the US government make when it constructed the interstate highway system, and what consequences were unintended?"
Strategies
Convergent
Divergent
Evaluative
Reflective
✍ Check Your Understanding A teacher asks: "When was the Battle of Gettysburg?" Then follows with: "Why do historians consider Gettysburg a turning point?" Which strategy does each question represent? Why is the second question more valuable for developing thinking, and how could the teacher push it even further? The first question is convergent: one right answer (July 1-3, 1863), recall level. The second question is evaluative: it asks students to make a judgment ("turning point") and support it with evidence ("why"). To push further, the teacher could ask a reflective question: "What might the country look like today if the outcome at Gettysburg had been different? What assumptions are you making to answer that?" That moves students into speculation, inference, and analysis of unintended consequences.
3 Wait Time (Section 8-10)
The pause that changes everything.

The basic rule for asking questions is to deliver them in three steps: ask the question, pause, and then call on a student. The psychological rationale: when you ask a question and follow it with a short pause, all students attend to the communication, because the pause signals that any student in the class may be selected to respond. The attention level of the class remains high.

Wait Time 1 After the question, before calling on a student

The time between when you ask a question and when you call on a student. It gives students a chance to think about their response. This is especially important when you ask higher-level questions. Students with special needs will have some time to ponder the question so they may respond appropriately.

Wait Time 2 After the student responds, before you react

The pause after the student you have called on has responded. It is equally important: it gives that student additional time to extend their thinking, and it allows other students to respond as well. If you wait before reacting to the initial response, students will often continue to respond without prompting.

Benefits of Wait Time

For the teacher: less teacher talking, less repetition of questions, fewer questions per period, more questions that draw multiple responses, fewer lower-level questions, more application-level questions, and less disciplinary action.

For the students: longer responses, more student discourse and student questions, fewer nonresponding students, more involvement in lessons, increased complexity of answers and improved reasoning, more responses from slower students, more peer interaction and fewer peer interruptions, less confusion, more confidence, and higher achievement.

✍ Check Your Understanding A teacher asks a divergent question: "What problems might you anticipate if algebra were made a required course for all students in eighth grade?" She waits one second, then calls on the first student who raises a hand. What did she lose by not using adequate wait time, and which type of wait time did she skip? She skipped wait time 1 (the pause between asking the question and calling on a student). By waiting only one second, she limited the response pool to the fastest thinkers. Students who need more processing time (including ELL students and those with learning differences) never had a chance to form a response. The research suggests waiting 3-5 seconds. She also lost the opportunity for longer, more complex answers. Quick calls produce quick answers: short, low-level, recall-oriented. Wait time produces longer responses, more student-to-student interaction, and higher-level thinking. For a divergent question, where the goal is a wide range of responses, adequate wait time is essential.
4 Prompting & Handling Incorrect Responses (Sections 8-11 & 8-12)
What to do when a student gives a wrong answer or no answer at all.

Once you have asked a question and called on a student to respond, the student may not answer the question the way you want. When that happens, prompt the student rather than moving on: clarify the question, elicit a fuller response, or elicit additional responses so you can verify whether the student comprehends the material. Always provide positive reinforcement so the student is encouraged to complete an incomplete response or revise an incorrect one.

A Prompting Sequence

The textbook provides a model. The teacher prompts the student in a nonthreatening or neutral verbal tone. The episode continues until the student provides all the necessary information for an appropriate closure. Example: "Class, now let us examine the data that we collected regarding our experiment on absorption and radiation. What differences did you observe between the covered and uncovered pans? [pause] Lisa?" / "The water in the covered pan had a temperature of 96 degrees." / "At what point in the experiment was that temperature measured?" / "After the pan had been covered for 10 minutes." / "What was the temperature of the water when you took the first reading?" The teacher keeps going until the student has provided a full, accurate answer.

When a student gives a totally incorrect response, the textbook recommends three guidelines. First, avoid negative comments: responses such as "no" or "that is incorrect" can act as negative reinforcers and may reduce the student's desire to participate. Second, if you respond negatively to an incorrect student response, there is a high probability of a ripple effect (Kounin, 1970): other students' willingness to participate will be negatively affected. Third, rather than responding with "No," move to a neutral prompting technique that guides the student toward a better answer.

✍ Check Your Understanding You ask: "What is the relationship between the hypotenuse and sides of a right triangle?" A student answers: "I think it's something about the sides being equal." Write a three-move prompting sequence that guides the student toward the correct answer without telling them they are wrong. One possible sequence: Move 1: "You're thinking about the relationship between the sides. Good. Now think about this: do all three sides have to be the same length for a right triangle? Can you picture one?" (Redirect toward the specific concept.) Move 2: "Okay, so if the sides aren't equal, what makes the hypotenuse special compared to the other two sides? Think about its length." (Narrow the focus.) Move 3: "Now think about this: if you know the two shorter sides, is there a formula that lets you find the hypotenuse? Can you visualize the equation?" (Guide toward Pythagorean theorem.) Each move builds on the student's response, stays positive, and moves closer to the target without giving the answer away.
5 Multiple Responses & Encouraging Nonvolunteers (Sections 8-13 through 8-15)
Getting more students talking and thinking.

Teachers typically conduct recitation periods by sequential questioning: they ask one student to respond, then another student to respond, and so on. The textbook recommends using the multiple-response technique instead. You ask a question, pause, and then call on three or four students to respond. You caution students that you will not repeat any student responses, so they must listen carefully.

Benefits of Multiple Responses

The multiple-response strategy allows for longer student responses, greater depth in student statements, and greater challenges for all students. It is a logical precursor to student-conducted discussions. Because many students do not yet demonstrate the listening and response skills that discussions require, multiple-response questions help you condition students to accept more responsibility for listening to one another and to modify their responses based on previous ones.

For nonvolunteers, the textbook offers several strategies: maintain a positive attitude toward nonvolunteering students, ask nonvolunteers questions that they will be likely to answer successfully, give generous positive feedback to encourage future responding, attempt to determine why each nonvolunteer remains shy, occasionally make a game out of questioning (place each student's name on a card and draw cards at random), and prompt promptly. There is nothing wrong with giving each nonvolunteering student a card with a question on it the day before the intended oral recitation period to help them prepare.
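One of the strategies above, placing each student's name on a card and drawing cards at random, can be sketched as a short routine. This is a minimal illustrative sketch, not anything from the textbook; the roster names and the draw_students helper are hypothetical:

```python
import random

def draw_students(roster, k=3, seed=None):
    """Draw k distinct names at random, like pulling name cards from a deck.

    Passing a seed makes the draw repeatable; omit it for a fresh
    shuffle each recitation period.
    """
    rng = random.Random(seed)
    return rng.sample(roster, k)  # k distinct names, no repeats

# Hypothetical class roster
roster = ["Ava", "Ben", "Carla", "Dev", "Elena", "Farid"]
print(draw_students(roster, k=3, seed=42))
```

Drawing several names at once also pairs naturally with the multiple-response technique described earlier: ask the question, pause, then call on each drawn student in turn.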

Think-Pair-Share

Frank Lyman (1981) described this method as having three steps: (1) Think: you ask a question to the whole class and allow them a short time to think about the response. (2) Pair: designate partners (desk mates, buddies) to pair up and discuss the best answers or even the most novel possibilities. In some cases you could even have them write their team responses. (3) Share: you now call on the pairs to share their thinking with the class. Responses can be recorded on the chalkboard. This method is another means by which ELL students or those with special needs can participate meaningfully in the recitation session.

6 Developing Student Questioning Skills & Avoiding Idiosyncrasies (Sections 8-16 & 8-17)
Teaching students to ask their own questions. Catching your own bad habits.

After conducting extensive research, Cynthia T. Richetti and Benjamin B. Tregoe (2001) identified five reasons why students should develop their own questions: it increases motivation to learn, improves comprehension and retention, encourages creativity and innovation, teaches how to think and learn, and provides a basis for problem solving and decision making.

Techniques for Developing Student Questions

One technique: play a game of Twenty Questions. In Twenty Questions, participants ask questions to identify something. The teacher thinks of some problem, concept, place, or historical figure, and students attempt to discover it through questioning ("Does it have a definite answer?" "Is it a place?"). The teacher can respond with only yes or no answers. Initially you will conduct the session, but as students master the technique, you can let them conduct the entire session. Another technique: have students question the author of the texts they have read for class ("What is the author trying to say?" "What did the author say to make you think that?" "What do you think the author means?").

The textbook warns about teacher idiosyncrasies that interfere with good questioning: repeating the question, repeating students' responses, answering the question yourself, not allowing a student to complete a long response, and not attending to the responding student (looking away while they talk). These behaviors are easy to develop and hard to break. Have a colleague observe your questioning or record yourself to catch these patterns.

✍ Check Your Understanding You notice that in every class discussion, you repeat each student's answer before calling on the next student. What problem does this create, and what should you do instead? When you repeat every answer, students learn to listen to you instead of each other. They know you will say it again, so they stop paying attention to their classmates. This kills student-to-student interaction and puts you at the center of every exchange. Instead, do not repeat the response. If other students did not hear it, ask the responding student to repeat it, or ask another student to paraphrase what was said. This trains students to listen to one another and shifts the cognitive work away from you and toward the class.
Compare

Four Question Types, Side by Side

Each question type does a different job. Knowing which job you need is the first step in asking the right question.

Convergent
Section 8-5

Number of correct answers: One.

Cognitive level: Recall, comprehension, basic application.

Best for: Checking understanding before moving on.

Example: "What is the capital of France?"

Divergent
Section 8-6

Number of correct answers: Many.

Cognitive level: Analysis, synthesis, creative thinking.

Best for: Generating ideas, exploring possibilities.

Example: "How many ways could we solve this?"

Evaluative
Section 8-7

Number of correct answers: Defensible answers, with criteria.

Cognitive level: Evaluation, judgment.

Best for: Making and defending choices.

Example: "Which solution best fits these constraints, and why?"

Reflective
Section 8-8

Number of correct answers: The student's own.

Cognitive level: Metacognition.

Best for: Self-awareness, building learning habits.

Example: "How did you arrive at that answer?"

Research

Questioning Strategies

Not all questioning is equal. In Hattie's syntheses, discussion and peer-driven questioning produce stronger effects than the typical teacher-led questioning that fills most classrooms. Higher-order questions on their own fall below the 0.40 threshold unless they are part of a deliberate questioning strategy.

Hattie effect sizes for questioning strategies:

Discussion: 0.82
Reciprocal peer questioning: 0.68
Student self-questioning: 0.55
Classroom questioning: 0.48
Higher-order questions (alone): 0.36

Strategies above 0.40 clear Hattie's effect-size threshold (the "hinge point"). Source: Hattie, Visible Learning (2009 and updates). Values approximate.

Apply It: Branching Scenarios

Reading about assessment and questioning is one level of understanding. Making assessment and questioning decisions when students are in front of you is another. These four scenarios put you in classrooms where something has gone wrong. Your choices determine whether the teacher recovers or digs a deeper hole.

Each scenario covers content from Chapters 7 and 8: formative feedback, test construction, validity, questioning strategies, wait time, and prompting. Work through all four. When you make a wrong choice, read the feedback. The explanation is where the learning happens.

After the Scenarios

Look at the paths you took. For each wrong turn, go back to the relevant chapter tab and find the section that covers the concept. Write one sentence explaining what you misunderstood and what you understand now. This is the same formative feedback process described in Section 7-7: identify the gap, then close it.

Video Library

These instructor videos walk through each chapter's key concepts. Copies of the PowerPoint slides are in the Modules section on Canvas. As you watch, write one sentence connecting something in the video to something from your reading. A connection, not a summary. You will use these in your VoiceThread discussion.

Chapter Walkthroughs


Chapter 7: Classroom Assessment. Instructor walkthrough.
Chapter 8: The Process of Classroom Questioning. Instructor walkthrough.

Additional Resources

Chapter 9: Small-Group Discussions and Cooperative Learning. Instructor walkthrough of group structures and discussion design.
Module 4

Flash Card Review

Click the card to flip. Mark each one "Got it" or "Review again" to see what to study next.

 

Memory Match

Match each term to its definition. Click two cards to flip them. Matching pairs stay open.


Memory Match: Assessment Concepts

Match each assessment idea to its meaning.


Memory Match: Questioning Strategies

Match each question type or wait time to its purpose.


Bonus Resources

⭐ Bonus

Go Deeper

Optional. Pick one if you want to push past the textbook on assessment design or questioning practice.

📚
Embedded Formative Assessment (Dylan Wiliam)

The book that turned formative assessment from a buzzword into classroom practice. Five strategies, each with research and examples. Pairs with Chapter 7, Section 2.

The Question Formulation Technique

Right Question Institute's protocol for teaching students to generate their own questions. Flips Chapter 8's framework: instead of you asking better questions, students learn to ask them. Free protocol at rightquestion.org.

🔎
Released NAEP Items

The National Assessment of Educational Progress publishes its actual test items online. See how professional item writers handle stems, distractors, and scoring rubrics. A useful reference for the Summative Assessment assignment.