Instructors continuously evaluate a learner's performance in order to provide guidance, suggestions for improvement, and positive reinforcement.
Traditional assessment involves the kind of written testing and grading that is most familiar to instructors and learners, such as multiple-choice tests and brief quizzes.
Authentic assessment requires the learner to demonstrate not just the rote and understanding levels of learning, but also the application and correlation levels. The learner must exhibit in-depth knowledge by generating a solution instead of merely selecting a response. Learners know the standards of success in advance.
A rubric is a guide used to score performance assessments in a reliable, fair, and valid manner. It is generally composed of dimensions for judging learner performance, a scale for rating performances on each dimension, and standards of excellence for specified performance levels.
Diagnostic assessments assess learner knowledge or skills prior to a course of instruction.
Formative assessments provide a wrap-up of the lesson and set the stage for the next lesson. They are not graded.
Summative assessments measure how well learning has progressed to that point. They are used periodically throughout the training.
An effective assessment provides critical information to both the instructor and the learner. A good assessment provides practical and specific feedback to learners, with direction and guidance indicating how they may raise their level of performance.
A well-designed assessment can help the instructor see where more emphasis is needed. If several learners falter when they reach the same step in a weight-and-balance problem, the instructor should recognize the need for special emphasis.
An effective assessment displays several characteristics:
- Objective: To be effective, an assessment must be honest and based on the facts of the performance as they were, not as they could have been. A conflict of personalities can alter an opinion; sympathy or over-identification with a learner can also affect objectivity.
- Flexible: The instructor should evaluate the entire performance of a learner in the context in which it is accomplished. An assessment should be designed and executed so that the instructor can allow for variables.
- Acceptable: Learners seldom welcome negative feedback. Assessments presented fairly, with authority, conviction, and sincerity, and from a position of recognizable competence, are more readily accepted.
- Comprehensive: While an assessment includes strengths as well as weaknesses, the degree of coverage of each should fit the situation.
- Constructive: When identifying a mistake or weakness, the instructor needs to give positive guidance for correction.
- Organized: Almost any pattern is acceptable, as long as it is logical and makes sense to the learner.
- Thoughtful: An effective assessment reflects the instructor's thoughtfulness toward the learner's need for self-esteem, recognition, and approval.
- Specific: Learners cannot act on recommendations unless they know specifically what the recommendations are.
A traditional assessment generally refers to written testing — multiple choice, matching, true/false, fill in the blank. There is a single, correct response for each item and an established time limit to complete the assessment.
The assessment, or test, assumes that all learners should learn the same thing, and relies on rote memorization of facts. Responses are often machine scored and offer little opportunity for a demonstration of the thought processes characteristic of critical thinking skills.
While the measure of performance is limited, such tests are useful in assessing the learner's grasp of information, concepts, terms, processes, and rules. This becomes the foundation needed for higher levels of learning.
Characteristics of a Good Written Assessment
- Reliability: If identical measurements are obtained every time a certain instrument is applied to a certain dimension, the instrument is considered reliable.
- Validity: A test should measure what it is supposed to measure. This is the most important consideration in test evaluation.
- Usability: A usable written test should have clear directions, be legible, and have appropriate graphics, charts, and illustrations. It should be easy to grade.
- Objectivity: Selection-type test items, such as true/false or multiple choice, can be graded objectively and uniformly, unlike subjective essay-based tests.
- Comprehensiveness: A test should measure overall objectives.
- Discrimination: A test must measure small differences in achievement in relation to the objectives of the course.
Authentic assessment asks the learner to perform real-world tasks and demonstrate a meaningful application of skills and competencies. Learners must rely on critical-thinking skills. Instructors rely upon open-ended questions and established performance criteria.
Authentic assessment focuses on the learning process, enhances the development of real-world skills, encourages higher order thinking skills, and teaches learners to assess their own work and performance.
Collaborative critique is a form of learner-centered grading. The instructor begins by using a four-step series of open-ended questions to guide the learner through a complete self-assessment. Through this discussion, the instructor and the learner jointly determine the learner's progress.
- Replay: The instructor asks the learner to verbally replay the flight or procedure.
- Reconstruct: The instructor asks the learner to identify the key things they would have, could have, or should have done differently during the flight or procedure.
- Reflect: The instructor asks value-based questions about the lesson to help the learner invest their perceptions and experiences with meaning.
- Redirect: The instructor helps the learner relate lessons learned in this session to other experiences and consider how they might help in future sessions.
Questions that may help with the "Reflect" step of a collaborative critique include:
- What was the most important thing you learned today?
- What part of the session was easiest for you? What part was hardest?
- Did anything make you uncomfortable? If so, when did it occur?
- How would you assess your performance and your decisions?
- How did your performance compare to the standards in the ACS?
Questions that may help with the "Redirect" step of a collaborative critique include:
- How does this experience relate to previous lessons?
- What might be done to mitigate a similar risk in a future situation?
- Which aspects of this experience might apply to future situations, and how?
- What personal minimums should be established, and what additional proficiency flying and/or training might be useful?
The collaborative assessment process in learner-centered grading uses two broad rubrics:
- A rubric that assesses the learner's level of proficiency on skill-focused maneuvers or procedures.
- A rubric that assesses the learner's level of proficiency on single-pilot resource management (SRM), which is the cognitive or decision-making aspect of flight training.
For maneuvers or procedures, at the completion of the scenario the learner is able to:
- Describe: Describe the physical characteristics and cognitive elements of the scenario activities but needs assistance to execute the maneuver or procedure successfully.
- Explain: Describe the scenario activity and understand the underlying concepts, principles, and procedures that comprise the activity, but needs assistance to execute the maneuver or procedure successfully.
- Practice: Plan and execute the scenario. Coaching, instruction, and/or assistance will correct deviations and errors identified by the instructor.
- Perform: Perform the activity without instructor assistance. The learner will identify and correct errors and deviations in an expeditious manner. At no time will the successful completion of the activity be in doubt. ("Perform" is used to signify that the learner is satisfactorily demonstrating proficiency in traditional piloting and systems operation skills).
- Not observed: Any event not accomplished or required.
For assessing risk management skills, the learner is able to:
- Explain: Identify, describe, and understand the risks inherent in the flight scenario, but needs to be prompted to identify risks and make decisions.
- Practice: Identify, understand, and apply SRM principles to the actual flight situation. Coaching, instruction, and/or assistance quickly corrects minor deviations and errors identified by the instructor. The learner is an active decision maker.
- Manage-Decide: Gather the most important data available both inside and outside the flight deck, identify possible courses of action, evaluate the risk inherent in each course of action, and make the appropriate decision. Instructor intervention is not required for the safe completion of the flight.
When deciding how to assess learner progress, aviation instructors can follow a four-step process:
- Determine level-of-learning objectives.
- List indicators of desired behaviors.
- Establish criterion objectives.
- Develop criterion-referenced test items.
Authentic assessment may not be as useful as traditional assessment in the early phases of training. The learner does not have enough information about the concepts or knowledge to participate. When exposed to a new topic, learners first tend to acquire and memorize facts. When learners possess the knowledge needed to analyze, synthesize, and evaluate, they can participate more fully in the assessment process.
An effective critique considers good as well as bad performance, the individual parts, relationships of the individual parts, and the overall performance.
Critique can take several forms, including instructor/learner critique, learner-led critique, small group critique, individual critique of one learner by another, and written critique.
The most common means of assessment is direct or indirect oral questioning of learners by the instructor. Proper quizzing by the instructor can have a number of desirable results, such as:
- Reveals the effectiveness of the instructor's training methods
- Checks learner retention of what has been learned
- Reviews material already presented to the learner
- Can be used to retain learner interest and stimulate thinking
- Emphasizes the important points of training
- Identifies points that need more emphasis
- Checks comprehension of what has been learned
- Promotes active learner participation, which is important to effective learning
Characteristics of Effective Questions
The instructor should devise and write pertinent questions in advance. To be effective, questions must:
- Apply to the subject of instruction.
- Be brief and concise, but also clear and definite.
- Be adapted to the ability, experience, and stage of training of the learners.
- Center on only one idea (limited to who, what, when, where, how, or why, not a combination).
- Present a challenge to the learners.
Types of Questions to Avoid
- Yes/No: "Do you understand?"
- Puzzle: "What is the first action you should take if a conventional gear airplane with a weak right brake is swerving left in a right crosswind during a full flap, power-on wheel landing?"
- Oversize: "What do you do before beginning an engine overhaul?"
- Toss-up: "In an emergency, should you squawk 7700 or pick a landing spot?"
- Bewilderment: "In reading the altimeter — you know you set a sensitive altimeter for the nearest station pressure — if you take temperature into account, as when flying from a cold air mass through a warm front, what precaution should you take when in a mountainous area?"
- Trick questions: These questions cause the learners to develop the feeling that they are engaged in a battle of wits with the instructor, and the whole significance of the subject of the instruction involved is lost.
- Irrelevant questions: Diversions that introduce only unrelated facts and thoughts and slow the learner's progress. Questions unrelated to the test topics are not helpful in evaluating the learner's knowledge of the subject at hand.
Answering Learner Questions
- Be sure that you clearly understand the question before attempting to answer.
- Display interest in the learner's question and frame an answer that is as direct and accurate as possible.
- After responding, determine whether or not the learner is satisfied with the answer.
When assessing learner performance in scenario-based training, the instructor should:
- Orient new learners to the scenario-based training system.
- Help the learner become a confident planner and in-flight manager of each flight and a critical evaluator of their own performance.
- Help the learner understand the knowledge requirements present in real world applications.
- Diagnose learning difficulties and help the individual overcome them.
- Be able to evaluate learner progress and maintain appropriate records.
- Provide continuous review of learning.
Proper oral quizzing by the instructor during a lesson identifies points which need more emphasis.
Before students willingly accept their instructor's critique, they must first accept the instructor.
To be effective in oral quizzing during the conduct of a lesson, a question should be of suitable difficulty for that stage of training.
A written test has validity when it measures what it is supposed to measure, not when it yields consistent results (reliability).
A written test is said to be comprehensive when it samples liberally whatever is being measured.
Supply-type test items cannot be graded with uniformity; a characteristic of such items is that the same test graded by different instructors would probably be given different scores.
Effective multiple-choice test items should keep all alternatives of approximately equal length.
Which is one of the major difficulties encountered in the construction of multiple-choice test items? Inventing distractors which will be attractive to students lacking knowledge or understanding.
In a written test, "matching" selection-type items reduce the probability of guessing correct responses.
I. Fundamentals of Instructing
Task D: Assessment and Critique
Objective: To determine that the applicant exhibits instructional knowledge of assessments and critiques by describing:
- Purpose of assessment
- General characteristics of effective assessment
- Traditional assessment
- Authentic assessment
- Oral assessment
- Characteristics of effective questions
- Types of questions to avoid
- Instructor/student critique
- Student-led critique
- Small group critique
- Individual student critique by another student
- Written critique
- What is the purpose of assessment?
- What makes an assessment effective?
- What are the different types of assessments you can expect to use as an instructor?
- Why would you use a traditional assessment? An authentic assessment?
- What is the purpose of an oral assessment?
- What is a rubric?
- What are the characteristics of good questions?
- What types of questions should you avoid?
- What is the purpose of critique?
- What are some different types of critique?
- What are the four steps of a collaborative critique?