Dynamic Learning Maps
Acknowledgements
1 Overview
1.1 Current DLM Collaborators for Development and Implementation
1.2 Student Population
1.3 Assessment
1.4 Assessment Models
1.5 Theory of Action and Interpretive Argument
1.6 Key Features
1.7 Technical Manual Overview
2 Content Structures
2.1 Essential Elements
2.2 Development of the Essential Elements
2.3 Development of the Learning Maps
2.3.1 Identification and Representation of Learning Targets
2.3.2 Identification and Representation of Critical Supporting Knowledge, Skills, and Understandings
2.3.3 Development of Connections Between Nodes
2.3.4 Accessibility of Nodes and Pathways
2.3.5 Linking the Learning Maps to the Essential Elements
2.4 Organizing the Learning Maps: Claims and Conceptual Areas
2.5 System Structure
2.6 Evaluation of the Learning Map Structure
2.6.1 Educator and Expert Review
2.6.2 Empirical Analyses of the Learning Maps
2.7 Development of Assessment Blueprints
2.7.1 Blueprint Development Process
2.8 Alignment
2.8.1 Alignment of College and Career Readiness Standards and Essential Elements
2.8.2 Alignment of Essential Element and Target-Level Nodes
2.8.3 Vertical Articulation of Linkage Levels for Each Essential Element
2.9 Learning Maps for the Operational Assessment
2.10 Conclusion
3 Assessment Design and Development
3.1 Assessment Structure
3.2 Items and Testlets
3.2.1 English Language Arts Reading Testlets
3.2.2 English Language Arts Writing Testlets
3.2.3 Mathematics Testlets
3.2.4 Alternate Testlets for Students Who Are Blind or Have Visual Impairments
3.2.5 Practice Activities and Released Testlets
3.3 Test Development Procedures
3.3.1 Test Development Principles
3.3.2 Overview of the Testlet Development Process
3.3.3 Testlet and Item Writing
3.3.4 ELA Text Development
3.3.5 External Reviews
3.4 Alignment of Learning Map Nodes within a Linkage Level and Assessment Items
3.5 Evidence of Students’ Response Process
3.6 Evidence of Item Quality
3.6.1 Field Testing
3.6.2 Operational Assessment Items for 2021–2022
3.6.3 Evaluation of Item-Level Bias
3.7 Conclusion
4 Assessment Delivery
4.1 Overview of General Administration Features
4.1.1 The Kite Suite
4.1.2 Assessment Delivery Modes
4.1.3 Accessibility
4.2 Key Features of the Instructionally Embedded Assessment Model
4.2.1 Instruction and Assessment Planner
4.2.2 Testlet Assignment
4.2.3 Assessment Administration Windows
4.2.4 Summary of the Instructionally Embedded Assessment Process
4.3 Resources and Materials
4.3.1 Test Administrator Resources
4.3.2 District-Level Staff Resources
4.4 Test Administrator Responsibilities and Procedures
4.4.1 Before Beginning Assessments
4.4.2 Administration in the Fall and Spring Windows
4.4.3 Preparing for Next Year
4.5 Security
4.5.1 Training and Certification
4.5.2 Maintaining Security During Test Administration
4.5.3 Security in the Kite Suite
4.5.4 Secure Test Content
4.5.5 Data Security
4.5.6 State-Specific Policies and Practices
4.6 Evidence From the DLM System
4.6.1 Administration Time
4.6.2 Device Usage
4.6.3 Blueprint Coverage
4.6.4 Linkage Level Selection
4.6.5 Administration Incidents
4.6.6 Accessibility Support Selections
4.7 Evidence From Monitoring Assessment Administration
4.7.1 Test Administration Observations
4.7.2 Formative Monitoring Techniques
4.7.3 Monitoring Testlet Delivery
4.7.4 Data Forensics Monitoring
4.8 Evidence From Test Administrators
4.8.1 User Experience With the DLM System
4.8.2 Opportunity to Learn
4.8.3 Educator Ratings on First Contact Survey
4.8.4 Educator Cognitive Labs
4.9 Conclusion
5 Modeling
5.1 Psychometric Background
5.2 Essential Elements and Linkage Levels
5.3 Overview of the DLM Modeling Approach
5.3.1 Model Specification
5.3.2 Model Calibration
5.3.3 Estimation of Student Mastery Probabilities
5.4 Model Evaluation
5.4.1 Model Fit
5.4.2 Classification Accuracy
5.5 Calibrated Parameters
5.5.1 Probability of Masters Providing Correct Response
5.5.2 Probability of Nonmasters Providing Correct Response
5.5.3 Item Discrimination
5.5.4 Base Rate Probability of Class Membership
5.6 Conclusion
6 Standard Setting
6.1 Original Standard Setting Process
6.1.1 Standard Setting Approach: Rationale and Overview
6.1.2 Policy Performance Level Descriptors
6.1.3 Profile Development
6.1.4 Panelists
6.1.5 Meeting Procedures
6.1.6 Smoothing the Cut Points
6.1.7 Results
6.1.8 External Evaluation of Standard Setting Process and Results
6.1.9 Grade Level and Subject Performance Level Descriptors
6.2 Conclusion
7 Reporting and Results
7.1 Student Participation
7.2 Student Performance
7.2.1 Overall Performance
7.2.2 Subgroup Performance
7.3 Mastery Results
7.3.1 Mastery Status Assignment
7.3.2 Linkage Level Mastery
7.4 Additional Scoring Evidence
7.4.1 Pilot Survey Defining Student Mastery
7.4.2 Writing Sample Scoring
7.4.3 Alignment of At Target Achievement With Postsecondary Opportunities
7.5 Data Files
7.6 Score Reports
7.6.1 Individual Student Score Reports
7.6.2 Aggregate Reports
7.6.3 Interpretation Resources
7.7 Quality-Control Procedures for Data Files and Score Reports
7.8 Conclusion
8 Reliability
8.1 Background Information on Reliability Methods
8.2 Methods of Obtaining Reliability Evidence
8.2.1 Reliability Sampling Procedure
8.3 Reliability Evidence
8.3.1 Linkage Level Reliability Evidence
8.3.2 Conditional Reliability Evidence by Linkage Level
8.3.3 Essential Element Reliability Evidence
8.3.4 Conceptual Area and Claim Reliability Evidence
8.3.5 Subject Reliability Evidence
8.3.6 Performance Level Reliability Evidence
8.4 Conclusion
9 Training and Professional Development
9.1 Training for State Education Agency Staff
9.2 Training for Local Education Staff
9.3 Required Training for Test Administrators
9.3.1 Facilitated Training
9.3.2 Self-Directed Training
9.3.3 Training Content
9.3.4 Completion of All Modules
9.4 Instructional Professional Development
9.4.1 Professional Development Participation and Evaluation
9.5 Conclusion
10 Validity Argument
10.1 Validity Framework
10.2 Intended Uses
10.3 Theory of Action
10.4 Propositions and Validity Evidence
10.4.1 Design
10.4.2 Delivery
10.4.3 Scoring
10.4.4 Long-Term Outcomes
10.5 Evaluation Summary
10.6 Continuous Improvement
10.6.1 Design Improvements
10.6.2 Delivery Improvements
10.6.3 Scoring and Reporting Improvements
10.7 Future Research
11 References
Appendix
A Supplemental Information About the Overview
A.1 List of Terms
A.2 List of Acronyms
B Supplemental Information About the Content Structures
B.1 Assessment Blueprints
B.1.1 Blueprints for English Language Arts
B.1.2 Blueprints for Mathematics
C Supplemental Information About Assessment Design and Development
C.1 Differential Item Functioning Plots
C.1.1 Uniform Model
C.1.2 Combined Model
D Supplemental Information About Assessment Delivery
D.1 First Contact Survey Items Used for Determining Complexity Bands
D.1.1 Expressive Communication
D.1.2 English Language Arts
D.1.3 Mathematics
D.1.4 Writing
D.2 Distribution of Essential Elements Tested
E Supplemental Information About Standard Setting
E.1 Example Grade and Subject Performance Level Descriptors
2021–2022 Technical Manual
Instructionally Embedded Model
December 2022
Copyright © 2022 Accessible Teaching, Learning, and Assessment Systems (ATLAS)