
Data and Instrumentation
Description of data sources and methods to collect data
What data will be collected and used to answer each evaluation question?
Describe relevant existing datasets (qualitative or quantitative?)
How will you collect new data? (Refer to example instruments in appendices)
Sampling
Describe your sample (for example, is it a convenience sample, a purposeful sample, etc.?)
How many participants will there be in your data sample?

Here are the directions for this assignment:
This draft plan (completed as a group) will be exchanged with one other group (as assigned by me) for formative feedback. Feedback will describe the ways you are adhering to, and the extent to which you are addressing, the 30 Program Evaluation Standards. For the DRAFT plan, please follow the bold headings of the Evaluation Plan Outline as a starting place for writing, which I have also described in the slides. The Evaluation Plan should be approximately 10 pages (double-spaced) of written text (excluding figures like the logic model, appendices, and references), though there is no strict length requirement (much shorter plans probably have not adequately addressed the individual sections). Overly incomplete drafts will not count for credit or be distributed for peer review feedback.
NOTE: In completing your evaluation plans, try to incorporate language to show how you are meeting the 30 Program Evaluation Standards. For example, you might describe steps you are taking, or intend to take, to engage and involve stakeholders and to be responsive to their needs. You might also reflect on your decision to answer particular questions in specific ways as being for practical purposes, for meaningful uses, and/or in light of the efficiency of your (free) resources. Keep in mind that you are expected to incorporate external references to relevant evaluation sources (including course texts and relevant articles) – see the Assignment 5 rubric. Program evaluation standard EA3 will be addressed via peer review, and standard EA2 will be addressed through a self-reflective meta-evaluation within the evaluation report. A contract (template to be provided by the instructor) should be included, and you should refer to it when describing your purpose and methods; this will help address standard P2 (as well as others). You may find it unreasonable to touch on every standard in the writing of your plan, but you should at least make an effort to address the PES with appropriate citations. If a particular program evaluation standard (e.g., U3) is highly relevant to your evaluation effort, it is worth mentioning your efforts to address that standard in your writing.

Here are the three questions we are going to ask, along with the stakeholders, data used, and standards for each:
Question:
1. How often are students tested to see whether they are progressing and reading at their target level?
Stakeholders: students and teachers
Data used:
Test scores – data from three tests per school year (see the summary sketch after this question's standards)

Focus groups for students to share their experiences with the AR program

Surveys/questionnaires for teachers to complete
Standards:
• U2-Attention to Stakeholders
• U5-Relevant Information
• U6-Meaningful Processes and Products
• U7-Timely and Appropriate Communicating and Reporting
• P5-Transparency and Disclosure
• F1-Project Management
• F2-Practical Procedures
• F3-Contextual Viability
• A3-Reliable Information
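
To make the score data for this question concrete, here is a minimal sketch (in Python) of how per-student progress across the three yearly tests might be summarized. The record layout, field names, and sample values are hypothetical assumptions for illustration, not the actual AR/STAR export format.

# Hypothetical sketch: summarizing per-student progress across the
# three reading tests administered each school year. The record
# layout and values are illustrative assumptions, not AR's format.

def summarize(records):
    """Group (student_id, test_date, score, on_target) records by
    student and report score growth across the year."""
    by_student = {}
    for sid, date, score, on_target in records:
        by_student.setdefault(sid, []).append((date, score, on_target))
    for sid, tests in sorted(by_student.items()):
        tests.sort()  # ISO date strings sort chronologically
        growth = tests[-1][1] - tests[0][1]
        print(f"{sid}: {len(tests)} tests, growth {growth:+d}, "
              f"on target at year end: {tests[-1][2]}")

# Placeholder data: three tests per year for two students.
summarize([
    ("S001", "2023-09-15", 412, False),
    ("S001", "2024-01-10", 455, True),
    ("S001", "2024-05-05", 470, True),
    ("S002", "2023-09-15", 380, False),
    ("S002", "2024-01-10", 395, False),
    ("S002", "2024-05-05", 430, True),
])

A summary like this would let teachers see at a glance whether each student's three data points show growth toward the target level, which feeds directly into the reporting discussed under U7.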
Question:
2. Who is looking at the data and interpreting the outcomes?
Stakeholders: Teachers, librarians, administrators, parents
Data used:
Test scores – data from three tests per school year

Surveys/questionnaires for teachers, librarians, administrators and parents to complete
Standards:
• U2-Attention to Stakeholders
• U3-Negotiated Purposes
• U4-Explicit Values
• U5-Relevant Information
• U6-Meaningful Processes and Products
• F1-Project Management
• F2-Practical Procedures
• F4-Resource Use
• P5-Transparency and Disclosure
• A6-Sound Designs and Analyses
• A8-Communication and Reporting
Question:
3. How accurately does the Accelerated Reader (AR) program identify students for level placement in the Response to Intervention (RTI) program?
Stakeholders: Students, teachers, RTI and SPED directors, parents
Data used: Test scores from the AR program's STAR benchmark assessments, together with measurement data from the universal screener fluency and comprehension tests (see the comparison sketch after this question's standards)
Standards:
• U2-Attention to Stakeholders
• U6-Meaningful Processes and Products
• U7-Timely and Appropriate Communicating and Reporting
• F1-Project Management
• F3-Contextual Viability
• P4-Clarity and Fairness
• P5-Transparency and Disclosure
• A7-Explicit Evaluation Reasoning
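
As a sketch of how this placement-accuracy question might be analyzed, the Python example below computes simple percent agreement between AR/STAR placement tiers and universal screener tiers, and lists disagreements for follow-up. The tier labels and student data are hypothetical placeholders; the real analysis would use exported placement records from both instruments.

# Hypothetical sketch: percent agreement between AR/STAR benchmark
# placements and universal screener placements for RTI tiers.
# Tier labels and student IDs are illustrative assumptions.

star_tier = {"S001": "Tier 1", "S002": "Tier 2", "S003": "Tier 2",
             "S004": "Tier 3", "S005": "Tier 1"}
screener_tier = {"S001": "Tier 1", "S002": "Tier 2", "S003": "Tier 1",
                 "S004": "Tier 3", "S005": "Tier 1"}

# Compare only students with a placement from both instruments.
shared = star_tier.keys() & screener_tier.keys()
matches = sum(star_tier[s] == screener_tier[s] for s in shared)
print(f"Placement agreement: {matches}/{len(shared)} students "
      f"({matches / len(shared):.0%})")

# Disagreements flag students whose AR placement may need review.
for s in sorted(shared):
    if star_tier[s] != screener_tier[s]:
        print(f"  {s}: STAR={star_tier[s]}, screener={screener_tier[s]}")

Making this comparison explicit supports A7 (Explicit Evaluation Reasoning), since the judgment about AR's placement accuracy rests on a stated, reproducible calculation rather than impressions.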
Some additional instructions:
When writing your plan, generally speaking, you should include language as to how you are meeting the 30 PES. For example, when you describe how you will collect data, you could talk about the use of practical procedures to limit disruption, or when you talk about identifying questions, you might discuss how various stakeholder groups were consulted to get involvement from key stakeholders.
When talking about program performance, this is where you need to define your own benchmarks. To explain whether the program is doing a good job or not, you need to define what "good" is; standards/benchmarks need to be set. Thus, if you expect the program will decrease absences, how much of a decrease in absences is a good amount? These benchmarks should be defended. For example, the district may have mandated a new program to decrease absences by 2.5%; in other words, a standard may be determined in the founding documents. Another option is to consider a program administrator's input: the administrator may be able to weigh in on what level of decrease would be "bad." Sometimes a comparable program can be used to set a standard, or a program association may have defined acceptable practice (as shown in the live session).
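
As a minimal sketch of applying such a benchmark, the Python example below checks a hypothetical absence rate against the 2.5% target mentioned above, treating the target as a percentage-point decrease. All numbers are placeholders; the real benchmark should come from founding documents or stakeholder input, as described in the paragraph above.

# Hypothetical sketch: testing program performance against a defended
# benchmark (here, a district-mandated 2.5 percentage-point decrease
# in absences). Baseline and current rates are placeholder values.

BENCHMARK_DECREASE = 0.025  # 2.5 points, e.g., from founding documents

baseline_absence_rate = 0.080  # 8.0% before the program (placeholder)
current_absence_rate = 0.052   # 5.2% after the program (placeholder)

actual_decrease = baseline_absence_rate - current_absence_rate
meets = actual_decrease >= BENCHMARK_DECREASE

print(f"Decrease in absences: {actual_decrease:.1%} "
      f"(target {BENCHMARK_DECREASE:.1%}); the program "
      f"{'meets' if meets else 'does not meet'} the benchmark")

The key design point is that the threshold is named and defended up front, so the verdict follows mechanically from the data rather than being decided after the results are seen.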
The 30 program evaluation standards define good evaluation practice (according to the JCSEE, which is intended to represent practicing evaluators' interests), but they do not define good program practice. Judging good/bad program practice involves the evaluator weighing what is valued by different stakeholder groups and considering society as a whole.
Please ask if you need additional info and do not guess on anything!
