Where Evaluation Meets Improvement
The IME integrates tested tools and strategies to generate actionable insights, foster learning, and drive improvement.
The Improvement Model for Evaluation (IME) was created to support evaluators with practical tools, flexible methods, and real-time strategies rooted in improvement science.
Our goal is simple: to help evaluators strengthen their practice, make evaluation more useful for decision-making, and promote continuous learning across programs and organizations. It is where evaluation meets improvement.
The Improvement Model for Evaluation (IME) combines the proven structure of the Model for Improvement (MFI) with the dynamic needs of modern evaluation practice.
Grounded in principles of iterative learning, rapid-cycle testing, and stakeholder collaboration, the IME approach equips evaluators with practical tools to generate timely, actionable insights — not just after a program ends, but throughout its implementation.
By bridging the gap between implementation and evaluation, the IME framework answers the growing demand for faster, more usable results, offering a pathway to evaluations that are more adaptive, useful, and meaningful for both practitioners and decision-makers.
The Improvement Model for Evaluation (IME) brings the structure and rapid learning cycles of quality improvement into evaluation practice. By embedding real-time learning, stakeholder collaboration, and continuous adaptation, IME helps evaluators deliver faster, more actionable insights — making evaluation a true engine for improvement.
“Of all changes I've observed, about 5 percent were improvements; the rest, at best, were illusions of progress.” – W. Edwards Deming