Are your programs making the impact you envision, or are they costing more than they give back? A few years ago, I worked with an organization grappling with a tough question: which programs should we keep, grow, or let go? They felt stretched thin, with some initiatives thriving and others barely holding on. They clearly needed a sharper strategy to align their programs with their long-term goals. We introduced a tool that breaks programs into four categories: Heart, Star, Stop Sign, and Money Tree, each with its own strategic path.
- Heart: These programs deliver immense value but come with high costs. The team asked, "Can we achieve the same impact with a leaner approach?" They restructured staffing and reduced overhead, preserving the program's impact while cutting costs by 15%.
- Star: High-impact, high-revenue programs that beg for investment. The team explored expanding partnerships for a standout program and saw a 30% increase in revenue within two years.
- Stop Sign: Programs that drain resources without delivering results. One initiative had consistently low engagement. They gave it a six-month review period but ultimately decided to phase it out, freeing resources for more promising efforts.
- Money Tree: The revenue-generating champions. Here, the focus was on growth: investing in marketing and improving operations to double their margin within a year.
This structured approach led to more confident decision-making and, most importantly, brought them closer to their goal of sustainable success. According to a report by Bain & Company, organizations that regularly assess program performance against strategic priorities see a 40% increase in efficiency and long-term viability. Yet many teams shy away from the hard conversations this requires. The lesson? Not every program needs to stay. Evaluating them through a thoughtful lens of impact and profitability ensures you're investing where it matters most. What's a program in your organization that could benefit from this kind of review?
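To make the four quadrants concrete, here is a minimal sketch in Python of how a portfolio could be scored. The thresholds, field names, and example programs are illustrative assumptions, not the tool the organization actually used.

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions): in practice these would come from your
# own impact rubric and financial benchmarks.
IMPACT_THRESHOLD = 7.0   # impact score on a 0-10 scale
MARGIN_THRESHOLD = 0.0   # net margin: revenue minus fully loaded cost

@dataclass
class Program:
    name: str
    impact_score: float  # mission impact, 0-10
    net_margin: float    # revenue minus fully loaded cost

def classify(program: Program) -> str:
    """Place a program in one of the four quadrants described above."""
    high_impact = program.impact_score >= IMPACT_THRESHOLD
    profitable = program.net_margin >= MARGIN_THRESHOLD
    if high_impact and profitable:
        return "Star"        # invest and grow
    if high_impact:
        return "Heart"       # keep the impact, look for a leaner delivery model
    if profitable:
        return "Money Tree"  # harvest the surplus and reinvest it
    return "Stop Sign"       # set a review period, then phase out if nothing changes

# Hypothetical portfolio review
portfolio = [
    Program("Youth mentoring", impact_score=9.1, net_margin=-40_000),
    Program("Annual gala", impact_score=4.2, net_margin=85_000),
]
for p in portfolio:
    print(f"{p.name}: {classify(p)}")
```

In practice the impact score would come from your own evaluation rubric and the margin from fully loaded program budgets; the value of the exercise is in agreeing on those inputs, not in the code.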
Program Review Guidelines
Explore top LinkedIn content from expert professionals.
Summary
Program review guidelines are structured approaches organizations use to evaluate the impact, value, and alignment of their programs with strategic goals. By following these guidelines, teams can assess which programs to continue, improve, or discontinue, ensuring resources are invested where they matter most.
- Clarify objectives: Start by clearly defining what success looks like for each program and how it supports your organization’s broader mission.
- Use focused metrics: Build your reviews around actionable performance, operational, and cost metrics that connect program activities directly to business or community outcomes.
- Encourage ongoing reflection: Make regular time for team discussions that explore not just what was accomplished, but the real changes and significance behind those results.
The CDC has updated its Framework for Program Evaluation in Public Health for the first time in 25 years. This is an essential resource for anyone involved in programme evaluation, whether in public health, community-led initiatives, or systems change. It reflects how evaluation itself has evolved, integrating principles like advancing equity, learning from insights, and engaging collaboratively. The CDC team describes it as a "practical, nonprescriptive tool". The framework is designed for real-world application, helping practitioners move beyond just measuring impact to truly understand and improve programmes. I particularly like the way they frame common evaluation misconceptions, including:
1️⃣ Evaluation is only for proving success. Instead, it should help refine and adapt programmes over time.
2️⃣ Evaluation is separate from programme implementation. The best evaluations are integrated from the start, shaping decision-making in real time.
3️⃣ A "rigorous" evaluation must be experimental. The framework highlights that rigour is about credibility and usefulness, not just methodology.
4️⃣ Equity and evaluation are separate. The new framework embeds equity at every stage: who is involved, what is measured, and how findings are used.
Evaluation is about learning, continuous improvement, and decision-making, rather than just assessment or accountability. As they put it: "Evaluations are conducted to provide results that inform decision making. Although the focus is often on the final evaluation findings and recommendations to inform action, opportunities exist throughout the evaluation to learn about the program and evaluation itself and to use these insights for improvement and decision making."
This update is a great reminder that evaluation should be dynamic, inclusive, and action-oriented: a process that helps us listen better, adjust faster, and drive real change. "Evaluators have an important role in facilitating continuous learning, use of insights, and improvement throughout the evaluation (48,49). By approaching each evaluation with this role in mind, evaluators can enable learning and use from the beginning of evaluation planning. Successful evaluators build relationships, cultivate trust, and model the way for interest holders to see value and utility in evaluation insights."
Source: Kidder, D. P. (2024). CDC Program Evaluation Framework, 2024. MMWR Recommendations and Reports, 73.
-
Are you tired of discussing activities and outputs in your program review workshops? Me too! Here is how I am borrowing outcome harvesting concepts to change things. Most M&E systems are set up to collect output-level indicator data routinely and high-level outcome data only periodically (mostly annually through surveys, or only at endline 😑). So what happens during the implementation period? We come together and discuss activities and outputs. We can change that. Here is a step-by-step guide on changing this for your program:
1️⃣ Frame the workshop as a reflection focusing on observable intermediate changes. "We're here to harvest and reflect on the most meaningful changes we've contributed to, and how they shape our path forward." Define intermediate outcome: "A change in the behavior, relationships, actions, policies, or practices of a key actor (e.g., community members, government actors, partners)." Define the reflection timeframe, e.g., 6 months, 12 months, etc.
2️⃣ Use these guiding questions to surface observable changes: What has changed in the behavior, relationships, actions, or decisions of people or institutions you engage with? Who changed? In what way? Was the change intended or unexpected?
3️⃣ Explore the contribution of the program to the observed changes: What did we (the program) do that helped make this happen? Who else contributed? Were there external enabling or constraining factors?
4️⃣ Reflect on the significance: Why is this change important for the program goals? What does this tell us about what's working or not?
(This is just the first part of the reflection process; more in the comments 👇🏾)
Please 🔄 if helpful and follow me, Florence Randari, for more learning tips.
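As a small illustration of steps 2️⃣ to 4️⃣, the guiding questions could be captured as one record per harvested change. This is a rough sketch; the field names are my own shorthand for the questions above, not a formal outcome-harvesting template.

```python
from dataclasses import dataclass, field

@dataclass
class HarvestedOutcome:
    """One observable intermediate change captured during a reflection workshop.

    Field names are illustrative: they mirror the guiding questions in the post,
    not any formal outcome-harvesting schema.
    """
    who_changed: str                  # key actor: community members, government, partners
    what_changed: str                 # behavior, relationship, action, policy, or practice
    intended: bool                    # was the change intended or unexpected?
    program_contribution: str         # what the program did that helped make it happen
    other_contributors: list[str] = field(default_factory=list)
    significance: str = ""            # why this change matters for the program goals

# Hypothetical entry harvested over a 6-month reflection window
outcome = HarvestedOutcome(
    who_changed="District health office",
    what_changed="Now invites community health volunteers to quarterly planning meetings",
    intended=False,
    program_contribution="Facilitated joint review sessions and shared volunteer feedback data",
    other_contributors=["Local NGO partner"],
    significance="Signals stronger government ownership of community input",
)
print(outcome.who_changed, "-", outcome.what_changed)
```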
-
One of the hardest parts of being a TPM running AI/ML programs is assessing whether the work is delivering on the business goals you signed up for. This is where program reviews matter. If you're running MBRs or QBRs for AI/ML programs, you need to move the conversation beyond demos and accuracy numbers. Make it data-driven, with metrics that tie directly to customer and business outcomes. The TPM role is about driving clarity. In the context of AI/ML programs, that means making sure programs are evaluated on the same standards we use for any critical system: performance, reliability, and cost, all tied back to the business. At a high level, here is how I structure metrics in program reviews to keep them efficient and grounded, while accommodating the differing needs of AI/ML solutions in production:
1. **Performance metrics**: Agree on the right north-star metric for your specific business use case.
2. **Operational metrics**: These tell you whether the model can actually operate at scale and stay relevant as data changes.
3. **Cost metrics**: These force the team to show whether the spend is justified.
When I facilitate program reviews, these metrics keep discussions focused on outcomes, not technical details. They also create alignment across engineering, science, product, and finance, so that decisions are made using data.
👉 In my latest blog, I go deeper into how to set up these reviews, the specific metrics for each category, their definitions, and what they represent: https://lnkd.in/eWG63PXj
💭 Drop me a comment with questions or how you deal with reporting on AI/ML programs. I appreciate you taking the time to learn and share!
#TechnicalProgramManagement #AI #metrics #TPM #MachineLearning
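To illustrate the three metric categories above, here is a minimal sketch of a review scorecard in Python. The specific metric names, targets, and values are illustrative assumptions, not the metrics from the linked blog.

```python
# A sketch of a review scorecard grouped by the three categories named in the
# post. Metric names, targets, and actuals are hypothetical.
review_scorecard = {
    "performance": {
        "north_star_revenue_lift_pct": {"target": 5.0, "actual": 3.8, "lower_is_better": False},
    },
    "operational": {
        "p95_inference_latency_ms": {"target": 200, "actual": 240, "lower_is_better": True},
        "data_drift_alerts_per_quarter": {"target": 1, "actual": 3, "lower_is_better": True},
    },
    "cost": {
        "monthly_serving_cost_usd": {"target": 40_000, "actual": 52_000, "lower_is_better": True},
    },
}

def flag_misses(scorecard: dict) -> list[str]:
    """Return the metrics that missed target, to focus the MBR/QBR discussion."""
    misses = []
    for category, metrics in scorecard.items():
        for name, m in metrics.items():
            missed = (m["actual"] > m["target"]) if m["lower_is_better"] else (m["actual"] < m["target"])
            if missed:
                misses.append(f"{category}/{name}: target {m['target']}, actual {m['actual']}")
    return misses

print("\n".join(flag_misses(review_scorecard)))
```

Surfacing the misses up front keeps the review conversation on business trade-offs rather than model internals.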