
An Evaluation Blog for Non-Evaluation Program Leaders

Here, I write about some of the tricky situations that we run into when designing evaluation processes, and how to handle them intelligently and with grace.

Capturing Lessons Learned: A Key to Evaluation Success

August 18, 2024

Tracy Borrelli

 

Every evaluation project, no matter how straightforward or complex, offers valuable opportunities to learn. Lessons learned are insights gained throughout a project that can inform and improve future work. These might highlight successes worth repeating, pitfalls to avoid, or processes to refine. For program leaders, ensuring these lessons are documented is not just a best practice but a critical investment in organizational knowledge and efficiency.

In my own experience, lessons learned have often been informal—a document on my desk where I jot down bullet points as they arise. Whether it’s a stakeholder’s observation about communication delays or a logistical oversight, these lessons have proven invaluable for improving future projects.


The Benefits of Documenting Lessons Learned

  • Continuous Improvement: Capturing lessons ensures that insights from one project can inform future efforts, reducing the likelihood of repeating mistakes.

  • Informed Decision-Making: With a record of lessons learned, program leaders have a ready reference to guide strategic choices.

  • Team Development and Recognition: Documenting what worked well allows leaders to celebrate team strengths, acknowledge efforts, and highlight how challenges were overcome. This fosters a positive team culture and builds morale.

  • Stakeholder Trust: Demonstrating that lessons are captured and addressed builds confidence among stakeholders and funders.

  • Adaptability: Recognizing external factors beyond the team’s control helps teams focus on what they can influence and adapt effectively without wasting time or energy.

Real-World Examples

Here are a few lessons learned from my projects:

  1. Communication Delays: During one project, a program leader noted, “We need to get the minutes out faster to that committee than we have been.” Delayed communication had slowed decision-making, leading to frustration and missed opportunities for timely input.

  2. Survey Distribution Challenges: Instructions for distributing paper surveys to multiple program sites worked well for some staff but not others, especially when program managers were on vacation. This delay meant participant surveys were excluded from the final analysis, and honoraria distribution was significantly delayed. This risked undermining participant trust and created unexpected ethical and logistical challenges.

  3. Ethical Risks: In the case above, participants who shared sensitive information were left waiting for the promised gift. Over time, the likelihood of locating participants who had moved on increased, risking the perception that their contributions were undervalued.


The Risks of Not Documenting Lessons Learned

 

Failing to document lessons learned can have serious consequences:

  • Repetition of Mistakes: Teams risk repeating errors, wasting time and resources.

  • Erosion of Trust: Stakeholders may lose confidence in the team’s ability to manage projects effectively.

  • Missed Opportunities: Without a record, successful strategies or serendipitous discoveries might be forgotten.

  • Increased Costs: Unaddressed inefficiencies can lead to delays, budget overruns, and strained team dynamics.

Facilitating Lessons Learned Sessions

Lessons learned often emerge organically, through conversations or triggering events. However, deliberate facilitation can surface deeper insights. Here’s how to create a productive environment:

  1. Create a Safe Space: Ensure the process encourages open dialogue, especially when discussing challenges. Consider using a neutral facilitator if trust is an issue.

  2. Diversify Input: Involve team members, stakeholders, and participants to gain a comprehensive perspective.

  3. Use Asynchronous Tools: Digital platforms or shared documents allow people to contribute insights over time.

  4. Focus on Solutions: Frame lessons as opportunities to improve rather than criticisms of past performance.

  5. Celebrate Successes: Make time to reflect on what went well, recognizing team efforts and strengths that contributed to project success.

 

Tracking Lessons Learned

 

To maximize the usefulness of lessons learned, consider a simple, systematic approach:

  • Informal Notes: Keep a document handy for jotting down lessons as they arise.

  • Categorize Risks and Consequences: Record the potential impact of each lesson, whether it relates to costs, timelines, ethics, or trust.

  • Identify Triggers: Note whether the lesson emerged from a conversation, an event, or a facilitated session.

  • Systematize for Reporting: Consolidate lessons into a format that can be included in the final project report, ensuring they inform future decision-making.

  • Highlight Positives: Document team strengths and successful adaptations for future reference and morale-building.
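The tracking approach above can be as simple as a structured log. As a purely illustrative sketch (the field names and example entries here are invented, not part of any formal template), the categories described above might look like this in code:

```python
from dataclasses import dataclass

@dataclass
class Lesson:
    """One lessons-learned entry, using the categories described above."""
    summary: str            # the lesson itself, jotted down informally
    risk_area: str          # e.g. "costs", "timelines", "ethics", "trust"
    trigger: str            # e.g. "conversation", "event", "facilitated session"
    positive: bool = False  # flag strengths and successes, not just problems

def report(lessons):
    """Group lesson summaries by risk area for a final project report."""
    grouped = {}
    for lesson in lessons:
        grouped.setdefault(lesson.risk_area, []).append(lesson.summary)
    return grouped

# Hypothetical entries echoing the real-world examples earlier in this post
log = [
    Lesson("Committee minutes went out too slowly", "timelines", "conversation"),
    Lesson("Paper survey instructions failed when managers were away", "ethics", "event"),
    Lesson("Team adapted honoraria delivery quickly", "trust", "event", positive=True),
]
```

Grouping by risk area like this makes it easy to lift lessons straight into the relevant sections of a final report.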

 

A Template for Lessons Learned

 

To support program leaders, I’m developing a sharable template that includes:

  1. A section for jotting down lessons informally.

  2. Fields for identifying risks and consequences.

  3. Space to document triggers and sources of the lessons.

  4. Guidance on how to integrate lessons into final reports.

 

If you’d like to try a copy of this template, reach out to me directly. Eventually, I’ll make it available online alongside other resources for project leaders.

Conclusion: Lessons Learned Are Lessons Earned

Capturing and leveraging lessons learned is one of the most valuable practices for program leaders. By documenting insights and sharing them with the team, you’re not only improving your own projects but also creating a legacy of continuous improvement for your organization. Don’t let these lessons slip away—make them count for the benefit of everyone involved. Celebrate successes, acknowledge efforts, and turn challenges into opportunities for growth. Lessons learned are a testament to the team’s resilience, adaptability, and commitment to excellence.

The Value of Focus Groups: Listening to Clients and Learning What Matters

June 30, 2023

Tracy Borrelli

 

Focus groups provide a unique opportunity to connect directly with program clients, gaining deep insights into their experiences, preferences, and needs. Unlike surveys, which may limit responses to predefined options, focus groups allow for open-ended dialogue and nuanced feedback. For programs serving vulnerable adults, focus groups can illuminate blind spots, challenge assumptions, and guide meaningful improvements. However, conducting focus groups requires technical skill, careful planning, and a deep commitment to respecting and caring for participants throughout the process.


A Case Study: When Assumptions Miss the Mark

A few years ago, I facilitated a focus group for a program working with clients who had many experiences of grief and loss. The program had a commitment to maintain a relationship with clients for as long as the client needed them: a lifelong commitment that sometimes ended with the death of the client. This focus group followed months of disagreements among staff and management over whether photos of deceased clients displayed in the lobby were appropriate. Some staff argued the display honored clients’ memory, while others raised concerns about potential breaches of privacy and the emotional impact on current clients seeking support. Calls for compassion and alternative suggestions were dismissed as infeasible, leading to a polarized debate.

 

To break the impasse, we added related questions to the annual client satisfaction survey. However, the survey results were evenly split between keeping the photos up and removing them, offering little clarity. Recognizing the need for a more in-depth exploration, we collaborated with the team lead and a recreation therapist experienced in art therapy to design a focus group.

Designing the Focus Group

The focus group was carefully planned to ensure participants felt safe and supported. Key elements included:

  • Participant Selection: Clients who were articulate, well enough to participate, and comfortable discussing topics related to death and dying were invited.

  • Session Structure: The session began with an icebreaker, ground rules, and an introduction to the session’s purpose. Clients were assured they could participate as much or as little as they chose.

  • Boundaries and Honoraria: The door was closed after 10 minutes to support and maintain group trust. This also reduced interruptions, helping to create a safe environment and flow of discussion. Latecomers received their honoraria and were thanked for making the effort, even though they couldn’t join the session. There were eight group members in total.

  • Facilitation Tools: We used subject-neutral facilitator photo cards to prompt a strengths-based discussion about what it means to die well and the aspects of death and grief that clients felt they could control. Clients were also encouraged to paint or draw their ideas about how the photos of deceased clients could be used in the lobby.

  • Support and Breaks: A therapist was on hand for private conversations if needed. Breaks with pizza and refreshments allowed participants to recharge and step outside if desired.


What We Learned

The focus group revealed that the issue wasn’t the lobby photos themselves but a deeper need for grief counseling. Clients shared their experiences of profound loss and expressed a desire for more explicit support in healing from these experiences. The discussion shifted from a binary decision about the photos to a broader recognition of the importance of addressing grief and loss within the program.

The Unexpected Outcome

In the end, the photos were taken down, but the focus group’s impact went far beyond this decision. The program began asking clients directly about their experiences of loss and brought in palliative care physicians to provide education to the team on death and dying. These changes stemmed from the clients’ feedback and represented a significant shift in the program’s approach to supporting its participants.

How Leaders Can Use This Information

For leaders, this case highlights the importance of:

  1. Persisting Through Ambiguity: When surveys or initial feedback don’t provide clear answers, focus groups can uncover deeper insights.

  2. Centering Client Needs: Instead of framing decisions around staff or management perspectives, start by asking clients what they need most.

  3. Fostering Collaboration: Engaging staff with relevant expertise, such as the recreation therapist in this case, can enrich the focus group design.

  4. Using Data Thoughtfully: Leaders can leverage focus group findings to guide program improvements, build consensus among stakeholders, and demonstrate responsiveness to client needs.

Technical Tips for Successful Focus Groups

  1. Define the Purpose: Be clear about the goals of the focus group. What do you hope to learn, and how will the findings be used?

  2. Recruit Thoughtfully: Select a diverse group of participants who represent the client population. Ensure the invitation process is inclusive and transparent.

  3. Create a Safe Space: Set ground rules for respectful dialogue and ensure confidentiality. Participants should feel comfortable sharing their thoughts without fear of judgment.

  4. Prepare Open-Ended Questions: Use prompts that encourage discussion, such as:

    • “What aspects of the program make you feel most supported?”

    • “Are there things you’d like to see changed?”

    • “How do you feel about [specific program elements]?”

  5. Facilitate with Care: Be attentive to participants’ emotional well-being, especially when discussing sensitive topics. Have resources available in case someone becomes distressed.

  6. Document and Analyze: Record the session (with consent) and take detailed notes. Look for themes and patterns in the feedback to inform actionable recommendations.
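For the "document and analyze" step, even a lightweight pass over coded notes can surface patterns. A minimal sketch, assuming each participant comment has already been assigned a theme code during note review (the codes below are invented for illustration):

```python
from collections import Counter

# Hypothetical theme codes assigned to participant comments during coding
coded_comments = [
    "grief_support", "photo_display", "grief_support",
    "privacy", "grief_support", "photo_display",
]

theme_counts = Counter(coded_comments)

# List themes from most to least frequent to inform recommendations
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Frequency alone never replaces careful reading of the transcript, but it helps confirm which themes deserve prominence in the report.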

Conclusion: The Power of Listening

Focus groups are powerful tools for building trust, fostering collaboration, and ensuring programs align with the needs and preferences of their clients. By approaching these discussions with humility, care, and a willingness to learn, program leaders can uncover valuable insights and make meaningful improvements.

Successful leaders use focus groups not only to gather data but to foster a culture of client-centered decision-making. This means listening deeply, acting on what is learned, and demonstrating to clients that their voices truly matter. When done well, focus groups can illuminate new paths forward, transforming programs and deepening their impact.

Designing Client Satisfaction Surveys: Building Trust and Measuring Impact

March 18, 2023

Tracy Borrelli

 

Client satisfaction surveys are essential tools for understanding how well a program meets the needs of its participants. For programs supporting vulnerable adults and families, satisfaction is more than a transactional measure of service delivery; it reflects the strength of the relationship between the program and its clients. This relationship, built on trust and hope, is often the foundation for clients to engage fully with services, try new approaches, and experience positive outcomes.


A thoughtfully designed client satisfaction survey can provide insights into whether programs are fostering these key connections. That said, it is valuable to consider the nature of the relationship when designing the survey. While some programs with short-term interventions may focus on immediate benefits built on trust and hope, most long-term or holistic programs will want to build deeper relationships with clients to influence and achieve sustained impact.

Understanding the Relationship at the Heart of the Survey

 

The relationship between a program and its clients is central to the success of the services offered. Before designing a satisfaction survey, program leaders should ask themselves questions that help define the kind of relationship they want to have with their clients. These questions should move beyond demographics and catchphrases and focus on the deeper connections that support client engagement and trust:

  • What kind of relationship do we want clients to feel they have with us? Supportive? Empowering? Collaborative?

  • How do we show clients that we are invested in their well-being and success?

  • What do we want clients to experience when they interact with us, and how do we make that happen consistently?

  • How can we demonstrate empathy and respect in every interaction?

  • What kind of feedback are we willing to act on to strengthen our relationship with clients?

  • How do we ensure clients feel safe and valued in their engagement with our program?

  • How can we build trust so that clients feel comfortable sharing their needs, goals, and concerns?


By reflecting on these relational dynamics, programs can craft questions that focus on meaningful engagement rather than logistical or transactional measures. This approach ensures the survey aligns with the program’s mission and fosters a deeper understanding of the client experience.

Transactional vs. Relational Satisfaction Surveys

A transactional survey often measures logistics, inputs, and outputs—essentially, whether the program delivered services as promised. For example, the last survey I worked on asked clients if they received specific services and included a general question about whether their worker treated them with respect. While this approach is useful for demonstrating accountability to funders, it may overlook critical aspects of the client’s experience.

Relational surveys, in contrast, focus on the quality of the relationship and its impact on the client. Questions might include:

  • Do you feel that your program worker understands your needs and goals?

  • Do you trust your program worker to act in your best interest?

  • Have the program’s services made a positive difference in your life?

  • Do you feel respected and valued by the program staff?

  • Are the program’s supports helping you achieve your goals?

 

These types of questions provide deeper insights into how clients perceive their relationship with the program, which is often the key to fostering engagement and achieving outcomes.

Lessons from a Recent Client Satisfaction Survey

In a recent project, we designed a client satisfaction survey based on services provided rather than the relational aspects of the program. The survey included questions about specific services delivered but did not adequately address how clients experienced those services. When the results were presented, stakeholders—particularly clients with Indigenous backgrounds—highlighted gaps in the survey’s ability to capture their experiences. Additionally, newer services introduced since the last survey weren’t well-represented, raising concerns about the survey’s relevance across programs.

 

The analysis and reporting process also presented challenges. Some programs had too few survey participants to ensure anonymity in the results, which required combining their data into larger organizational reports. This approach limited the usefulness of the findings for individual programs and underscored the importance of designing surveys that are both inclusive and actionable.
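The anonymity constraint described above is often handled with a minimum-cell-size rule: any program with fewer than a threshold number of respondents gets folded into a combined organizational report rather than reported on its own. A hypothetical sketch (the threshold of 5 and the program names are illustrative, not a recommendation for any particular policy):

```python
MIN_RESPONSES = 5  # illustrative threshold; set per your privacy policy

# Hypothetical response counts per program
responses = {"Program A": 12, "Program B": 3, "Program C": 8, "Program D": 2}

reportable = {}
combined = 0
for program, n in responses.items():
    if n >= MIN_RESPONSES:
        reportable[program] = n  # large enough to report on its own
    else:
        combined += n            # too small: fold into the combined report

if combined:
    reportable["All programs (combined)"] = combined
```

The trade-off is exactly the one noted above: combining small programs protects anonymity but limits how actionable the findings are for those programs individually.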

Recommendations for Designing Effective Client Satisfaction Surveys

To create a satisfaction survey that reflects the program’s relational goals:

  1. Engage Stakeholders: Involve clients, staff, and external stakeholders in the survey design process to ensure diverse perspectives are included.

  2. Focus on Relationships: Design questions that measure trust, respect, and the client’s sense of empowerment.

  3. Tailor to Program Goals: Ensure the survey aligns with the program’s intended outcomes and relationship model.

  4. Protect Anonymity: Develop processes to safeguard client confidentiality throughout data collection, analysis, and reporting.


  5. Consider Alternatives: If survey participation is likely to be low, explore other methods such as focus groups or interviews to gather meaningful feedback.

Conclusion: Centering Relationships in Client Satisfaction Surveys

Client satisfaction surveys can provide valuable insights, but their design must reflect the relationship the program aims to build with its clients. Programs should approach surveys thoughtfully, ensuring they measure not just services delivered but the trust and hope clients place in the program. By prioritizing relationships, programs can use surveys to foster continuous improvement and deepen their impact.

Whether designing a transactional or relational survey, program leaders must consider the purpose of the survey and its alignment with organizational goals. By doing so, they can ensure that the tool serves both the clients and the program’s mission to make a meaningful difference in people’s lives.

Designing and Using a Volunteer Satisfaction Survey: A Case Study

January 21, 2023

Tracy Borrelli

 

Volunteers are the backbone of any successful community initiative. Ensuring they feel supported, valued, and empowered to make a meaningful impact is critical for sustaining their engagement. Volunteers need to know their work aligns with a cause they believe in and that their contributions genuinely make a difference. Beyond this, providing the right tools, training, and support is essential to setting them up for success.


One way to demonstrate commitment to volunteers and continually improve their experience is through anonymous surveys that gather feedback on their experiences. This is the story of how a volunteer experience survey transformed our approach to planning and executing a public event. (Names and dates have been changed to maintain client privacy.)

In 2020, Community Impact Volunteers (CIV), a non-profit I was helping that coordinated a one-night public event, decided to create and implement a volunteer experience survey for the first time. The event required volunteers to collect data, and while we didn’t have any specific concerns about their experiences, we also had no way of knowing how well we were supporting them. We recognized that understanding their perspectives was crucial to improving our processes and making volunteering as impactful and enjoyable as possible.

 

The survey was divided into three main sections of statements asking for the volunteers' degree of agreement on:

  1. How strongly volunteers felt their efforts made an impact for the cause.

  2. How easy it was to volunteer for the event.

  3. How well-prepared they felt through the training provided.

Additionally, we included a general 5-star rating scale and an open-ended question asking how we could improve the experience for future volunteers. The first survey in 2020 provided invaluable insights, highlighting specific areas where we could enhance the volunteer experience. For example, feedback revealed issues with the data collection tools and challenges in engaging with event participants.
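Scoring a survey structured like this is straightforward once responses are tabulated. A hypothetical sketch, assuming agreement is recorded on a 1–5 scale and the general rating is out of 5 stars (all the numbers below are invented for illustration):

```python
# Hypothetical responses: each key is a survey section, each list one
# volunteer's agreement rating on a 1-5 scale.
sections = {
    "impact": [5, 4, 5, 4],
    "ease_of_volunteering": [3, 4, 2, 3],
    "training": [4, 4, 5, 3],
}
star_ratings = [5, 4, 4, 5]  # the general 5-star question

def mean(values):
    return sum(values) / len(values)

# Mean agreement per section points to where improvement effort should go
section_means = {name: mean(scores) for name, scores in sections.items()}
overall_stars = mean(star_ratings)
```

A low section mean (here, ease of volunteering) is the kind of signal that, in our case, pointed the committee toward the data collection tools and engagement guidelines.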

In preparation for the 2021 event, we revisited the survey data from 2020 during a planning committee meeting. The survey’s detailed analysis allowed us to pinpoint actionable areas for improvement. While some committee members who had volunteered in 2020 intuitively recognized these issues, having data to substantiate their observations lent credibility to their suggestions. This alignment between anecdotal and data-driven evidence catalyzed a productive brainstorming session, resulting in several key changes:

  • Upgrading the technology for smoother data entry.

  • Revising engagement guidelines for interacting with participants.


The 2021 survey, conducted after implementing these changes, validated the committee’s efforts. Volunteers rated the upgraded technology and new engagement guidelines significantly higher than in 2020. The survey also received nearly twice as many responses, which strengthened the credibility of the findings. While challenges remained—such as the complexity of data collection questions—the overall feedback reflected a marked improvement in the volunteer experience.

Still, there were lessons learned.

During a committee review, members noted the absence of a question about critical incidents, which would have been relevant and useful, given the public nature of the event. This highlighted the importance of bringing people together with diverse backgrounds to collaborate on a survey before it goes out. Additionally, an oversight in the survey reporting process led to nearly disclosing open-ended responses, which would have potentially compromised anonymity for some volunteers. This brush with a breach of trust highlighted the importance of safeguarding confidentiality in future iterations of the survey.

 

Despite these missteps, the success of the volunteer experience survey underscores its value as a tool for continuous improvement. For the 2022 event, the report served as a foundation for planning, ensuring that effective practices from 2021 were maintained and that further enhancements addressed any lingering issues. This iterative process demonstrates how listening to volunteers and acting on their feedback can lead to meaningful change.

If you’re involved in organizing volunteer-driven events, consider implementing a volunteer experience survey to capture their voices. It’s a simple yet powerful way to show you care about their contributions and to identify actionable improvements. 

Start small, stay consistent, and watch as your volunteers thrive in an environment that values their efforts and continuously strives to improve their experience. Together, we can create a culture of support and excellence for those who give their time to make a difference.

What Is A Program Model And Why Do We Need One?

January 3, 2023

Tracy Borrelli

 

Happy New Year! Before the winter holiday break, I overheard two people chatting about what a program model is and why we need them. The discussion that unfolded made me realize that there are always people who are brand-new to program evaluation and the most common tools we use to get the job done. So, the purpose of this blog post is to briefly explain program models and how successful leaders use them for program evaluation purposes.

We will define in very basic terms what program models are, explain why they are important, outline the general steps involved in making a program model and discuss how to use program models for program evaluation. Additionally, we will examine the characteristics of successful leaders who have utilized program models effectively, as well as provide strategies for implementing program models in your own organization or program.

What is a Program Model?

Program models provide an organized view and description of how each component of a program works to achieve its goal. They are often used as a tool for program evaluation, helping to identify strengths, weaknesses, and areas for improvement. Program models also provide an overall framework that can be implemented to improve the effectiveness of programs. For me, one of the most important uses of program models is to support and strengthen our communication about the program. Another important use is to help keep managers and other program staff on point, reducing distractions and preventing teams from drifting away from the program's mandated activities and goals.

Designing a Program Model

The first step toward making a successful program model is setting program goals and objectives. This includes defining the purpose of the program, its intended outcomes, and how each component of the program contributes to achieving these outcomes. This sounds simple, but it is not! It sometimes takes a lot of work, and plenty of deep insight, from a range of people to create a new program model from scratch. Once this is done, the next step involves identifying how different components interact with one another in order to meet program objectives. This includes outlining activities involved in each component, as well as how the activities fit together. 

Program Models and Program Evaluation

When it comes to program evaluation, having a program model helps us to assess whether objectives are being met. This is done by comparing actual program outcomes with those outlined in the program model. By doing so, we can identify any discrepancies between expectations and reality and take corrective actions if necessary. Program models also allow us to measure success by providing a framework to assess progress over time. By repeatedly looking at how the work is getting done and how participants are responding to the program, we can determine the benefits and costs of the program over time, and how different factors may be playing a role in the program's successes and challenges.
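The comparison described above can be made concrete by laying the model's expected outcomes beside the observed results. A hypothetical sketch (the outcome names, targets, and the 5% tolerance are invented for illustration):

```python
# Expected outcomes from the program model vs. observed results
expected = {"clients_served": 100, "completion_rate": 0.80, "follow_ups": 60}
actual   = {"clients_served": 92,  "completion_rate": 0.71, "follow_ups": 63}

# Flag any outcome falling more than 5% short of the model's target
TOLERANCE = 0.05
discrepancies = {
    outcome: (expected[outcome], actual[outcome])
    for outcome in expected
    if actual[outcome] < expected[outcome] * (1 - TOLERANCE)
}
```

Flagged outcomes are exactly the discrepancies between expectations and reality that warrant a closer look and possible corrective action.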

Leadership and Program Models

Leadership plays an important role in successfully utilizing program models. Leaders must be knowledgeable about the program, its objectives, and how the different components interact with each other. Additionally, they must have strong organizational skills to ensure that all components are working together towards meeting program goals. Furthermore, leaders must have excellent communication skills to effectively explain the program model and its purpose to stakeholders. Having a diagram of the program model can be especially useful for communicating the costs, benefits, and interactions of various components of a social program. This is because social programming is often based on assumptions of human life that are not universally experienced, or that are not easy to see.

Conclusion

In conclusion, program models are an important tool for program evaluation that can be utilized by successful leaders in order to improve the effectiveness of programs. They provide an organized view of how each component works together towards achieving the goal, as well as a framework for measuring success. Program models serve as an invaluable tool for leaders to explain and assess program outcomes and make necessary adjustments to ensure that the program meets its objectives. By implementing the strategies discussed above, you can use program models effectively in your own organization or program. 

How Successful Program Leaders Handle Data Surprises

December 27, 2022

Tracy Borrelli


First, let's be crystal clear that your program data must be as trustworthy as can be! If not, then any surprises that come from your data could be difficult to make sense of. More importantly, depending on the type of surprise you are experiencing from your data, a program leader could be making very difficult decisions with plenty of risk or opportunity involved. So if you are dealing with data that can't be trusted, feel free to read the rest of this blog post as a firm nudge to get your data cleaned up and in order.

With that out of the way, let's carry on!

There is a saying that if a director or manager in the social sector truly knows what is going on in the program that they are running, there should not be any surprises arising from program data. While this is often true, I have personally seen some pretty important surprises come from trustworthy evaluation data. After all, evaluation data is there for testing assumptions, not just accepting unproven ideas and beliefs as truth! I'm sure that there are as many ways to handle data surprises as there are leaders of programs. However, let's take a look at a few thoughtful ways to deal with data and analyses that are not what you expected. That way you will have your bases covered if this ever happens to you.

Ask 'Why?' or 'How?'

When the results of a program evaluation do come as a surprise, successful leaders take the time to ask "Why?". Instead of jumping to conclusions or making decisions based on gut instinct, it is important to interpret the data with a curious mindset. Leaders must seek out the context behind surprises to understand what has led the data and subsequent analyses to diverge from expectations. This can mean digging deeper into the data and looking for hidden patterns, trends, or correlations that help explain why the results are different than anticipated. Is our data telling us that there have been unintended consequences of program delivery activities? Was there something new happening outside of the program? It is important to take a step back and look at the bigger picture before making any decisions. Leaders should ask questions, challenge assumptions, and explore viable solutions to make an informed decision when surprising results present themselves. By taking a methodical approach, asking questions, and taking some time to collect fresh answers, surprises can be transformed into meaningful insights that will help inform the next decision.

Clarify the Context

To transform surprises into meaningful insights, leaders should look beyond just the numbers and take a holistic view of data. Analyzing both quantitative and qualitative information can help provide context and uncover key learnings that may not have been initially visible. Further investigation through interviews or focus groups can be used to understand why certain results have occurred or to identify areas that need further development. Leaders should also look at surprising results through a lens of continuous improvement, and use them as an opportunity to adjust strategies and processes to achieve better outcomes. By doing so, surprises can be used as invaluable tools for gaining insight into program performance and potentially driving better decision-making in the future.

Weigh the Opportunities and Risks

Surprising program results can also be seen as an opportunity to identify new directions or rethink existing strategies. Unexpected results can reveal areas for improvement that could not have been identified without the data. Successful leaders use unexpected program results as a chance to challenge assumptions and experiment with different approaches to solving problems. Rethinking program assumptions can unlock possibilities for programs, such as identifying untapped strengths or finding ways to increase efficiency or reduce costs. By taking the time to understand why surprises occurred in the data and how they can be used as an opportunity, program leaders are better able to innovate and adapt when facing changes in the program, its people, or its environment.


Though surprises that come from data and analyses can open up opportunities, they also come with certain risks. When surprises occur from analyzing program data, leaders must exercise caution, taking care not to jump to conclusions or act on gut instinct, as this can lead to costly mistakes. Assessing risks could include considering how such surprises might affect program participants, community members, partnerships, grants and donations, or other areas of the non-profit. Perhaps there are already consequences to participants and community members, and this is what the data is telling us. By taking a cautious but data-driven approach to surprises, program leaders can ensure that surprises are used to their advantage and not taken as cause for alarm.

Protect Stakeholder Relationships by Continuing to Build Trust

Non-profit leaders are wise to consider how surprises might impact all program stakeholders (or may already be impacting them) and take steps to ensure they are prepared for unexpected outcomes. This could include responding proactively with an explanation of why the results were different than expected and how the team is taking steps to address them. Taking a proactive approach helps build trust and demonstrates that surprising results are being taken seriously. For the sake of others, leaders need to be prepared to act quickly to capitalize on opportunities or mitigate risks, while also making sure any decisions take into account as much of the context as possible. That is a lot to take on! By modeling a transparent approach and staying flexible, leaders can make the most of these situations and adjust their strategy accordingly.

Conclusion

To sum up, surprises in program data present a range of opportunities and risks for non-profit program leaders. To make the most of surprises, leaders need to take an analytical approach and ask why the results differ from expectations. By digging deeper into the data and exploring viable solutions, surprises can be transformed into meaningful insights that will help inform decisions. Non-profit leaders should also consider potential risks associated with surprises and prepare both themselves and program stakeholders for unexpected outcomes. With a curious mindset, cautious approach, and transparent communication, leaders can use surprises as an opportunity for growth and innovation.

Building Evaluation Capacity in Your Program or Organization

December 19, 2022

Tracy Borrelli


How well does your program work? This is a question that many program managers and administrators ask themselves. And, the answer to this question should always be "We don't know, let's find out." Too often, programs are run without any sort of evaluation taking place. This can lead to serious headaches further down the road. In this blog post, we will discuss how to build evaluation capacity within your program so that you can start collecting data and analyzing it more effectively!

Capacity-building is an important part of program evaluation as it helps leaders and their teams develop skills and knowledge to effectively organize, analyze, and use trustworthy data. Capacity-building includes activities such as training, mentoring, and coaching that help teams understand how and when to manage various evaluation tools and techniques. These activities can provide clarity on how to better communicate evaluation processes and make informed decisions based on the data collected. Capacity-building also promotes collaboration, as it encourages teams to work together to use program evaluation results. This, in turn, enhances the ability to serve their stakeholders. By building capacity within a program, evaluators can help ensure that the results of an analysis are meaningful and useful for improving program services. 

Assessing your program's need for evaluation capacity-building

Assessing the need for capacity-building within a program helps ensure that evaluation results are meaningful and effective. Evaluators should assess the level of skills, knowledge, and confidence among program staff when it comes to data collection, analysis, and interpretation. If there are gaps or areas where team members lack expertise or know-how, then capacity-building activities should be implemented to help fill these gaps and equip program staff with the skills they need to use evaluation results effectively. Additionally, teams should assess how well their current evaluation practices are working and identify any areas for improvement. By taking the time to assess capacity-building needs, programs put their evaluations in a better position to be used.

Developing a capacity-building plan​

Once the need for capacity-building has been identified, the next step is to develop a plan to build evaluation capacity within your program. This plan should include activities such as training and coaching, mentoring, and resources that help staff learn how to use tools and techniques skillfully and with confidence. Additionally, teams should consider ways in which they can leverage existing expertise when available. For example, if there are members of the team who have experience or knowledge in data analysis, they can be tapped to help others learn how to use evaluation tools and interpret results. Overall, developing a capacity-building plan helps teams overcome obstacles and build their evaluation skills with confidence.

Implementing a capacity-building plan

Once the plan for capacity-building has been developed, it is time to implement it. This is usually done through a series of training and coaching sessions that focus on teaching teams how to use evaluation tools and interpret results accurately. Additionally, mentoring can help ensure that team members are supported as they learn how to use data and analysis. For example, experienced members of the team can help answer questions and provide guidance as needed. Finally, resources such as tutorials and guides should be available to support teams in their evaluation efforts. By implementing a capacity-building plan, programs move from planning to action, learning as they use evaluation tools and interpret results.

Evaluating your progress toward better evaluation capacity

Evaluating the progress of capacity-building activities is also important for setting up evaluation efforts for success. Teams should review their plan regularly to assess whether or not it is working, and identify any areas for improvement. Program staff should also be asked to provide feedback on how they feel about the capacity-building activities and what has been learned so far. Evaluators should also consider ways to measure the impact of capacity-building so they can see how program staff are using the skills and knowledge acquired during training sessions or other activities. By evaluating the progress of their capacity-building efforts, teams can correct any mistakes and fine-tune the skills needed to support program evaluation activities.

In conclusion

Building evaluation capacity in a program is highly recommended for the successful use of evaluation processes and results. It is important to assess the capacity of program staff and to develop a plan that considers the skills and knowledge needed for effective data collection, analysis, and interpretation of results. Implementing this plan through activities such as training sessions, coaching, mentoring, and resources can help program staff effectively use evaluation. Finally, it is important to regularly evaluate the progress of capacity-building efforts and measure the impact to ensure that program evaluation efforts are successful.


Overall, developing strong evaluation capacity within a program is essential for success and provides teams with the skills they need to make informed decisions based on data. Taking these steps to build evaluation capacity can help program staff better understand and have confidence in their evaluation results.

How to Make Sure Your Social Program Is Evaluated Fairly

December 12, 2022

Tracy Borrelli


Making sure that your social program is evaluated fairly is one of the most important things you can do as a program leader. You want to be sure that everyone involved in the evaluation process is fair and unbiased and that your communication about the evaluation is clear, inclusive, and transparent. In this blog post, we will discuss some valuable tactics to make sure that your social program is assessed fairly. Taking these actions will reduce the worry associated with whether program evaluations will be conducted fairly.

The first step along the path to ensuring a fair evaluation is assembling a fair and unbiased evaluation team. There should be no conflicts of interest in the group, and everyone should be able to objectively assess the program. Additionally, all members of the team need to be properly trained and understand the criteria being used for evaluating the type of program you run. Communicating with your community of stakeholders is key, as is providing enough time to build trust in the evaluation process.

 

Conflict of Interest and Bias

 

It is important to ensure that any potential conflicts of interest are identified and managed. If there is a history of program staff experiencing bias or favoritism, it should be discussed openly before the evaluation begins so that the evaluation team can take measures to avoid repeating breaches of trust. Additionally, if an evaluation team member has any personal involvement in the program being evaluated (e.g. a personal relationship with a staff member or participant), they should not serve on the evaluation team. This can help prevent any perceived bias and make sure that everyone involved can remain fair and impartial when assessing the program.

Another thought on conflict of interest is that an exception is sometimes given for an evaluator who serves as an internal staff member of the program. This is a tricky position to hold, and the risks are very real that a director, manager, or other staff of the program may try to influence the outcome of the evaluation. If you are overseeing the evaluation team, it will be important for you to manage your temptation to skew the results. This can happen with or without the conscious intention to do so.

With so much at stake (jobs, participant outcomes, community well-being, to name a few) it is, unfortunately, a common occurrence for leaders to attempt to influence the evaluator in a way that pleases them. However, if the leader and the evaluator have an honest and open relationship, working towards the best interest of the program and its participants, and they share a value of trustworthy data in decision-making, the issue of a fair evaluation is less of a concern.

 

Understanding the Evaluation Criteria

 

It is also important to make sure that everyone involved in the evaluation process has a deep understanding of the criteria being used for evaluating social programs. Evaluators should be adequately trained on the program's specific criteria, as well as any applicable laws or regulations. Program staff should also have a good understanding of the evaluation and how the program will be viewed and assessed. It's one thing to train staff to do their job well; it's another to have them understand why they are doing that job and how it affects the program outcomes you are working for. Having a clear understanding of what will be evaluated can help reduce worry and ensure fair evaluations.

Communicating About the Evaluation

It is essential to communicate with all parties involved to reduce the worry associated with an evaluation. Before the process begins, provide detailed information outlining how the program will be evaluated and what criteria will be used. Create a team environment that values honest communication. Doing this can help foster trust between those being evaluated and those conducting the evaluation.

When communicating about the evaluation procedure, it’s important to be clear, reasonable, and transparent. Be sure to cover all relevant aspects of the evaluation including expectations, timeline, criteria, and any other relevant information. This will help ensure that everyone understands what is expected from them and how the program will be evaluated. By creating an open dialogue between those being evaluated and those conducting the evaluation, you can reduce worry associated with evaluations and make sure everyone involved knows that fairness is a priority. 


It is also important to create a team environment where communication among group members is open and honest, which is essential for fair evaluations. Before the process begins, set ground rules that foster respect and ensure that everyone's voice is heard. Everyone needs to feel comfortable sharing their opinions without fear of being judged or punished if they disagree with someone else's view. Additionally, promoting dialogue and encouraging team members to challenge each other's opinions can help bring out different perspectives on the program being evaluated, resulting in a fair assessment.

Inclusivity

Make sure that as many community members as possible are given input into the process. This includes staff members who can provide insight into how effective a program has been as well as participants and others who can provide a perspective on how the program is benefiting them.


It is important to create an inclusive environment so that as many community members and stakeholders as possible have a meaningful role in the evaluation process. This can be achieved by inviting those from different backgrounds and experience levels to participate in the evaluation. Additionally, providing feedback loops throughout the process can help make sure that everyone's voice is heard. Finally, making sure that participants understand why their input is needed and how it will impact any final decision can also help create a fair and balanced evaluation process.

It is also important to ensure that the criteria used are relevant and appropriate for assessing the population's needs and circumstances. Invest some time to provide feedback on the outcomes of the evaluation so that members of vulnerable populations can understand how their input was used. By considering these steps you can create a fair and ethical program evaluation experience for all involved. 

 

Making Time

 

If you have read this far, you have probably already noticed that a fair evaluation will take up plenty of time. One of the most common mistakes that evaluators see program leaders making is trying to deliver an evaluation on a timeline that has been cut too short. No matter what else you have on your plate, you will need to set aside enough time, or delegate this communication to someone you trust, before the evaluation occurs. The investment of time is going to be worth it if you want to ensure that everyone's worries about a fair and ethical evaluation are put at ease.

 

How much time you need to invest in reducing worries before an evaluation will depend on your program, the community you work in, and the unique dynamics in all areas of your work. If you have any questions about your evaluation process, it's never too late to start asking them and talking about the answers with people. Evaluation processes are sometimes tricky to explain and even more tricky to understand, so make sure that you don't put off your search for clarity. The more you practice asking questions about evaluation and thinking about the answers, the better you will get at planning for your next evaluation.

By taking the time to plan out a fair evaluation process, you can reduce any worries about fairness from the start. It is necessary to allocate sufficient time for communication before, during, and after the evaluation to provide feedback on the outcomes. Investing this time into your social program's evaluation will help build trust among all who are impacted by it and assure them that their input has been taken into consideration. 


With these tactics in hand, you can rest assured that your evaluation will result in a fair outcome, and everyone involved will have peace of mind knowing their opinion was valued throughout the process. Fair social program evaluations not only bring confidence within an organization but also within the community, and with other stakeholders such as funders, board members, and volunteers of the program. With fair evaluations, everyone can gain a better understanding of how their input has been used to improve social programs.

Never Change or Start a Social Service Program Without Checking These Common Assumptions First

December 5, 2022

Tracy Borrelli


When starting a new social program or changing an old one, it's important to make sure that our assumptions are clarified before we move forward. Too often, organizations rush into designing new programs without taking the time to properly assess their assumptions. This can lead to wasted time and resources, and may even result in complete program failure. In this article, we'll discuss what assumptions are and why it's important to think about them before launching from ideas to action. We'll also provide tips on how to evaluate your assumptions and make sure that your program is successful!

First, let's quickly revisit what assumptions are. Assumptions are the judgments, opinions, or beliefs that we accept as truth without any proof or evidence to support them. Assumptions may be true or false. We just don't have any actual evidence or proof yet.


In social programs, it's common to have an idea of what appears to be a concern or a problem that needs to be solved by a new service, a process overhaul, or maybe an investment of money or other assets. We see problems in the social sector all the time and we love working hard to solve them. However, I have seen countless people assume they have the perfect solution to a pernicious problem, but they did not check with the very people who live with the problem. Sound familiar? We all have experiences of making assumptions and finding out we were wrong.


Assumptions can be based on experience, current trends, or any other factor that you think might influence the success of your proposed program. Assumptions are just a shortcut in your thinking that helps you to get through life when things get tough. On their own, they aren't good or bad. But sometimes assumptions are sneaky, and you might not realize that you don't have enough proof for those thoughts and beliefs to be true. And that is where we get into trouble with them when creating new programs.

Assumptions About the Problem

It's important to take the time to identify and test your assumptions before moving forward with any new program idea. Talk about your ideas with as many people as possible! Most of all, share your ideas with the people who you want to help. Ask them what they think and especially how they see the problem. Be open to what they say, and even what they don't say. Ask yourself what might be missing from your conversations and think about new ways to test your thoughts and beliefs about the problem. By sharing your ideas and assumptions with other people, you will start to get a clearer picture of the true problem, if there is one, and how other people see it. You will also learn how people have been solving their problems without you and how much they would appreciate a new solution or program.

Let's look at an example. 

An example of a program assumption could be that implementing a new online training program will reduce staff burnout. If you have not talked about this with your staff, you don't have proof that this problem is true yet. Is it true that staff are burning out? If so, how much of a problem is it? Is it true that training programs can reduce staff burnout? Where did that idea come from? Do staff ask for online training opportunities when you talk about burnout? How do they currently solve their experiences of burnout? 


There are some effective ways to ask these questions and have these conversations without accidentally leading staff to tell you what they think you want to hear (more sneaky assumptions!). In other words, you should design this line of inquiry around making it safe for staff to be honest about the experience (or lack) of burnout. That might look like hiring someone to interview staff, or it could be that you use a series of anonymous surveys. Both methods have their weaknesses and strengths. An evaluator can help you navigate that, and even introduce you to other ways of testing these early assumptions.

If you need to collect the information yourself, be sure to carefully document these conversations. Record who you spoke to, and when you had your conversations. Be careful about how you ask your questions and try not to lead people to believe that your solution is ideal. This is your data, and it will be your evidence. Make it trustworthy data! This is how you get closer and closer to the truth behind your assumptions about the problems that people have. The closer you get to the truth, the more likely that it will help you to craft a better solution. 


Assumptions are important when developing new programs, as they provide a starting point to build solutions and measure outcomes. Without clarifying those early assumptions, creativity and innovation can become difficult to develop. You need to be able to test your assumptions and when they fail to be true, you can continue to test new assumptions until you find an effective program idea that meets the needs of your future participants, community, or stakeholders.

Assumptions About Your Solution

Once you have validation and proof of the problem, and you now have established an idea for a great new program, the next step will help ensure that you have a solid foundation for successful service delivery. Taking the time to test assumptions about how your new program will be a success might seem like a waste right about now. After all, didn't you just spend all that time checking in with people to understand the problem? 


The truth is you did talk about and refine the problem, but you did not test the solution - a.k.a. your shiny new program idea. Depending on your idea for a new program, testing it can save money, as it avoids wasting resources on a program that may not meet its goals due to inaccurate assumptions. It can also save your clients and staff from wasting their time, energy, and hope in a program that does not live up to its promises.

Assuming You Are The First To Find A Solution

We overestimate what we know all the time. In this way, a common assumption is that once we confirm something is true, we think that we know everything about it. Let's go back to our previous example. The assumption that online training will lead to reduced burnout could be further evaluated by looking at previous research to see if this activity has been tried in your type of program before. Assumptions are often used this way to identify gaps in knowledge or resources. Learning more about how others have tried to solve this same problem can help you to reduce mistakes, even before your new program is tested for the first time. Look into what is going on in other geographical areas, other cultures, academic research, and reports that have been done by other program operators just like yours. By learning about how others have created and tested similar program ideas, you can start to see activities in finer detail. You will also learn more about how to look for success.

Dealing with Surprises

 

Sometimes success comes in ways that are strange and new to us but very meaningful to our program participants. Other surprises can also come up. All of your assumption checking and program research might give you a lot of confidence in your new program idea, only to be blindsided by unintended consequences. Unintended consequences can be particularly disturbing if they are undermining your program participants. But unintended consequences are another area for you to look for hidden assumptions and search for possible new and improved solutions.

 

When Does Testing Stop?

With all of this talk about testing assumptions, you might feel discouraged about how far your idea will get before you reject it. Try not to feel bad. If the problem was not a true problem, just an assumption that turned out to be false, you win. You win because you found out that people did not have the problems that you were concerned about. If you reached the stage where you developed a new program idea and even found other studies of similar programs that have helped people, you win. You win because now you stand a better chance of finding even more support to test your program idea in real life. 

Be mindful, though. If you get that far, you still have work to do. But you can come out a winner. You can win because you can build assumption testing into the very beginning of the program. You will have all of the assumptions you can think of sorted out and you will know how to measure success. And if the program is successful and you have the evidence and proof of that, everybody wins. However, even if you test your new program idea with real people, in real situations, and the program does not work out, you and your community will still win by terminating a program that didn't make good use of time, money, or hope. That is winning through failure. It can be hard, but it is better to be aware and fail sooner, rather than later.


In conclusion, assumptions play a huge role in the development of social service programs. Taking the time to test assumptions is essential for creating solutions that meet the needs of program participants. When assumptions are tested and proven to be true, everyone involved in a successful program can benefit. Testing assumptions before beginning a program can help identify gaps in knowledge or resources that may prevent success. On the other hand, even if your program does not work out as planned, it is better to know ahead of time than waste resources.


This article has outlined how important it is to consider assumptions when developing new programs and services and has suggested steps for testing assumptions. You are now more equipped to make informed decisions when developing strategies for successful program delivery.

Why Program Evaluation Is Important (And How to Do It Right)

November 28, 2022

Tracy Borrelli


Most social programs are created with the best of intentions. However, without proper program evaluation, it can be difficult to know whether or not your program is actually having the desired impact. This blog post will discuss why program evaluation is important, and how you can go about doing it effectively. We'll also cover some of the common pitfalls that often occur during program evaluation, so that you can avoid them!

Program evaluation is essential for understanding whether your social programs are having the desired impact. Evaluating a program's effectiveness helps ensure that resources are being used efficiently, and it also provides valuable insights into how particular interventions or strategies might be improved to better serve their intended purpose. With proper program evaluation, team members can learn from their experiences and make more informed decisions about future programming efforts. As well, evaluations can help identify gaps in knowledge and inform areas of focus for further research and development. In sum, regular program evaluation is key to creating successful social initiatives that have real-world impacts on the communities they serve. 

Before beginning a program evaluation, it is important to consider the various methods and tools available for gathering data. Surveys are one of the most common ways to collect information on participant experiences and outcomes, but there are also other options such as interviews or focus groups. In addition, it's important to make sure that your team has a clear understanding of what success looks like when evaluating a program. This could include measuring specific metrics such as increased participant engagement or decreased costs. Once the evaluation plan is in place, teams should develop an action plan based on the insights they have gathered in order to stay focused on their goals and objectives throughout the evaluation process. 

One of the most common pitfalls in program evaluation is failing to accurately measure progress. Without a consistent system for tracking changes and metrics, it can be difficult to make meaningful assessments of your program's effectiveness. Additionally, teams may fail to take into account diverse perspectives when evaluating their programs. By not considering different points of view and stakeholder feedback, teams may miss important information that could lead to more effective solutions. Finally, the evaluation process itself can be time-consuming and costly if not managed properly, with team members spending too much time collecting data without focusing on how that data can inform decisions about programming efforts.

To avoid these pitfalls, teams should start by setting clear expectations for what success looks like. This includes setting measurable goals and metrics that can be used to track progress over time. Teams should also ensure they are engaging diverse stakeholders in the evaluation process and collecting feedback from multiple sources. Finally, teams should consider how they can streamline their evaluation process and reduce the amount of time spent on data collection. By taking these steps, teams can ensure that their program evaluations are effective and meaningful, leading to better social programs and more successful outcomes for everyone involved.
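For teams that already track their metrics in a spreadsheet, "setting measurable goals and metrics that can be used to track progress over time" can be as lightweight as a few lines of scripting. The sketch below is purely illustrative, not a prescribed method: the metric names, baselines, and targets are invented for the example. It simply reports how far each metric has moved from its baseline toward its target.

```python
def progress_report(metrics):
    """Report progress toward targets as a percentage for each metric.

    `metrics` maps a metric name to a (baseline, current, target) tuple.
    """
    report = {}
    for name, (baseline, current, target) in metrics.items():
        span = target - baseline
        # Fraction of the way from baseline to target, capped at 100%.
        fraction = 0.0 if span == 0 else min((current - baseline) / span, 1.0)
        report[name] = round(fraction * 100)
    return report

# Invented example metrics: (baseline, current, target)
example = {
    "participant_engagement_rate": (0.40, 0.55, 0.60),
    "cost_per_participant": (120, 105, 90),
}
print(progress_report(example))
# → {'participant_engagement_rate': 75, 'cost_per_participant': 50}
```

Because progress is computed as a fraction of the baseline-to-target span, the same formula works whether a metric should go up (engagement) or down (cost).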


In conclusion, program evaluation can help teams understand their social programs and ensure they are making an impact. With the right methods and tools in place, teams should be able to effectively evaluate their programs while avoiding common pitfalls. Ultimately, this will lead to more successful initiatives that have tangible benefits for the communities they serve. 

5 Ways to Get Your Staff More Engaged in Data Entry

November 21, 2022

Tracy Borrelli


Staff motivation is essential for any data entry initiative. Without staff support, data entry can be late, missing, and prone to errors. In this blog post, we will discuss five ways to get your staff more engaged in data entry. This will put you on the path to more trustworthy data and decision-making. We will also discuss what you need to remember when asking staff to do data entry, so that the process is as easy as possible for them!

I hear it all the time. Leaders know how important it is to get that data entry completed accurately and on time. Everyone seems to be looking for more answers than you have on hand. Even so, your staff are having a hard time getting the data entry piece nailed down. Don't worry! Here are five ways to get staff more engaged in data entry and other program evaluation activities. 


1) One way to get staff more engaged in data entry is to block off enough time for them to do it. It can take up to 15 minutes for the brain to settle into a new task and reach a state of flow. Staff may also be more productive at certain times of day: some people are at their sharpest in the morning, while others become keeners quite a bit later. So giving support might mean building in an extra 15 minutes just to let the brain get into focus, and then being flexible about the time of day when each person is at their sharpest.

2) Use data entry software that staff are already comfortable with. One size does not fit all when it comes to data entry software. Some staff will prefer a simple interface while others might want more features. The important thing is that staff are comfortable with the software so that they can focus on data entry and not on learning a new tool.


3) Make it easy for staff to find and organize the data they need to enter. This might seem like a no-brainer, but staff are more likely to be engaged in data entry if they can easily find the data they need. This might mean having a central repository for data, clear instructions on where to find it, and a simple guide to help them learn what they are entering and why you need it.

4) Provide staff with feedback on their data entry. Staff are more likely to be engaged in data entry if they feel like their efforts are making a difference. One way to do this is to provide staff with feedback on the data after it has been analyzed. This might mean sharing data entry results with staff on a regular basis, creating a dashboard that shows the team's combined effort, or sending out monthly reports so everyone can see the fruits of their labor.
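A simple version of that kind of feedback can be sketched in a few lines of Python. The staff names, dates, and the `monthly_summary` helper below are all invented for illustration, not tied to any particular case-management system; it's just a sketch of counting entries per person per month.

```python
from collections import Counter
from datetime import date

# Hypothetical records: who entered data, and when.
records = [
    {"staff": "Amina", "entered_on": date(2022, 11, 3)},
    {"staff": "Joel",  "entered_on": date(2022, 11, 4)},
    {"staff": "Amina", "entered_on": date(2022, 11, 10)},
    {"staff": "Priya", "entered_on": date(2022, 11, 15)},
]

def monthly_summary(records, year, month):
    """Count data entries per staff member for one month."""
    counts = Counter(
        r["staff"]
        for r in records
        if r["entered_on"].year == year and r["entered_on"].month == month
    )
    return dict(counts)

print(monthly_summary(records, 2022, 11))
# prints: {'Amina': 2, 'Joel': 1, 'Priya': 1}
```

A summary like this could feed a shared dashboard or a short monthly update email, so everyone sees their contribution reflected back.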

5) Make data entry fun! This might seem like a tall order, but there are ways to make data entry more fun for staff. One way is to create a competition around data entry. This might involve giving staff points for every piece of data they enter, or giving prizes for the staff member who enters the most data. Another way to make data entry more fun is to gamify it. This might involve turning data entry into a game, or providing rewards for staff who complete tasks quickly.


Data entry is a critical part of service delivery, but it can be a tough sell to staff. By providing staff with the right tools, making it easy for them to find the data they need, and giving them feedback on their efforts, you can help staff see the value in data entry and get them more engaged in the process.

Why Trustworthy Data Matters to Your Organization

November 14, 2022

Tracy Borrelli


At its core, data is information. The more accurate and complete the information we have, the better our chances of making informed decisions – for our businesses, our governments and society as a whole. Unfortunately, far too often data is collected without thought to its long-term value or meaning. Worse yet, it’s often processed in a way that undermines its integrity. We need to do better. When data is collected and processed with care and thoughtfulness, it becomes an incredibly powerful tool for positive change. 

The Importance of Trustworthy Data

Data is only valuable in decision making if it can be trusted. When data is untrustworthy, it becomes nearly worthless – and this can be incredibly damaging for businesses and organizations of all sizes. Unreliable data can lead to poor decision-making, wasted resources and even financial instability. Inaccurate or incomplete data can result in suboptimal outcomes and even cause real harm to the people we are trying to help.


On the other hand, when data is collected and processed with care, it becomes a powerful tool that can be used to inform and improve decision-making at all levels. Having trustworthy data is like having a heavy duty flashlight with full battery power when you would otherwise be in the dark. It helps you to find and describe the important things that have been happening in your organization in clear detail. With trustworthy data, non-profit businesses are able to make more informed decisions about where to allocate their resources, how to better serve their program participants and what strategies will lead to the most success.

4 Thoughts on Collecting and Processing Trustworthy Data:

1. Data should be collected for a specific purpose, and only to answer very specific questions. Sometimes, in social service agencies, we collect a lot of personal information from people without thinking much about why we need to ask for it. Knowing why your organization needs to collect certain data, and what your specific questions are, will help you to be ethical and just in the way that you collect your data. It will also help keep your data focused on the answers you need, reducing the chaos and noise in your data collection systems.

2. The methods used to collect, store and process data should be crafted thoughtfully. These days even small non-profits can collect a huge volume of useful administrative data without realizing that it has been quietly accumulating in the background. Training sessions, work related to program expansions, innovation, advocacy work - there's always a lot going on.


There are some good questions to ask yourself as you start to make plans about collecting and processing data. Do you want to prove cause and effect? Are you exploring a relatively new type of program activity?


Analysis involves looking for patterns in your data and making sense of what you find. How will you analyze it? Who will help you understand it better? This takes time and in some cases, it's not as simple as it sounds. Remember that not all data are numerical. Some data represent words, images, sounds and other forms of information. Careful planning about how you will learn the answers that your organization needs to reach its objectives will keep your trove of data from going to waste.


Data should be processed in a way that does not distort its meaning or integrity. There are a lot of powerful tools that help us to handle large volumes of data very quickly. This makes it seem like we can do a lot of complex, analytical work at the click of a mouse, without really thinking about the decisions we need to make first. For example, we may need to remove misleading pieces of information, or carefully fix errors before we do the analytical work. This is not always easy to figure out. Special care must be taken to ensure that the data accurately answers the questions that you and your organization need answered. Your data's accuracy will be linked to your leadership's integrity.

3. Privacy of data is always a major concern. Do you know where your data is actually, physically stored? It can be confusing to suddenly find out that data you collected from your program participants is stored with a big company somewhere outside of your geographical boundaries, and that it is now subject to the laws of the country or state that it resides in. 


Do staff use paper files and folders? Do they transport that information in their personal vehicles? Your organization must ensure that all information is collected and stored securely. It should be accessed only by authorized individuals and stewarded according to your local privacy legislation.

4.  Last but not least, the statements you make about your results should always lead back to the data. Your data is what led you to the results, and you do not want to lose the trust of your community by sharing results that stray away from the facts that you found. Once you have your results and recommendations, how will you share them? Who is your audience? What is the best way to reach them? You will likely want to share the results with more than one audience and should take the time to ensure that you are giving them the results that are most meaningful to them at the time and place that works best for them. 

Trustworthy Data Matters to Everyone

Data has the power to create positive change in the world around us, and social service organizations play a vital role in collecting and using this data responsibly. Trustworthy data, stewarded well by social programs, is truly a force for good in the world.


Having trustworthy data means being able to make more informed decisions about where to allocate resources, how to better serve program participants, and what strategies will lead to the most success. This commitment to collecting and using trustworthy data leads to more trustworthy reporting, and that helps create a better future for us all.

3 Mental Shifts You Need to Make to Get the Most Out of Program Evaluation

November 7, 2022

Tracy Borrelli


Program evaluation can be a powerful tool for quality improvement in your organization. However, to get the most out of it, you absolutely need to make a few mental shifts. In this blog post, we will discuss three of the most important ones. First, you need to be curious and ask questions about what is happening in your program. Second, you need to be non-judgemental and open minded when looking at data. Third, you need to think strategically about how you can use evaluation findings to improve your program. If you can make these three shifts, you will be well on your way to getting the most out of program evaluation!

Curiosity Mindset

Curiosity is an important mindset because it helps you to develop a knack for questioning old patterns and noticing new patterns when they arise in your work. Curiosity also helps you to frame and reframe questions that might not occur to less curious people. When people are curious and willing to learn more about the programs they work in, they develop deeper insights into how programs work, the people who work there, and the social problems that they are trying to solve. Curiosity helps you to keep learning, even when you are getting bad news about your programs. When team members around you feel defeated and want to give up, curiosity will help you keep searching for answers that might help your teams to keep moving the work forward and solving some of the most serious social problems that humans face today.

Be Nonjudgemental

Repeat after me: "The data are just data. They are not me. They are not the other people connected to the program. Data are tools. They are here to help."

 

The importance of being non-judgemental and open minded when working with evaluation data can't be overstated.

 

This mindset is absolutely critical if you are going to use data to make important decisions in programs that tackle intractable social problems. People hate being judged, and if you are looking at your own program data (or someone else's!), and it does not make you or your team feel successful, you may begin to feel defensive, or worse, like you are failing miserably in your work. When this happens, you may start blaming and shaming instead of returning to curiosity to get to the bottom of why the data are telling you something negative.


Weirdly enough, positive findings can also induce judgey feelings in ourselves and others, which can be counterproductive. The positive feelings of safety and confidence that come with a positive judgement can undermine your ability to see new problems and remain curious and alert. Then, when something important happens and needs your attention, you may be less open-minded than you need to be to act on opportunities or threats to the program.


Being non-judgemental in this sense is not about neglecting tough decisions about your program! It's about looking at the data and seeing the data objectively as a tool in a fact finding exercise. What you do with the facts will inherently involve judgment and decision-making. What you want to be cautious about is looking at data and then jumping to the personal opinions and values you have about yourself and others who you may see as being represented by the data. 

Thinking Strategically​

Strategic thinking is all about imagining what is happening in the fuzzy realm of the future and how you will deal with both the knowns and unknowns. In order to use evaluation data from your program strategically, you need to be able to see how it might be used in new situations that your program has little control over. When new politics, policies, economies, and other worldly situations crowd into your program's lane, you need to know whether your program is ready. Understanding the importance of data about your program's strengths and weaknesses will be easier if you regularly practice thinking about how ready your program is for the future.

Overall, to get the most out of program evaluation, you need to be curious, non-judgemental, and think strategically. These three mindset shifts will help you to develop a deeper understanding of your programs, make better decisions, and improve the quality of your work.

What is Program Evaluation and What Does It Mean for Your Social Program?

October 31, 2022

Tracy Borrelli


When it comes to program evaluation, there seems to be a lot of confusion about what it is and what it does. Some people seem to think that program evaluation is only used as a tool for scientific research, while others believe that it is nothing more than a bureaucratic process that gets in the way of effective program delivery. The truth is that program evaluation is both of these things and more. In this blog post, we will discuss the basics of program evaluation and how it can help your non-profit organization achieve its goals.

Program evaluation is the process of assessing the effectiveness of many types of social programs. It can help organizations to learn about the impact of their activities, identify areas for improvement, and make changes to programs when necessary. While program evaluation is often used in the context of scientific research, it is also a valuable tool for managers and administrators.

Program evaluation is a process based on modern social science. The methods and techniques are well-established scientific processes, but they are constantly being explored, improved, and refined. This way, they can provide accurate and reliable information about program effectiveness.


In addition, program evaluation is often conducted by impartial experts who are objective and unbiased in their assessment of social programs. Program evaluation experts also work closely with program leadership to ensure that they (the evaluators) understand what a program aims to achieve, who it is there to help, and how the agency plans to meet its objectives. This ensures that evaluation results are trustworthy and can be used to make sound decisions about program effectiveness.

Despite all of its potential benefits, program evaluation sometimes gets a bad rap for generating bureaucratic busywork. This is because there are a lot of ways that programs already collect and store information and often they do not realize that they need to look at it from new angles or create new tools for capturing the full scope of the work they do. Sometimes it can seem like the information is collected, analyzed, and reported, but nothing seems to ever change for the program recipients and staff.  For anyone who hates to waste time and money, this is the worst possible outcome of an evaluation. But when thoughtfully planned and executed, a well designed evaluation can be a valuable tool for program managers and administrators. Program evaluation can help organizations to learn about the more meaningful impact of their activities, identify areas for improvement, and make helpful changes to programs when necessary.

Evaluation helps to improve social programs by providing leaders with information about program strengths. It can even unearth valuable hidden insights. This information can be used to make changes to program activities so that they are even more effective in the future. For example, program evaluation can help to identify new strategic directions, test new program ideas, and scale established programs to help more people.

If you are responsible for running a social program, I strongly encourage you to consider using program evaluation as a tool to assess program effectiveness. Program evaluation can help you to learn about the meaningful impact of your activities, identify areas for improvement, and make changes to your program when necessary. When used thoughtfully, program evaluation can be a powerful tool for success.

TMACT Basics for Leaders in the Homelessness Sector

September 7, 2021

Tracy Borrelli


This was originally published as an article on my LinkedIn profile:

https://www.linkedin.com/pulse/tmact-basics-leaders-homelessness-sector-tracy-borrelli/


Leaders exist in every corner of homelessness services, and Assertive Community Treatment (ACT) teams are no exception. Assertive Community Treatment is a type of program that helps people who suffer from serious mental illnesses to live their lives in the community, rather than in a hospital or group residential setting. Staff who work on the ACT team come from many professional backgrounds (for example: nursing, social work, psychiatry, or addictions treatment). Some staff have their own experience as a person with mental illness and/or homelessness. Staff visit their clients often and spend many hours helping clients with the day-to-day aspects of their lives that can be painfully difficult because of the debilitating symptoms of mental illness.

If you are a new leader on an ACT team, you might be hearing about the TMACT and how it is used to assess your program. TMACT stands for: Tool for Measuring Assertive Community Treatment. Your team might even have a TMACT assessment booked, and you are not 100% sure what to expect yet. If this makes you nervous, I hope that this article will reduce some, if not all, of your concerns.

What’s a TMACT?

The TMACT is a data collection and analysis tool that helps us to evaluate how well the ACT program is working. It gives us a snapshot in time that helps to ensure that the people who are served by the program are getting the level of care they need to feel better. ACT programs and the TMACT are based on decades of evidence showing that if you follow the program the way it is intended, it increases wellness for the people in the program. It even reduces costs to the community through:

  •  Reduced hospitalizations

  •  Reduced incarcerations

  •  Reduced episodes of homelessness that lead to shelter use

Some jurisdictions make TMACT assessment mandatory. This is intended to provide assurance that people are receiving the level of care needed to help them feel better. These teams can be very expensive to run, so it is important to most funders (and taxpayers!) to know that the work is rolling out the way it needs to. In other jurisdictions, it may not be mandatory, but it is requested by funders or other stakeholders to ensure that the program is meeting the community’s expectations.

A TMACT assessment is meant to be completed by evaluators who are competent in the ACT model and have expertise in interviewing and data collection procedures. Ideally, a TMACT evaluator is a consultant from outside of the ACT team. The TMACT should not be conducted by someone directly connected to service delivery or who might have direct oversight of the program. This is to protect the assessment from our natural human biases and improve the trustworthiness of the analysis.

For executive leadership and staff members alike, a TMACT has the potential to be stressful. This is generally because of unknowns along the way and, understandably, many people simply fear being criticized about their work or experiencing failure. However, when a TMACT assessment is done well, it is a great learning experience for everyone involved. A good TMACT should lead to deeper insights about service interactions, program strengths, and what might need to change to make the program even better.

What you can expect during a TMACT assessment

Chronological order of events:


1. Once the TMACT has been given the green light, the next thing that needs to happen is for the TMACT evaluation team to connect with the Team Leader. Once that happens, they start setting dates. The Team Leader also gathers some data on a spreadsheet ahead of time.

2. One evaluator will have a separate call with the Team Leader to do an initial interview.

3. Two evaluators make a two day visit to the program and divide their work to:

a. Observe the daily meeting

b. Observe the weekly meeting

c. Ride along to observe home visits with team members

d. Interview team members

e. Interview a group of clients/consumers of the program

f. Gather remaining data from contact notes

g. Debrief meeting with the team

h. Connect with the Team Leader to gather any remaining data

4. The evaluators then leave and analyze the data that they have gathered to prepare a report. It’s possible that they will return to present the findings in person, however it may not be possible due to travel requirements.

You can see that a range of data is gathered from many different sources. This makes it possible to find patterns within the team’s work. For example, a pattern might unfold across interviews, spreadsheet data, community visits, and contact notes. Examples of patterns might be how well integrated employment supports are, or how the team may be missing some opportunities in addictions treatment. Over the years, I’ve seen many different patterns emerge from all of that data. Emerging patterns will depend on a lot of factors that come together at once.

Noteworthy interactions between the program and evaluators:

  • If you are being observed in a meeting or on a community visit, it’s really important to behave as you normally would in these situations. Meetings happen at a quick pace and you don’t want to miss something important because you were preoccupied with the evaluator. Likewise, during a community visit, a nervous staff member can trigger a client/consumer to behave in a way that reduces your ability to do your best work. So, if you feel nervous, take a minute to get grounded and into the present moment with your job, and let things unfold the way they normally would.

  • If you are asked to be interviewed, this is a great opportunity to explain your work to someone who is keenly interested in all the gory details of what you do every day! You might be amazed to find that you know a lot more about your work than you thought. If there is more than one of you being interviewed at a time, you will have a chance to gain insight from one another, which is an immediate benefit to the whole team. What a great win!

  • Your evaluators should give you opportunities to ask questions and get answers throughout the assessment. During the debrief meeting with the team, the evaluators might have additional questions intended to help them understand conflicting or unusual information that may have arisen. They may also give you an indication of how well the assessment is going so far, without getting into the results. The debrief meeting is also an opportunity for program staff to ask questions about the assessment and get clarity before the evaluators leave.

  • The client/consumer interview is a group interview, sometimes referred to as a focus group. This is usually a short meeting and the evaluator asks a few key questions that are related to the way that staff members behave with them. They will be asked about things like how their rights to housing and decisions about their health care are respected, and if they have been offered specific types of services. In my experience, clients are often quite generous with their praise of staff and express heartfelt gratitude for the ACT team’s services.

  • Once the program receives the report, you will find that it is filled with numbered ratings and detailed explanations. This is to help everyone understand how close the program is running to the way it is intended to run, which we know is better for clients. There will also be descriptions of the program’s strengths, and recommended areas for improvement. The strengths and recommendations come directly from the data that were collected from your program and arose from patterns found across multiple data sources. This gives a lot of weight to the trustworthiness of the results.

 

If you have any questions or comments about experiencing the TMACT, I’d love to hear from you!

3 Ways That Trustworthy Data Supports Leadership in the Homelessness Sector

August 30, 2021

Tracy Borrelli


This post was originally published as an article on my LinkedIn profile:

https://www.linkedin.com/pulse/3-ways-trustworthy-data-supports-leadership-sector-tracy-borrelli/

Program leaders in the homelessness sector often have access to loads of internal administrative data. They also have to deal with many data related problems like:

  • Maintaining standards of practice

  • Describing complex problems and solutions to the public

  • Pressure to secure funding

  • Developing powerful arguments for policy or regulatory changes

If that’s you or someone you know, you’ll probably agree that sometimes it’s obvious how to make the best use of that data, and sometimes it’s not.


I get pretty excited about the trustworthiness of data! What I mean by trustworthiness is data’s ability to help us answer specific questions accurately and consistently. It takes work to get it there, but trustworthy data leads to better problem solving and collaboration with partners who also work with the homeless.


To put it more specifically, trustworthy program data can support leaders by increasing the chances of:

  •  Better decision making

  •  Faster learning from failure

  •  Clearer communication

Here is a more detailed look at the 3 ways that trustworthy data supports leadership in the homelessness sector:

Making the best decisions possible

Decision making is one of the hardest and scariest things about leading in the homelessness sector. The problem of homelessness is incredibly complex with high stakes for everyone involved, from the individuals who experience homelessness, to those who are building community-wide strategies to address it.


Decision making in homelessness involves a lot of risk. The problem of homelessness is intertwined with other onerous problems like:


  • Serious mental illness

  • Racism

  • Higher burdens of disease

  • Poverty

  • Early death

There are also economic costs to the public, through higher rates of service use such as law enforcement, shelter use, addictions treatment, and hospitalization. With stakes this high, decisions can have serious consequences.


As an example, I once worked on a project with an agency that hadn’t checked their data quality for a very long time. They knew that they had a lot of clients who were not only homeless but who also had a high rate of serious mental illness. When I presented to them how often mental illness was found in their electronic records, they were surprised at how seldom it was recorded. When they started regularly updating the records to be more accurate, it became easy to justify formal partnerships with mental health professionals. This led to much more appropriate support for a very large group of clients.
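A data-quality check like the one in this story can be sketched very simply. The records and the `mental_illness` field below are made up for illustration; the point is only to show how a completeness rate — how often a field is actually filled in — might be computed.

```python
# Hypothetical client records; None means the field was never recorded.
records = [
    {"client_id": 1, "mental_illness": "schizophrenia"},
    {"client_id": 2, "mental_illness": None},
    {"client_id": 3, "mental_illness": None},
    {"client_id": 4, "mental_illness": "bipolar disorder"},
    {"client_id": 5, "mental_illness": None},
]

def completeness_rate(records, field):
    """Share of records where `field` is actually filled in."""
    filled = sum(1 for r in records if r.get(field) is not None)
    return filled / len(records)

rate = completeness_rate(records, "mental_illness")
print(f"{rate:.0%} of records have the field recorded")
# prints: 40% of records have the field recorded
```

Comparing a rate like this against what staff know to be true on the ground is often the quickest way to spot under-recording.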


Now, a trustworthy dataset won’t necessarily render decision making easy, I know, but it will definitely be less difficult when we can trust that the information is accurate, complete, and answers our specific questions as consistently as possible. The weaker the data are in any of those three domains, the riskier it becomes to make the best possible decisions.

Dealing with failure

Failure is a hot button topic for most leaders in general, and there are a range of ways to deal with it. For successful leaders, failure is an unforgettable teacher and having a trustworthy data set can help you learn from failure faster than if you have an inaccurate data set or no data at all. Monitoring a trustworthy data set means that you might be able to see a pernicious problem building up in advance, so that you can start trying to solve it before a failure happens. It can also help you to look backwards in time, after a failure, and see if there was something you could learn to avoid the same failure in the future.

Not long ago, I was completing an annual evaluation of a comprehensive homelessness program. Sometimes staff helped clients receive and take their medications and doing this was an important part of their success in the program. But it turned out that problems with medication were showing up in the evaluation data at that time, even though it was not part of the assessment. When I alerted program staff to this issue, they quickly started organizing a way to reduce these serious problems. The findings about the medication issues were a side note to that assessment and never made it into the final report. However, the program was able to immediately put measures in place that would prevent dangerous or even lethal complications with clients’ medications in the future.

There are other bonuses for leaders who actively learn from failure. The homelessness sector is intensely collaborative. Many other agencies and levels of government are working on the same problems with the same high stakes, and they are just as afraid of failure as anyone. If you have data that can shine a light on why a failure happened, you can help others by showing them exactly what to look for as well. The insight you bring to the table will improve problem solving conversations, and this will elevate your role as a collaborator in your community.

Clearer communication

It’s very common for leaders in the homelessness sector to feel intimidated and overwhelmed by all the risks, complexity, and types of available data. This stress load can lead us to shut down, misperceive frankness as aggressive criticism, or check out of a situation when we need to be fully present.


Our feelings of fear and anxiety when we are intimidated and overwhelmed can be reduced when we make time to explore, logically and without judgement, the connections between what our questions are and how well the data answer those questions. We can zero in on the problem that is actually being solved, and what is being left unsolved. In turn, this makes it easier to explain both what you know and don’t know about your efforts to solve complex homelessness problems. That kind of clarity can get you and others unstuck and moving forward again productively.

I’ve learned that leaders in homelessness don’t just work at the executive level. I have worked with highly skilled, highly regarded front-line staff who felt intimidated or afraid of their program being “evaluated”. This was mainly a fear of being criticized or having to change the way they worked. Sometimes, reports based on trustworthy data have shed light on how front-line work fits into the whole organization, and even benefits the whole community. This has profoundly changed workers’ perspectives about how others see their work. For example, I have seen front line workers go from being resistant to introducing a new service, to becoming its greatest champions. Or they have told me that they can go home and actually talk confidently about the benefits of their work in a way that friends and family can now understand.


I hope this article helped you to understand how having trustworthy data can help leaders in the homelessness sector to solve this complex problem. If you are on the road to building up your data to solve homelessness problems, I would love to hear your ideas!

Blog MODE

This blog contains my thoughts and advice to social sector leaders about doing program evaluation. 

 

If you are working in an agency that assists some of the most seriously marginalized people in today's western culture, trying to figure out an evaluation process can become overwhelming very quickly. 

 

I write about some of the tricky situations that we run into when designing evaluation processes, and how to handle them intelligently and with grace.

 

Tracy
