
Right-Sized Evaluation in Grant Writing: What It Is and How to Do It

 
 

Introduction: Why I'm Revisiting This Conversation Now

Here's something I hear from nonprofit leaders all the time: they know evaluation matters, but they're not sure they're doing it right.

Not because they don't care about outcomes — they got into this work precisely because they care about outcomes. The uncertainty comes from a very specific place: the feeling that whatever they're doing to measure their impact isn't rigorous enough, isn't fancy enough, or isn't generating the kind of data that will make a funder's eyes light up during a site visit.

Two years ago, I sat down with Mary Connor, co-founder of Soccer Without Borders, for my Spark the Fire Interviews series. Mary had just won first place in the GrantStation grant writing contest, and I wanted to pick her brain about what made her proposal stand out from the pack. We covered storytelling, empowering language, trust-based philanthropy—all the things I teach my grant writing students.

But the part of our conversation that has stuck with me most—the part I keep coming back to in my workshops, my classes, and my own late-night grant writing sessions—was what Mary said about evaluation.

I've been teaching grant writing and nonprofit capacity building for years, and I'll be honest: I wasn't expecting a conversation about evaluation to be the thing that kept me up at night. But here we are.

I'm bringing this interview back now because what Mary described two years ago has only become more urgent. Federal funding landscapes are shifting. Foundation giving is tightening. The competition for every grant dollar has intensified in ways that would have felt unthinkable a decade ago. Nonprofits are being asked to do more with less, prove more with fewer resources, and somehow demonstrate transformative impact on budgets that barely cover payroll.

In this environment, the organizations that will win funding aren't necessarily the ones with the biggest evaluation budgets or the fanciest data dashboards. They're the ones that can clearly articulate what they do, why it works, and how they know—without overpromising or pretending to be something they're not.

That's what right-sized evaluation is about. And Mary Connor gave me one of the best real-world examples I've ever seen.

Let me walk you through what I learned from our conversation and why it matters even more today than when we first talked.

What Is Right-Sized Evaluation, Anyway?

Right-sized evaluation is the practice of designing monitoring and evaluation systems that are proportionate to your organization's size, budget, capacity, and stage of development. It's the Goldilocks principle applied to data: not too much, not too little, but just right.

The term gained significant traction with the publication of The Goldilocks Challenge: Right-Fit Evidence for the Social Sector by Mary Kay Gugerty and Dean Karlan (Oxford University Press, 2018), which won the Terry McAdam Award for best book in nonprofit management. The book argues that organizations fall into one of three traps when it comes to monitoring and evaluating their programs: collecting too little data, collecting too much data, or collecting the wrong data entirely.

Sound familiar? If you've ever spent an entire afternoon wrestling with a logic model that felt more like a logic prison, you know exactly what they're talking about. I certainly have, and I see my students struggling with this every quarter.

Right-sized evaluation rejects the premise that every nonprofit needs to conduct a randomized controlled trial to prove it matters. Instead, it asks a much more useful question: What do we need to know to get better at what we do, and what can we credibly show our funders and stakeholders?

In today's hyper-competitive funding environment, that question isn't just philosophical. It's strategic. The organizations that can answer it clearly are the ones writing the proposals that rise to the top of the pile.

The Goldilocks Problem: Too Much, Too Little, or Just Wrong

In my years of teaching and consulting, I've seen three portraits play out again and again. I bet you'll recognize them instantly.

The Data Hoarder. This organization collects everything. Pre-tests, post-tests, quarterly surveys, annual surveys, focus groups, case studies, participation logs, attendance trackers, and probably the barometric pressure on the day of each program session. Their staff spends more time entering data than delivering programs. Their reports are 40 pages long. Nobody reads them. The data sits in a Google Drive folder that someone named "EVALUATION FINAL FINAL v3 (2)." Their program staff resents the paperwork. Their participants are exhausted from being surveyed. And when you ask the executive director what they've learned from all this data, they stare at you like you've asked them to explain quantum mechanics in Swahili.

The Data Avoider. This organization "knows" their program works because they can see it in the faces of the people they serve. They have powerful anecdotal evidence and heartfelt testimonials. When a funder asks about outcomes, they share a moving story and hope that's enough. Sometimes it is. Often it's not. They struggle to articulate what success looks like in measurable terms—not because they don't care, but because nobody ever taught them how. Their board meetings include a lot of nodding and phrases like "we're making a real difference."

The Wrong Data Collector. This is perhaps the most tragic case. This organization has been diligently measuring things that have nothing to do with their actual theory of change. They track outputs when they should track outcomes. They measure satisfaction when they should measure behavior change. They've been counting heads when they should be counting milestones. It's not that they're not working hard at evaluation. They're working hard at the wrong evaluation.

When I reviewed Mary's winning grant proposal, I could immediately see that Soccer Without Borders had avoided all three traps. That's rare. And in a field where reviewers are reading dozens or hundreds of proposals, it stands out like a neon sign.

Right-sized evaluation offers an escape from all three of these traps—and in a competitive funding landscape, it gives you a genuine edge.

Let the Academics Do the Heavy Lifting: How to Borrow Existing Research

Here's the most liberating idea from the entire right-sized evaluation philosophy, and the thing Mary said in our interview that made me literally stop and scribble a note to myself:

You don't have to re-prove what's already been proven.

Read that again. Tattoo it on your forearm. Put it on a coffee mug. I might actually make that mug.

When I asked Mary about the evaluation section of her proposal—which, I should say, was deep—she described something that I think every nonprofit leader and grant writer needs to hear. She told me that early in her time as executive director, she was taught that evaluation exists on a continuum, from basic monitoring all the way to long-term impact evaluation. And she said something I found profoundly honest: when you come out of an academic environment, everything points to the rigorous study as the be-all and end-all. But there are concrete steps you can take and a journey you can go on as an organization that starts well before you get to that level.

Soccer Without Borders made a deliberate choice to align their program design with existing research rather than trying to generate new research themselves. As Mary explained it to me, people smarter than her in specific academic disciplines have already demonstrated that mentoring relationships matter—that kids are more likely to experience academic and mental health benefits if they have a positive mentor or role model in their life. A coach can be that mentor. But the organization needed to show, through feedback and rigorous monitoring, that they had created the conditions for that relationship to form and that the kids actually experienced their coaches that way.

Her point was sharp: every community organization should not have to bear the burden of measuring and re-proving the same thing. If research has already demonstrated at a country level that every year of schooling adds an estimated 10% to lifetime income, then her organization's job was to keep kids in school. Their job was not to re-prove the link between education and economic outcomes. That's already been established.

When she said that, I wanted to stand up and applaud. Because this is exactly the mindset shift I've been trying to teach my students, and hearing it from a practitioner who had just won a national grant writing contest was the validation I needed.

Here's how this approach works in practice for your next proposal:

Step 1: Identify the evidence base for your work. What does the research literature say about the type of intervention you're delivering? If you run an after-school tutoring program, there are decades of research on effective tutoring practices. If you operate a food bank, there's robust evidence on the relationship between food security and health outcomes. You don't need to generate this evidence yourself—you need to use it.

Step 2: Design your program to align with the evidence. This means being intentional about program components. If research shows that mentoring relationships are a key driver of youth outcomes, you need to design your program to facilitate meaningful mentoring relationships—not just check a box that says "mentor assigned."

Step 3: Monitor fidelity of implementation. This is the piece most organizations skip, and it's the piece that made Mary's proposal sing. It's not enough to say you have a mentoring program. You need to demonstrate that the mentoring is actually happening the way you designed it. Do participants actually perceive their coaches as mentors? Do they feel safe? Is the dosage sufficient?

Step 4: Track realistic, right-sized outcomes. For Soccer Without Borders in Nicaragua, that meant tracking academic advancement—keeping kids in school past the documented drop-off points of fourth grade and the end of primary school. They could point to the research showing that each additional year of education produces measurable benefits, and then show that their program was contributing to keeping girls enrolled. They didn't need to measure the lifetime economic output of their participants 20 years from now. They needed to track whether girls were staying in school.

As Mary put it to me: her organization's contribution was to keep kids in school and create the best possible conditions for girls to reach their full potential. Their contribution was not to generate new research for the world. That distinction is everything—and in her grant proposal, she was able to tie their small, focused contribution to big frameworks like the Sustainable Development Goals and Ernst & Young's research on the benefit of sport for future women leaders. She described it to me as "A plus B equals we'll see"—if you design it with intention and rigorously measure that you're doing what you say you do, the reader can draw the link.

That's right-sized evaluation. And in a competitive grant landscape, it's devastatingly effective.

The CART Principles: Your New Best Friends

Gugerty and Karlan's Goldilocks Challenge introduces the CART framework, which gives organizations four principles for building a data strategy that's actually useful. I teach this to my students as a diagnostic checklist for whether your evaluation system is serving you or just torturing you.

C – Credible: Data are high quality and analyzed appropriately. Ask yourself: Would a skeptical but fair reviewer trust this evidence?

A – Actionable: Data will actually influence future decisions. Ask yourself: Will we change anything based on what we learn?

R – Responsible: Data collection creates more benefits than costs. Ask yourself: Is the burden on staff, clients, and partners justified by what we'll learn?

T – Transportable: Data builds knowledge that can be used in the future and by others. Ask yourself: Could another organization or our future selves use these findings?

The CART framework is powerful because it forces you to ask hard questions before you design your evaluation, not after. It's the organizational equivalent of measuring twice and cutting once.
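
If it helps to make the diagnostic concrete, here is one way it might look for an organization that keeps its metric list in a short script or spreadsheet. This is a minimal illustrative sketch, not a tool from the book or from Soccer Without Borders: the two metrics, the yes/no answers, and the exact wording of the questions are hypothetical placeholders you would replace with your own.

```python
# A sketch of the CART test as a per-metric checklist.
# The metrics and yes/no answers below are hypothetical placeholders.

CART_QUESTIONS = {
    "credible": "Would a skeptical but fair reviewer trust this evidence?",
    "actionable": "Will we change anything based on what we learn?",
    "responsible": "Is the burden on staff and participants justified?",
    "transportable": "Could another organization, or our future selves, use this?",
}

proposed_metrics = {
    "Five-question post-session participant feedback": {
        "credible": True, "actionable": True, "responsible": True, "transportable": True,
    },
    "Monthly 50-question assessment of every participant": {
        "credible": True, "actionable": False, "responsible": False, "transportable": False,
    },
}

for metric, answers in proposed_metrics.items():
    failing = [name for name, passes in answers.items() if not passes]
    if not failing:
        print(f"KEEP: '{metric}' passes all four CART questions")
    else:
        print(f"RECONSIDER: '{metric}'")
        for name in failing:
            print(f"  - {name.capitalize()}: {CART_QUESTIONS[name]}")
```

The point is not the code; it's the discipline of answering all four questions for every metric before it earns a place in your data plan.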

The "Responsible" principle deserves special attention because it's the one most nonprofits ignore. As the Bridgespan Group has noted, nonprofit staff time is limited, as is that of constituents and partners. Every minute a youth participant spends filling out a survey is a minute they're not in your program. Every hour a frontline staff member spends entering data is an hour they're not building the very relationships that your theory of change depends on. Right-sizing data collection means thinking carefully about the tools you use, the amount of data you collect, and the time it takes to collect it.

Perhaps sampling a representative set of participants tells you just as much as surveying everyone. Perhaps quarterly check-ins are more useful than monthly ones. Perhaps a five-question feedback form is more honest and actionable than a 50-question assessment that participants fill out with increasing resentment and decreasing accuracy.
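
For readers who track participants in a spreadsheet or database, here is roughly what the sampling idea can look like in practice. Again, a minimal sketch: the participant IDs and the 25% sample size are invented for illustration, and a real sampling plan would likely stratify the draw by site, age group, or program year.

```python
# A sketch of "survey a representative sample instead of everyone".
# Participant IDs and the 25% sample size are made up for illustration.
import random

participants = [f"participant_{i:03d}" for i in range(1, 241)]  # e.g., 240 enrolled youth

random.seed(2024)  # fixed seed so the draw is reproducible and can be documented
sample = random.sample(participants, k=60)  # 60 of 240 = a 25% sample

print(f"Surveying {len(sample)} of {len(participants)} participants")
print(sample[:5])  # first few selected IDs
```

Recording how the sample was drawn also gives reviewers confidence that it was representative rather than cherry-picked.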

When I look at grant proposals today—as a writer, a reviewer, and a teacher—I can tell immediately when an organization has thought through these principles versus when they've just thrown spaghetti at the evaluation wall. Reviewers can tell too. And when every point on a scoring rubric matters, that clarity is a competitive advantage.

A Real-World Case Study: Soccer Without Borders

When I first read Mary's winning proposal, I hadn't yet heard of Soccer Without Borders. By the time I finished, I was genuinely moved—not just by the work, but by the intellectual honesty of how they presented it.

Soccer Without Borders operates direct service programming in Nicaragua, Uganda, and across the United States, using soccer as a vehicle for youth development and education. Their Nicaragua program, the subject of the winning proposal, operates on what Mary described to me as approximately $120,000 or less.

That's not a budget that accommodates a formal impact evaluation team. And here's what I want every one of my students and colleagues to understand: it doesn't need to.

Here's what Soccer Without Borders did instead, and it's a masterclass in right-sized evaluation:

They identified the evidence base. Research overwhelmingly shows that mentoring relationships improve academic outcomes and mental health for youth. Post-Title IX research in the United States has documented the connection between women's participation in sport and advancement in business and education. Organizations like EY (Ernst & Young) have published research on the benefits of sport for future women leaders.

They designed with intention. The program wasn't just "soccer for kids." It was designed around evidence-based principles: creating safe spaces, fostering mentoring relationships between coaches and participants, building a pathway from participant to coach that develops local women leaders. When Mary first went to Nicaragua in late 2007/early 2008, there was essentially no women's sports infrastructure. They had to build it from the ground up—and they did so with the research literature as their guide.

They monitored what mattered. Instead of trying to measure everything, they focused on whether the conditions for success were being created. Did participants perceive their coaches as mentors? Did they feel safe? Was the program creating the relationship dynamics that research says produce positive outcomes? This is process evaluation—monitoring fidelity of implementation—and it's exactly right-sized for an organization of their scale.

They tracked realistic outcomes. Academic advancement. Keeping girls in school past the documented dropout cliffs. And here's something that gave me chills when Mary told me: more than two-thirds of the women now leading the program came through it as participants. Mary met them in 2008 as kids. Now they're coaching and paying it forward to girls in their own communities. That was always the vision—and it took years to realize. You can't achieve that in year one.

They connected the dots without overreaching. In her proposal, Mary linked their focused contribution to larger frameworks like the UN Sustainable Development Goals and EY's research on sport and women's leadership. She didn't pretend her $120,000 program had independently proven the long-term economic impact of girls' education. She showed that they were doing their part in a larger ecosystem, doing it well, and doing it consistently.

That is not a hedge. That is intellectual honesty. And when I tell you it's more compelling to a thoughtful reviewer than inflated claims of impact—I mean it. I've sat on enough review panels to know.

Building a Culture of Learning, Not Just Reporting

One of the most striking things Mary said in our interview—and the thing I now quote in almost every workshop I lead—was this: evaluation should be mostly about feedback and making your program better. If you create a culture of collecting information from your participants in order to improve your program, that's when you're going to get better. If you're only collecting information because you have to report it to somebody who doesn't really care, that's a problem. The data you collect should be stuff you actually want to know.

I remember nodding so hard I probably looked ridiculous on camera.

This distinction—between evaluation-for-learning and evaluation-for-compliance—is one of the most important in the nonprofit sector, and it maps perfectly onto the CART framework's "Actionable" principle. If you're not going to change anything based on the data, why are you collecting it?

Here's a practical test I now give my students that I developed after my conversation with Mary:

The Monday Morning Test. When your latest batch of participant feedback comes in, does your team gather around it on Monday morning, eager to see what it says? Or does it sit in someone's inbox until the quarterly report is due? If it's the former, you have a learning culture. If it's the latter, you have a compliance culture. The data doesn't change, but the organizational posture toward it is everything.

Organizations with genuine learning cultures tend to collect less data overall, but they use what they collect more intensively. They have shorter surveys with more focused questions. Their staff meetings include time to discuss what the data is telling them. Program adjustments happen in real time, not once a year when the evaluation report drops.

Mary's advice was refreshingly direct: don't try to do too much or pretend your program is doing too much. Be proud of what you're actually doing and show the thought you've put into it.

In a grant landscape where reviewers are reading their thirtieth proposal of the week, that kind of clarity and confidence is magnetic. It's the difference between an organization that knows who it is and one that's trying to be everything to everyone.

Empowering Language and the Evaluation Connection

This is where our conversation took a turn that connected two of my greatest passions—storytelling and evaluation—in a way I think many organizations miss.

As many of you know, I've been traveling around the country leading workshops at conferences on using empowering language in grant writing and storytelling. This work is deeply personal to me, and during my interview with Mary, I shared research (conducted at Stanford) finding that program participants have better outcomes when empowering language is used, and that donors are just as motivated to give by empowering language as by deficit-based or shaming language.

That was a lightbulb moment in our conversation because it connects directly to how organizations frame their evaluation data.

Consider two ways of presenting the same findings:

Deficit framing: "75% of participants came from food-insecure households. After 12 months in our program, food insecurity dropped to 40%."

Strengths-based framing: "Participants in our program demonstrated remarkable resilience, with 60% of families achieving food security within 12 months—a journey supported by our wraparound services and the community networks families built together."

Same data: if 40% of households are still food insecure at 12 months, then 60% are food secure. Radically different narrative. The first positions your participants as problems to be solved. The second positions them as agents of their own transformation.

What I noticed in Mary's proposal—and what I told her during our interview—was that the quotes she used from participants showcased individuals on a journey. She didn't pull a quote about how terrible life was before Soccer Without Borders arrived. She chose quotes that showed people in motion, growing, leading. The women in the program weren't victims of their circumstances. They were, as Mary said of one program leader named Natalia, incredible people born into different circumstances, and their resilience and strength should be how they show up in an application.

Mary wrote the proposal from a first-person plural perspective—"we do this, this is our vision and story"—and she named leaders by name. She told me she intentionally tried to center the staff and kids in Nicaragua as the heroes of the story, not the American writing the proposal. And they are the heroes—that wasn't a narrative strategy, it was the truth.

Soccer Without Borders even invited researchers to examine whether their program dynamics fell into a white savior narrative—and received positive feedback that their model of authentic collaboration and intentional development of local women leaders transcended those dynamics.

The lesson for your next proposal? Your evaluation data isn't just numbers. It's a story. And how you tell that story—whether in a grant proposal, a board report, or an annual impact summary—either honors or diminishes the people at the center of your work. I wish this could become not just a norm but a rule in our field: participants are the heroes, never the victims.

Trust-Based Philanthropy and the Shifting Evaluation Landscape

Our interview touched on trust-based philanthropy, and two years later, this topic has only grown more urgent.

I asked Mary about trust-based philanthropy because I believe there is a change coming in terms of how proposals are submitted and how organizations demonstrate their work. Mary's response was one of the most honest reflections on equity in grant writing I've ever heard.

She pointed out something I think about constantly: she has a master's degree in social sciences and an undergraduate degree in a writing-heavy major, and the winning grant proposal still took her over 40 hours and three weeks to write—with help from a colleague. If the system requires that level of education and expertise to even submit a proposal, how are we going to shift power and investment dynamics into communities that have been historically underinvested in? She raised the layers that compound this problem: language barriers, technology access, the simple fact that her staff in Nicaragua—the very people whose work the proposal described—could not have applied directly under the current system.

That hit me hard. I teach grant writing, which means I'm operating within a system that I also believe needs fundamental reform. Trust-based philanthropy and right-sized evaluation share a common ancestor: the recognition that the current system of demonstrating impact creates costs that often outweigh benefits, especially for small organizations and those led by and serving marginalized communities.

Two years ago when Mary and I talked, trust-based philanthropy felt like a promising trend. Today, as competition for grant awards has intensified and many organizations are fighting for survival, the need for a more equitable and proportionate approach to evaluation isn't just nice-to-have. It's essential. The organizations doing the deepest work in the most underserved communities are often the least equipped to navigate a 20-page narrative with 40+ questions and a multi-tab budget spreadsheet—not because they lack capability, but because the system wasn't designed for them.

I'm hopeful. I'm seeing movement toward more streamlined applications, shared measurement systems, and funders who understand that not every grantee needs to independently prove what the evidence base has already established. But we're not there yet. And until we are, right-sized evaluation gives smaller organizations a way to compete with honesty and integrity rather than bluster and bloat.

Practical Steps to Right-Size Your Evaluation Today

Ready to stop over-measuring, under-measuring, or wrong-measuring? Here's the roadmap I share with my students, informed by my conversation with Mary and grounded in the Goldilocks framework.

1. Start with your theory of change. Before you collect a single data point, articulate why you believe your activities will produce your intended outcomes. What are the causal links? What evidence supports those links? This is the foundation everything else builds on. Mary was taught this early in her leadership journey—that evaluation starts with evidence-based design, not with a survey instrument.

2. Conduct a literature scan. You don't need a PhD for this. Google Scholar, SSRN, and plain-language research summaries from organizations like the Bridgespan Group, SSIR (Stanford Social Innovation Review), and Child Trends can give you a solid grounding in the evidence base for your type of intervention. What has already been proven? What don't you need to re-prove?

3. Design your program to align with the evidence. Be intentional about program components. If research says dosage matters, track dosage. If research says relationship quality matters, design for it and measure it. Soccer Without Borders didn't accidentally create a mentoring program—they designed one based on what the evidence said works.

4. Apply the CART test to every metric. For each piece of data you plan to collect, ask: Is it credible? Is it actionable? Is it responsible? Is it transportable? If the answer to any of these is no, reconsider.

5. Monitor implementation fidelity. Are you doing what you said you'd do? This is often the most neglected layer of evaluation and the one most useful for organizational learning—and, I'd argue, the most persuasive in a grant proposal. Process data—participation rates, session quality, participant feedback, staff observations—helps you improve in real time and shows reviewers that you're serious about quality.

6. Track realistic outcomes. Choose outcomes that are ambitious but achievable within a reasonable timeframe. Not everything has to be a long-term impact measure. Short-term outcomes (knowledge gained, attitudes shifted, behaviors adopted) and medium-term outcomes (school retention, employment, health indicators) are valuable and measurable. As Mary told me: pick what's right-sized for your organization and say, "Here, this is what we can control. This is what our program is designed to do."

7. Build a feedback loop. Evaluation without action is just expensive curiosity. Create structures—staff meetings, quarterly reviews, annual retreats—where data is discussed and used to make real program decisions. Make it stuff you actually want to know.

8. Tell the story honestly. Connect your data to the bigger picture using the evidence base. Show how your right-sized contribution fits into a larger ecosystem of change. Don't overstate. Don't understate. Be proud of what you're doing and show the thought behind it. In Mary's words: that needs to be enough. And having sat on review panels and judged proposals, I can tell you—when it's done well, it absolutely is.

FAQ

Q: What's the difference between monitoring and evaluation?

Monitoring is the ongoing, routine collection of data about your program's activities and outputs—think of it as checking the dashboard while you're driving. Evaluation is a more formal assessment of whether your program is achieving its intended outcomes—more like an annual vehicle inspection. Both are important, and right-sized evaluation includes both. For most small and mid-sized organizations, strong monitoring is actually more valuable on a day-to-day basis than formal evaluation. Mary described this continuum beautifully—from measurement to monitoring to long-term impact evaluation—and emphasized that the early stages are where most organizations should focus their energy.

Q: Does right-sized evaluation mean I can just skip the hard stuff?

Absolutely not. Right-sized doesn't mean easy or superficial. It means proportionate and strategic. You still need rigor—your data still needs to be credible and high quality. What changes is the scope and methodology. A well-designed pre/post survey with thoughtful questions can be far more valuable than a poorly executed quasi-experimental design.

Q: What if my funder requires a formal impact evaluation?

Start a conversation. Many funders are open to discussing what "evaluation" means in context. Some may be satisfied with strong monitoring data and evidence of your program's alignment with existing research. Others may have specific requirements that you need to meet. Either way, the right-sized framework gives you a language for having a more productive conversation about what evidence is appropriate for your organization's size and stage. If you can articulate your theory of change and show how your monitoring data connects to a broader evidence base, you're in a strong position.

Q: How do I find existing research to support my program design?

Start with these free or low-cost resources: Google Scholar for academic research; the Bridgespan Group's practical guides; Stanford Social Innovation Review (SSIR) for practitioner-oriented analysis; Child Trends for youth-serving organizations; the Urban Institute's Outcome Indicators Project for sector-specific metrics; and the book The Goldilocks Challenge by Gugerty and Karlan for a comprehensive framework. You can also look at what indicators peer organizations are using and what outcomes their funders have accepted.

Q: Won't funders think we're lazy if we don't do our own impact evaluation?

This is the fear I hear from my students all the time, but the reality is shifting. Sophisticated funders increasingly understand that expecting a $100,000 program to produce the same quality of evidence as a multi-million-dollar research study is neither reasonable nor efficient. What funders want to see is thoughtfulness: a clear theory of change, evidence-informed program design, strong implementation monitoring, and honest reporting on realistic outcomes. Mary Connor's grant proposal—which won first place in a national contest—did exactly this. She didn't pretend Soccer Without Borders had independently proven the long-term economic impact of girls' education. She showed that they were doing their part in a larger ecosystem, doing it well, and doing it consistently. The judges loved it. Enough said.

Q: How does right-sized evaluation relate to trust-based philanthropy?

They're natural allies. Trust-based philanthropy advocates for reducing the reporting burden on nonprofits, providing multi-year unrestricted funding, and trusting organizations to use data for their own learning and improvement rather than solely for donor accountability. Right-sized evaluation provides the practical framework for what evaluation looks like in a trust-based relationship: proportionate, learning-oriented, and honest about what a given organization can and should be measuring.

Q: What is the CART framework?

CART stands for Credible, Actionable, Responsible, and Transportable. Developed by Mary Kay Gugerty and Dean Karlan in The Goldilocks Challenge, it's a set of principles for building data collection systems that are useful rather than burdensome. Data should be high quality and trustworthy (Credible), should inform real decisions (Actionable), should create more benefits than costs for staff and participants (Responsible), and should build knowledge that can be used by others and in the future (Transportable).

Q: Can small organizations really do meaningful evaluation?

Yes—and in some ways, they can do it better than large ones. Small organizations are closer to their participants, can iterate faster, and can build genuine feedback loops without bureaucratic overhead. The key is focusing on what matters most. As the Bridgespan Group advises, prioritize a shorter list of outcomes, think carefully about how you collect data (sampling may be just as informative as surveying everyone), and focus on what's important rather than burying yourself in a mountain of data. Soccer Without Borders is living proof that a $120,000 program can have a world-class evaluation strategy—one that won a national competition, no less.

Q: How does right-sized evaluation give me a competitive advantage in grant writing?

In a crowded field, reviewers can immediately tell the difference between an organization that has genuinely thought through its evaluation approach and one that has copy-pasted boilerplate language about "pre/post assessments" and "continuous quality improvement." When you ground your evaluation in existing research, monitor implementation with intention, and track outcomes that are clearly within your sphere of influence, your proposal reads as confident, credible, and self-aware. That's exactly what Mary did—and it's exactly what reviewers are looking for.


Your Next Step

You don't need a massive evaluation budget or a PhD in research methods to write a compelling evaluation section. What you need is a clear theory of change, an understanding of the evidence base behind your work, and the confidence to be honest about what your organization can realistically measure and achieve.

If this article resonated with you, I'd encourage you to start with one action this week: identify one piece of published research that supports your program's approach and bookmark it for your next proposal. That's the first step toward right-sized evaluation — and toward an evaluation section that reviewers will actually believe.

Want to go deeper? Watch my full conversation with Mary Connor of Soccer Without Borders on the Spark the Fire Interviews, and pick up a copy of The Goldilocks Challenge by Gugerty and Karlan. And if you're ready to build the skills to turn insights like these into winning proposals, check out my Certificate in Grant Writing course — where we cover everything from evaluation design to empowering storytelling so you can write with confidence from the very first draft.

About the Author

Allison Jones, CEO and Founder of Spark the Fire Grant Writing Classes, LLC, built one of the highest-rated grant writing education programs in the world, recognized for four consecutive years. She holds the Grant Professional Certified (GPC) credential, is one of only 30 nationally approved trainers by the Grant Professionals Certification Institute, and has trained over 5,000 grant writers. Her book Meaningful Grant Writing is forthcoming in 2026.

Your Turn! Reply and Comment

👉 Now I'm curious—what's been your biggest challenge in right-sizing your organization's evaluation? Have you tried any of these approaches, and if so, what worked or didn't work for you? Share your experience in the comments below.

Want more grant writing tips delivered to your inbox? Subscribe to the Spark the Fire Newsletter.

Two years ago, my conversation with Mary Connor of Soccer Without Borders changed how I think about evaluation, and it continues to shape how I teach grant writing today. In a funding landscape that's more competitive than ever, the principles of right-sized evaluation aren't just academically interesting—they're a survival strategy.

For more on the Goldilocks framework, I recommend The Goldilocks Challenge: Right-Fit Evidence for the Social Sector by Mary Kay Gugerty and Dean Karlan (Oxford University Press, 2018). To learn more about Soccer Without Borders and their award-winning work in Nicaragua, Uganda, and across the United States, visit their website.

And if you're a grant writer or nonprofit leader wrestling with evaluation, know this: you don't have to be Harvard. You just have to be honest, intentional, and right-sized.