RIDDLE: How do you measure the success (or failure) of your planned giving program?
Today I’m very proud to provide a guest post written by one of MarketSmart’s brilliant and curious Account Managers—Lizzie Weiland. She’s got an interesting riddle to solve. Let’s see if you can help her figure it out.
It all started with what seemed to be a simple question.
I was speaking with a client recently about the success he’s been experiencing with our software and strategies when he asked a really great question: “How can I measure the success of my planned giving program in comparison to others?”
He already knows that he’s been discovering more “hidden” legacy gifts for his organization than ever before because he can easily compare past results with recent metrics. But he was wondering how his results compare to other nonprofits’ figures because a board member was pressing him for a report.
At first, he was hopeful that the answer could be boiled down to a simple benchmark formula, such as: “Nonprofits should attain one new planned gift intention per year per 1,000 active donors.”
That sounded great! But it seemed way too good to be a true standard for performance.
Can this question be answered at all?
If you think about it for a minute, it’s a real challenge. How exactly ARE fundraisers supposed to know how well THEIR planned giving program is doing compared to other nonprofits? How can they know if they are over-performing or under-performing? Does any simple metric exist?
I think it could be dangerous to compare one organization to another. After all, fundraisers always tell us that their organizations are different. Their donor lists are different. Their missions are different. And, their programs are different.
If that’s the case, how can an “industry standard” exist for a planned giving program comparison metric?
I tried to figure it out anyway.
Of course, we couldn’t tell him to just tell his board member, “It’s tricky! Our organization is different!”
So, instead, I began to examine several factors in an attempt to normalize the data for a comparative analysis:
- List size
- Data integrity
- Number of active donors versus inactive donors (or those who have never donated such as members or advocates)
- The level of donor engagement with the organization
The plot thickens.
The more I thought about each of those factors, the more challenging this problem became.
- List size. Each organization has a different number of records in their database. Some organizations have volunteers, members and advocates included while others only have donors. So should all the records be used as we calculate our number of legacy commitments per 1,000? Should that include volunteers, members, and advocates? Or just donors?
- Data integrity. What if the data isn’t good? That will affect the number of legacy commitments per 1,000 formula tremendously. For instance, was any part of the database purchased or appended? If so, should those records be included in the count? Did everyone in the database explicitly opt-in to the database? How did they get in there? Does the organization frequently remove unsubscribes, bounces, and bad mailing addresses to keep the list current? All of these factors impact the quality of a nonprofit’s data.
- Active donors vs. inactive (or non-donors such as members or advocates). If we decide to ignore the volunteers, members and advocates for our calculation, should we only look at active donors? I think doing so would be a big mistake. I’ve had several clients tell me that, more and more, they are receiving legacy gifts from people who never donated (non-donors). And in some cases, the number of legacy gifts from these non-donors is even higher than from their active donors. For instance, one of our clients receives 80% of their bequests from non-donors!
- Level of donor engagement with your organization. Some organizations are very good at engaging donors and cultivating relationships, while others ignore their donors or bombard them with email asks. Since engagement differs so tremendously from one organization to the next, wouldn’t it be safe to assume that highly engaged donors would be more likely to make a legacy gift? Conversely, wouldn’t we expect organizations with low levels of donor engagement to generate fewer legacy gifts? If list size, data integrity, and the count of active vs. inactive donors are messy factors, should we focus on the number of engaged donors instead?
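To see why the choice of denominator matters so much, here is a minimal sketch (all numbers are hypothetical, purely for illustration) of the same "legacy commitments per 1,000 records" metric computed four different ways. The same organization looks dramatically better or worse depending on which population you divide by:

```python
def commitments_per_1000(commitments, records):
    """Legacy gift commitments per 1,000 records in the chosen denominator."""
    return commitments / records * 1000

# Hypothetical organization: 25 new legacy commitments this year.
commitments = 25

# Four plausible choices of denominator (invented counts for illustration).
denominators = {
    "all records (donors + members + advocates + volunteers)": 120_000,
    "all donors (active + lapsed)": 60_000,
    "active donors only": 20_000,
    "highly engaged donors only": 5_000,
}

for label, n in denominators.items():
    rate = commitments_per_1000(commitments, n)
    print(f"{label}: {rate:.2f} per 1,000")
```

With these made-up numbers, the "rate" swings from about 0.2 to 5 commitments per 1,000 depending on the denominator, which is exactly why a single industry-wide benchmark formula is so hard to defend.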
Very challenging, isn’t it?
Eureka! How about industry growth?
After thinking about the above factors, I got frustrated. So I started Googling the subject a bit and I think I might have found the answer thanks to this Blackbaud white paper. According to one of its authors (Katherine Swank— one of our pals!), the best metric for determining how many legacy gift intentions you should be receiving each year would have to be based on the industry growth as a whole for legacy gifts made each year.
This seems to make a lot of sense.
The report states that planned gifts to charitable organizations have grown on average 4.5% to 5% every year, even in economic downturns.
I think my client should use this formula with his own year-over-year data to determine how he’s doing comparatively. For instance, if he received 40 intentions last year, then he should expect roughly 42 this year (a 5% increase).
In other words, he probably should look solely at his program and how it is growing compared to the very simple industry growth percentage.
What do you think? Agree?
Do you have a formula you’d recommend?