Some say Opportunity Zones aren’t doing the good they were meant to, but evolving data suggests those criticisms are premature
Ever since the Opportunity Zones initiative was created as part of the Tax Cuts and Jobs Act of 2017, there has been speculation as to whether or not the program would be effective, with plenty of declarations made, before any results were in, that it wouldn’t actually make an impact on distressed communities.
A recent study by the University of California at Berkeley echoes many of the most common critiques, namely that investment has been too heavily focused on real estate and is only helping already-affluent areas. Plenty of eager voices have concluded the program isn’t effective and can’t be salvaged.
While there are valid criticisms of the Opportunity Zones program (after all, it is still in its infancy and regulations should be refined based on what we learn from these first few years), that doesn’t mean the initiative itself has been a total failure.
The truth is, much of the data being used to critique the program is misleading and has severe limitations. Let’s take a look at some of these common criticisms and why the conclusions drawn from available data can be short-sighted.
Where the data falls short
There are two main critiques of Opportunity Zones that have been raised time and again: that OZ investment greatly favors real estate projects, which create fewer permanent employment opportunities for area residents, and that the majority of OZ investment is concentrated in a small number of areas that are not those of greatest need.
First and foremost, it’s important to note that almost all the data analyzed in the UC Berkeley study came from before the issuance of final regulations by the IRS in December 2019. Without this clarity in place, it should come as no surprise that investors gravitated to real estate projects, which tend to be relatively low-risk, simple investments. The hope would then be that after the regulations were issued, those numbers would change, and that’s largely what has happened.
Since that final guidance was issued, the number of funds with a focus on operating businesses – where critics want to see more investment – has steadily grown: in fact, half of all new fund formations in the second quarter of 2020 were focused on or included such investments.
It tracks, too, that the lack of clarity in the program’s infancy led OZ investors toward areas that were already showing signs of improvement. It should also be noted that many governors and mayors, as the Economic Innovation Group (EIG) has written, “deliberately chose to use OZ designation to…cement turnarounds in their priority neighborhoods.”
Ironically enough, one of the critiques of another program designed to help the country’s most distressed communities – the Low Income Housing Tax Credit (LIHTC) – is that there aren’t enough projects in “high-opportunity” areas, since moving to them can help children rise out of poverty.
Just as funds have diversified in terms of investment types, they’ve also diversified geographically. In Ohio, 28 tracts received investment in 2019, 50 did in 2020, and 46 have in the opening months of 2021 alone. In Indiana, OZ capital is helping convert an abandoned industrial site near Gary into a logistics hub, revitalizing the main street and preserving the local newspaper in rural Brookville, and contributing to affordable housing in Evansville, Seymour, and Indianapolis. EIG’s OZ Activity Map displays hundreds of similar investments in places from Alabama to rural Colorado to Baltimore.
The truth is that while OZ investment was initially heavily concentrated in real estate, those numbers are out of date. The same goes for the claim that investment was largely limited to areas already seeing improvement, and it’s debatable whether investing in those areas should be viewed negatively at all. So what can we really learn from available data?
What the data can really tell us
While many have pointed to the Berkeley study to note that early OZ investment was mostly in real estate and limited in geographical scope, the story should be that since 2019, investment has rapidly diversified, showing a great deal of promise for the future. The problem is not that the Berkeley data is wrong, merely that it’s premature because those numbers are improving.
Instead of looking at numbers in a vacuum, we should focus on how the data changes over time. We need to look at the pace of growth in OZ investment compared to similar government programs to anticipate where it’s headed, rather than drawing conclusions from the first two years.
These critiques are also shortsighted because of one of the program’s biggest flaws: the lack of impact reporting standards. Critics and proponents of the OZ initiative agree that a lack of impact reporting opens the door to fraud and abuse, and want to see proper requirements implemented. So why draw conclusions about the program’s effectiveness based on what we all agree are incomplete impact measurements?
That’s why JTC works with its clients to accurately analyze and report on impact, because investors are flocking to funds where impact is a priority. The best funds measure impact, and investors are showing that’s what they want, an encouraging sign for the future of the program. What we need are proper industry best practices, and funds that don’t follow them shouldn’t be held up as representative of the industry as a whole.
How to help the OZ initiative be successful
Industry leaders want to see the Opportunity Zones initiative succeed because it shows immense potential to help distressed communities in their recovery from COVID and beyond. They’ve heard these common criticisms, and don’t contest that improvements can be made, such as adding impact reporting requirements. What’s crucial is not to throw the baby out with the bathwater: we should properly analyze where OZ investment has been effective (and why) so the program can be improved.
The most important thing we can do now is demonstrate the actual impact OZ investment is having. We’ve seen that incomplete data like that used in the Berkeley study can lead inexperienced readers to draw conclusions that are premature, so full and accurate data like that supplied by JTC is crucial. If the program is to last, funds need to show they’re making an impact, and ultimately, those that are the most successful in terms of impact will be the most successful in attracting capital as well.