In the last recession, the American Recovery and Reinvestment Act (ARRA) established a $7 billion pool for states to modernize their unemployment systems. Applications for these funds were due in August 2011.1 States could use the funds to pay UI benefits directly, or to improve their unemployment services programs.
In 2009, states were granted a $500 million fund for unemployment “modernization.”2 These funds3 could be spent on administrative improvements, but they couldn’t be used to pay benefits directly. Over 80% of states reported that they would invest these funds in technology improvements.4
In retrospect, these millions do not appear to have meaningfully improved any state’s unemployment benefit system or capacity to handle the current increase in unemployment claims. Post-recession surveys revealed that most states cited their greatest resulting wins as “youth summer camps,”6 which weren’t a top concern coming out of the pandemic for anyone we spoke with.
According to NASWA, “Spending funds quickly and in a timely manner was also a challenge frequently cited” regarding modernization funds in the last recession.7
How can we do it better this time around? Instead of attaching funds to process measures, like hours of training delivered or youth summer camps created, U.S. DOL can attach funds to success metrics. It can fund demonstration projects tightly coupled to goals, which in turn can develop the insight and collateral for other states to achieve those same (or better) goals.
Systems respond to metrics and incentives. The more that states, partners, and U.S. DOL can focus attention on claimant-centric success metrics, the better the experience will be for those who depend on it.
The success of efforts to improve unemployment benefits delivery can’t be measured in hours invested, dollars spent, or lines of COBOL eliminated — success is determined by the impact for real claimants in the real world.
“The DOL regime should include basic measures of success and failure (including adequate customer service) that can be assigned a grade that should be prominently featured on the DOL website to provide transparency to the public and compare the operation of programs across the states.”8 — National Employment Law Project
States must fight fraud. But reactionary measures in the moment, like blocking claimants who use the same address or who switch bank accounts frequently (measures that, in the end, don’t protect against organized crime efforts, either), hurt claimants without truly addressing fraud.
Leaders need to decide whether their goal is to deliver timely, adequate benefit payments to the unemployed, or to continue rewarding non-payment and “fraud” avoidance at the expense of delivering benefits.
“Part of this increase in erroneous denial has to do with the fact that systems have been over-calibrated to prevent overpayments at the expense of paying appropriate benefits.”9 — National Employment Law Project
The cost of erroneously flagging fraud can be stark:
In the end, MiDAS [Michigan Integrated Data Automated System] flagged nearly 40,000 workers for fraud, in which a staggering 93 percent of those were inaccurate, according to NELP. What’s worse, the penalty for fraud in Michigan is four times the amount paid, plus 12 percent interest; and many of those affected by these measures lost everything. Detroit Attorney Jonathan Marko, who represented Michigan residents in bringing a claim against the state, said: “Some of these people committed suicide. Some lost their homes. Some had to declare bankruptcy.”10 — New America Foundation
We need thoughtfully-crafted success metrics that consider claimants, claims processors, and organized crime. And the first measure of success should be, “Did all eligible people receive benefits?” rather than “How many ineligible people received benefits?”11
We know that unemployment benefits aren’t delivered equitably.12 To change this, we need to measure where we’re going wrong. Unemployment can learn from the financial industry’s strategies for detecting bias in outcomes:
By carefully altering the way different demographic groups are assigned to protected or sensitive classes, and ensuring these groups have equal predictive values and equality across false positive and false negative rates, you can better detect bias in your AI. These five steps can help you detect bias in your algorithms:
A critical step for ensuring equitable access to benefits is collecting demographic information.
U.S. DOL could create blanket permission and best practices for collecting and measuring consistent demographic information for equitable outcome analysis.
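The error-rate comparison described above can be sketched in code. This is a minimal illustration with hypothetical records and group labels, not a production bias audit:

```python
from collections import defaultdict

def group_error_rates(records):
    """Tally false positive and false negative rates per demographic group.

    Each record is (group, actual_fraud, flagged_fraud) — hypothetical fields.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, actual, flagged in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not flagged:
                c["fn"] += 1
        else:
            c["neg"] += 1
            if flagged:
                c["fp"] += 1
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for g, c in counts.items()
    }

# Made-up claim records for two groups
records = [
    ("A", False, False), ("A", False, True), ("A", True, True), ("A", True, True),
    ("B", False, True), ("B", False, True), ("B", True, True), ("B", False, False),
]
rates = group_error_rates(records)
# A large gap in false positive rates between groups is a bias signal
gap = abs(rates["A"]["false_positive_rate"] - rates["B"]["false_positive_rate"])
```

The same comparison generalizes to any pair of outcome rates (denial rates, time-to-payment, appeal reversal rates) once demographic data is consistently collected.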
Today, U.S. DOL measures states’ timeliness in handling first payments, continued claims, nonmonetary adjudication determinations, and appeals, as well as the quality of adjudication determinations. States track the percentage of:
All successes reported for the American Recovery and Reinvestment Act (ARRA) referenced training hours delivered, not jobs secured.13 With the American Rescue Plan (ARP), we have an opportunity to refocus success metrics on outcomes, not intermediate steps that may or may not result in desired outcomes.
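As an example of an outcome-oriented metric, first-payment timeliness can be computed directly from claim data. The 21-day threshold and the sample below are illustrative only:

```python
def first_payment_timeliness(days_to_first_payment, threshold_days=21):
    """Share of first payments issued within the threshold number of days.

    The 21-day threshold is an illustration, not the official standard.
    """
    if not days_to_first_payment:
        return 0.0
    timely = sum(1 for d in days_to_first_payment if d <= threshold_days)
    return timely / len(days_to_first_payment)

# Hypothetical sample: days from claim filing to first payment
sample = [7, 10, 35, 14, 60, 18, 21, 90]
rate = first_payment_timeliness(sample)  # 5 of 8 within 21 days → 0.625
```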
“For example, what if we measured by benefits-participation rates and focused on people who are eligible but not enrolled? What if we decided success looked like reducing the billions of federal dollars that are left on the table every year because people don’t realize they’re eligible for benefits programs, or by how close we were to reaching every hungry child in the country and eliminating child hunger?”
“What if we consistently measured customer satisfaction? We’re all asked to leave reviews or give feedback to almost every private-sector company we interact with. Could more government services ask the public how satisfied they are with their experience, or how it might be upgraded in a meaningful way?”
“And what if we measured for efficiency and efficacy? Not in dollars spent by government but in the time it takes to deliver benefits that are sufficient to meet people’s needs.”14
“When data are connected to vision, conversations become customer- and family-centered. SNAP and Medicaid participation rates won’t just be numbers; they’ll instead indicate whether families are accessing the full package of benefits for which they are eligible. The new set of questions may be: How quickly do families get benefits? How much delay do they experience between each step of their process? How many eligible families have their cases closed for administrative reasons and experience churn at redetermination? How are customers’ experiences and access to programs changing over time? Are we providing consistent customer service no matter which office someone walks into? Data should be used to illustrate and examine customer service and the agency’s impact on families.”15
Success metrics should drive behaviors we want, instead of explicitly defining the “how.”
Requiring a service level agreement (SLA) of 99.9% uptime effectively requires cloud, but it sets a much clearer expectation. This performance-based expectation avoids the unfortunate circumstance of moving an aged system into the cloud and still having 6 hours of downtime per day.
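The contrast is stark when you do the arithmetic: a 99.9% SLA allows roughly 8.8 hours of downtime per year, or about 1.4 minutes per day. A minimal sketch:

```python
def downtime_budget(sla_percent, period_hours=24 * 365):
    """Maximum allowed downtime, in hours, over a period for a given SLA."""
    return period_hours * (1 - sla_percent / 100)

yearly = downtime_budget(99.9)       # ≈ 8.76 hours per year
daily = downtime_budget(99.9, 24)    # ≈ 0.024 hours ≈ 1.4 minutes per day
```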
Here are some other examples of putting the what before the how:
A third party must be able to validate adherence to any given success metric, preferably in real-time, and not through a manually-generated report submitted once a year. For example, key metrics need to be re-evaluated after every functionality change to monitor for unforeseen (or hoped-for!) impact.
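Re-evaluating a key metric after every functionality change could be as simple as a benchmark gate in a deployment pipeline. The metric values and tolerance below are hypothetical:

```python
def check_metric(before, after, tolerance=0.02):
    """Return (ok, delta): ok is False when the metric drops by more than
    the tolerance. Threshold is an illustrative policy lever."""
    delta = after - before
    return (delta >= -tolerance, delta)

# Hypothetical first-contact resolution rate, before and after a release
ok, delta = check_metric(before=0.91, after=0.84)
# ok is False: the 7-point drop exceeds the 2-point tolerance and should
# trigger review before the change ships
```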
Possible claimant-centric success metrics can include:
Code for America’s “The Status Quo of Safety Net Assessment” should be required reading for anyone considering new unemployment-related success metrics.
There’s currently no disincentive for employers who contest all unemployment claims.16 We recommend that U.S. DOL introduce a counter-measure that penalizes employers who contest too many claims that are ultimately decided in the claimant’s favor.
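One possible shape for such a counter-measure, with illustrative thresholds (the minimum contest count and loss-rate cutoff are assumptions, not U.S. DOL policy):

```python
def flag_habitual_contesters(outcomes, min_contests=10, loss_rate_threshold=0.8):
    """Flag employers who contest many claims and lose most of them.

    outcomes maps employer -> (claims_contested, decided_for_claimant).
    """
    flagged = []
    for employer, (contested, lost) in outcomes.items():
        if contested >= min_contests and lost / contested >= loss_rate_threshold:
            flagged.append(employer)
    return flagged

# Made-up employer data
outcomes = {"Acme": (25, 22), "Bravo": (4, 4), "Cargo": (30, 9)}
flagged = flag_habitual_contesters(outcomes)  # only "Acme" qualifies
```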
This section was written in collaboration with Cassandra Madison of The Tech Talent Project and the California Unemployment Strike Team.
Many states’ unemployment insurance issues are rooted in similar problems — technical systems, business processes, staffing models that aren’t built to withstand crisis-level demand, and limited or no shared understanding of data. These underlying problems also manifest in similar ways across states, creating huge backlogs that leave Americans waiting for financial support when they are most vulnerable and systems wracked by fraud, costing millions of dollars.
Strike teams tasked with improving unemployment insurance programs design and implement strategies to quickly fix the root causes of systemic failures.
Groups looking to drive local progress will need to understand the specifics of each state’s architectural, cultural, staffing, and vendor challenges. This is particularly true for any group trying to help from the outside.
Investing the time to understand each state’s challenges through upfront discovery work will increase the chances of success in the short term, building trust with teams on the ground while maintaining the executive alignment needed to spur longer-term progress.
While the incremental roadmap to improvement will look different in each state, there are likely to be clusters of states with similar problems, opening up the possibility for shared technical approaches and/or teams across states and perhaps even the development of key shared services at the federal level. The discovery work will be critical in illuminating these patterns and associated opportunities.
With this in mind, we recommend launching a phased approach to the strike force rollout that’s rooted in the discovery sprint process. Taking this approach will provide valuable insights and allow U.S. DOL to make meaningful progress quickly, while remaining nimble and responsive to the information that emerges.
Gather basic information on the processing systems in each state, so that we can sort them into “mainframes” vs “Oracle databases” vs whatever. This could be over email or one structured interview per state. We recommend some questions here.
Pick no more than 5 states that represent the diversity of systems and sizes (based on Phase 0), and launch a 4-week discovery process in each state to help identify key issues, pain points, root causes, and key players. This information will be used to craft a prioritized roadmap of both quick wins and longer-term needs and to identify the specialized skills sets needed by implementation teams on the ground. This will also help U.S. DOL identify common issues across states, allowing for potential collaboration and the development of 2 to 3 central solutions.
For issues that won’t be addressed with a shared service, these initial assessments can lay the groundwork for how U.S. DOL will award and measure the success of individual state grants. There should be an emphasis on defining and measuring outcomes such as modern payment timeliness, first contact resolution rates, and elastic claim processing capacity.
Use the information gathered in Phase 1 to inform the Phase 2 approach and the initial resources allocated to the discovery process. We recommend waiting to launch Phase 2 until meaningful work is underway to improve service in Phase 1 states, as that work will need to continue as the Phase 2 discovery work is launched.
The insights gathered from both Phase 1 and Phase 2 should be continuous and the information used to iterate on the roadmap, approach, and team composition.
We recommend that U.S. DOL deploy floating implementation teams to support the work of strike teams. These teams would work under the direction of each state team to solve specific problems.
As the discovery sprint work progresses in multiple states, you’ll get a better idea of what kinds of teams are needed. There are likely to be similar technical and business process problems faced across states, opening up the possibility that support teams and solutions can be shared.
While we can speculate about what at least a few of these teams may be, any decisions about team composition or focus should wait until at least the first phase of discovery sprints is underway and there’s an early roadmap.
A reliable integrated command center provides a mechanism for making strategic, timely decisions — both in a crisis and during calmer times. To be effective, the integrated command center needs to be highly visible across the organization and have the teeth to make decisions and changes rapidly.
California has been running a successful integrated command center on a weekly basis since Fall 2020. Every week, the team gets together to:
If you don’t have enterprise-wide data visibility or the support for an integrated command center just yet, you can start on a smaller scale. Or you could start with one data source and what you can learn from it. Rhode Island regularly reviews the 400 codes that kick applications out of the automated process (“clean claims”) to manual review. This helps them identify opportunities to increase automation.
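Starting with a single data source can be as simple as tallying which rejection codes most often push claims out of automated processing. A minimal sketch with made-up codes (not Rhode Island’s actual codes):

```python
from collections import Counter

# Hypothetical stream of (claim_id, rejection_code) events from an
# automated-processing log
events = [
    (1, "WAGE_MISMATCH"), (2, "ID_UNVERIFIED"), (3, "WAGE_MISMATCH"),
    (4, "EMPLOYER_DISPUTE"), (5, "WAGE_MISMATCH"), (6, "ID_UNVERIFIED"),
]

top_codes = Counter(code for _, code in events).most_common(3)
# Reviewing the most frequent codes first points to the automation fixes
# with the biggest impact on manual-review volume
```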
Or you could start with a weekly meeting focused on the top 2 issues that are known to you, as well as on how to develop more robust data capability.
State demonstration projects (pilots) are an excellent way to determine the “art of the possible” for reaching specific metrics and milestones. For example, a state may want to pilot:
There are smaller-scale projects needed, too, like:
A demonstration project could result in shareable code, but is more likely to result in other forms of shared collateral, such as:
The best places for demonstration projects are states that want to be pilot sites and are fully invested in determining the art of the possible for increasing access and service to claimants. This approach pairs nicely with the strangler pattern, with a working group as a convenient way to disseminate successes.
Texas WIC shared that when they introduce new ideas and features to their technology vendors, they have the vendor team share no-code mockups or prototypes and explain how the feature would impact operations before any code is written. This enabled the WIC program to better understand and direct the development efforts and ensure the final product met their needs.
President Biden’s Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the Federal Government: “As part of this study, the Director of OMB shall consider whether to recommend that agencies employ pilot programs to test model assessment tools and assist agencies in doing so.” U.S. DOL and/or private philanthropy could fund specific demonstration projects related to specific goals.
Apart from specific problems as suggested above, states can also model improvement on specific unemployment modules.
Possible demonstration pilots outlined in this report include:
When deciding which states to work with on demonstration projects, in working groups, or in other contexts, it can be helpful to have a framework by which to group them into cohorts based on different topics and dimensions. Conducting an ecosystem survey can help identify some patterns and groupings.
A cohort isn’t meant to be “bad” or “good,” but merely a useful grouping.17 For example, if you were interested in running a demonstration project to develop a more useful contact center taxonomy, you probably wouldn’t want to start with a state that doesn’t have a contact center.
While states are experiencing similar UI problems, the political/cultural landscape and level of digital maturity will vary from state to state.
These are obvious first choice states because they have some modern technical expertise, technical resources, and a high commitment and openness to doing things differently. We recommend these as a Phase 1 cohort, because there will be fewer cultural barriers and they’ll be able to make more progress quickly.
These states are in high need, eager for help, and open to doing things differently. But they differ from digital leaders, because they have little to no experience with modern tech principles and practices. Progress may be slower, but there is a solid opportunity to build the foundation for more lasting institutional change. We also recommend early adopters as Phase 1.
States in this cohort are in high need and “open to help,” but they dislike disruption. Key leadership may say that they’re committed to U.S. DOL, but they express skepticism to their teams privately. They can also create barriers to system access, slow-walk new initiatives, or find a reason to end the partnership early.
To be successful in these states, U.S. DOL will need demonstrated commitment at key executive levels on the business and tech side (CIO and labor leadership). Recognize that longer-term change may or may not be possible here, and place them in Phase 2.
These states don’t want help and are taking action to undermine or disrupt benefit programs. We don’t recommend providing assistance to these states at this time.
Other cohort variables could include:
Working groups are an effective strategy for providing a safe, practical place to troubleshoot challenges and share solutions.
States that participated in the Center for Law and Social Policy’s Work Support Strategies (WSS) Initiative credited participation in multi-state learning communities as a main driver of success. According to participants in the WSS, working groups can provide opportunities to:
This group of 18 states is generating a collection of promising practices and sharing resources. The following guidelines are the foundation of their success:
This allows the group to stay focused on manageable problems.
Before the monthly meeting, each participating state meets with the facilitator for up to one hour to share their process, challenges, and questions.
The group comes together to share promising practices. Based on the topic and interest, sub-groups may form to collaboratively work on shared challenges.
It publishes promising practices on a public website after a formal clearance process. The emphasis is on collaboration and reusability, and members share exact policy language, forms, and spreadsheets.
The Integrated Benefits Initiative was a collaboration between Code for America, the Center on Budget and Policy Priorities, and Nava Public Benefit Corporation. These organizations partnered with 5 states to pilot faster, more effective, and less expensive ways for people to access critical government services, including SNAP and Medicaid. The fundamentals of this model are called out below.
The pilot cohort was selected for their commitment to innovation — to experiment with new technology and methods, and to work together to consider and prioritize the client experience within their eligibility and enrollment processes. Each of the pilot states were in the middle of eligibility system modernization efforts.
Pilots gave states an important opportunity to leverage human-centered design and agile methods to move faster on one aspect of their eligibility and enrollment process; to gather a wealth of user research; to rapidly prototype, test, and iterate; and ultimately to demonstrate impact on key outcomes to inform future work.
With the goal of creating common modules that could be reused across states, organizations and pilot states shared research and their approaches to building eligibility and enrollment modules so they could move faster.
In the unemployment space, monthly topics could include wage verification for gig workers; collating voice of the customer data; providing claimant-friendly claim status; tracking contact center first call resolution rates; and so on.
Based on successes in complementary spaces — including the examples we provided above — we recommend that U.S. DOL create one or more working groups where state unemployment programs can come together to learn from each other. In the unemployment space, monthly topics could include:
These groups could be hosted by U.S. DOL or by private philanthropy (such as New America).
There are multiple types of shared services currently or potentially in play for unemployment benefits.
A shared service is the only real option for these:
For the following shared services, possible bad outcomes can be balanced against success metrics, benchmark monitoring, and an escape clause for states if those benchmarks aren’t being met consistently.
Here are a couple of examples of services that could be centralized:
The benefits of centralizing a service to a single place can mean improved service and lower cost. But if that shared service is terrible, it also means that everyone is stuck with bad service.
For example, what happens to a shared or centralized unemployment benefits system if a political party takes control that wants to end or limit unemployment benefits? A central eligibility rules engine sounds great if its goal is to expand access to more eligible individuals. It sounds less great if it can then be used to programmatically deny large swaths of people benefits, in one place (instead of having to change the rules in 54 places).
As states embark on, or consider embarking on, multi-state software collaboratives, it’s useful to look at what caused prior unemployment system collaboratives to fail, so as to avoid those pitfalls.
Challenges have included:
Here are a couple of examples of currently successful efforts:
Those involved have attributed success to:
The relationship between U.S. DOL and states is adversarial, at best.27 New U.S. DOL leadership that has come from states and advocacy organizations can make significant strides towards changing this.
Here are some of the concrete opportunities we heard about in our interviews:
In addition to American Rescue Plan (ARP) funds, philanthropy could support the development of claimant-centric, effective unemployment benefits delivery by:
“When asked about their greatest early accomplishments with Recovery Act funding, many states and localities pointed to their rapid start-up of the WIA Summer Youth Program and their ability to place hundreds or thousands of youth in summer jobs so quickly.” (p 20) ↩
From NASWA: “The 20 states analyzed were selected purposely to provide balance and diversity on factors such as population size, region, degree of co-location of Wagner-Peyser labor exchange services and WIA services, unemployment rate, health of the state UI trust fund [Reserve Ratio Multiplier], and UI recipiency rate.” (p viii) ↩
WyCAN was a multi-state unemployment insurance software consortium that included Wyoming, Colorado, Arizona, and North Dakota. The effort began in 2009 with a $62 million grant from the U.S. Department of Labor, in addition to funding from the member states. They teamed up via a cooperative purchasing governance agreement to build a monolithic system that would serve all of their needs. The states’ benefits processes proved too different to be reconciled under a single system, and the work was abandoned, the unspent $47 million returned to the Department of Labor. (p10) https://softwarecollaborative.org/cooperatives/wycan ↩ ↩2
Multiple State agencies also noted that while the original intent of tying USDA technology funding to consortia was to make technology cost-effective, it’s unclear if that intention has actually been met. State agencies that we spoke with who are not members of a consortium shared that their independence made it easier to make necessary and timely changes to their MIS. - link ↩ ↩2
“Iowa left the consortium early on due to concerns over Iowa’s ability to support the underlying .net technology (vs. the Java platform they were using).” https://vermontdailychronicle.com/2020/04/22/scott-pulled-plug-on-troubled-ui-upgrade-then-this-pandemic-hit/ ↩ ↩2
While membership in a consortium allows WIC State agencies to share resources and funding, some State agencies have found that this model can make it difficult to get new features prioritized, and that the pace of development and releases can seriously delay critical improvements to the participant experience. Additionally, members of the consortia we spoke to explained that regardless of caseload or funds contributed, all members have equal voting power, which can be frustrating when needs are different due to a different scale of operations. - https://s3.amazonaws.com/aws.upl/nwica.org/wic-technology-landscape-_-final-report-design.pdf ↩
“Agency of Digital Services Secretary John Quinn’s provided more detail: “The underlying issue is that Idaho is not willing to give up intellectual property rights of the system being developed, and they will not hesitate to act in the best interest of their state regardless of the effect on the consortium or partner states,” he explained to Vermont Daily earlier this week” https://vermontdailychronicle.com/2020/04/22/scott-pulled-plug-on-troubled-ui-upgrade-then-this-pandemic-hit/ ↩
“VT and ND have been beholden to partner state Idaho as both VT and ND do not have the internal development staff to build their own systems. Idaho has been willing to collaborate with VT and ND who pay for Idaho resources involved with system development. Yet, as ID is a sovereign state, VT and ND have little to no recourse to hold ID accountable for the quality or content that is developed nor the timeline in which it is delivered.” https://vermontdailychronicle.com/2020/04/22/scott-pulled-plug-on-troubled-ui-upgrade-then-this-pandemic-hit/ ↩
“Governance problems are well illustrated by the Internet Unemployment System (branded as “iUS”). This small consortium was started by the State of Idaho in 2012, building atop the successful work that Idaho had already done to modernize its unemployment software infrastructure, with Iowa and Vermont also participating. (Iowa later dropped out and was replaced with North Dakota.) The project continued clear through 2019, with Idaho performing the software development work. At the beginning of 2020, Vermont raised the alarm, complaining of governance problems: specifically, Idaho was willing to let other states borrow iUS, but was unwilling to let them make any modifications to it, and naturally prioritized the needs of Idaho over those of Vermont or North Dakota. The governors of the three states tried to resolve these conflicts and, unable to do so, agreed to dissolve the iUS consortium. (This story was recounted by Vermont’s Agency of Digital Services’ Secretary John Quinn, in an April 2020 letter to the Vermont Daily Chronicle.)” (p9) ↩
This project began as a four-state consortium. Tennessee dropped out almost immediately, and Georgia withdrew around six months prior to launch. ↩
South Carolina also solved some challenges, like obtaining official app store listings, that North Carolina was subsequently spared from. ↩
“It’s important that co-ops start small; not 20 members, but 2.” - https://beeckcenter.georgetown.edu/wp-content/uploads/2021/04/Sharing-Government-Software_Final.pdf ↩
It’s also important that co-ops start by solving a small problem. They shouldn’t start by building an entire unemployment insurance claims system. They should start by building a common application form, a common fraud-detection interface, or a shared platform for submission of eligibility documentation. Co-ops should create something valuable that can be implemented rapidly, so that members can learn how to work in this way. (p11) ↩