A Playbook for Improving Unemployment Insurance Delivery

Recommended Way Forward

What We Can Learn from ARRA

In the last recession, the American Recovery and Reinvestment Act (ARRA) established a $7 billion pool for states to modernize their unemployment systems. Applications for these funds were due in August 2011.1 States could use the funds to pay UI benefits directly or to improve their unemployment services programs.

In 2009, states were granted a $500 million fund for unemployment “modernization.”2 These funds3 could be spent on administrative improvements, but they couldn’t be used to pay benefits directly. Over 80% of states reported that they would invest these funds in technology improvements.4

Summary Estimates of State Investments from the $500 Million Recovery Act Grant for UI Administration (data from 19 interview states)5

In retrospect, these millions do not appear to have meaningfully improved any state’s unemployment benefit system or capacity to handle the current increase in unemployment claims. Post-recession surveys revealed that most states cited their greatest resulting wins as “youth summer camps,”6 which weren’t a top concern coming out of the pandemic for anyone we spoke with.

According to NASWA, “Spending funds quickly and in a timely manner was also a challenge frequently cited” regarding modernization funds in the last recession.7

To do better this time, we propose success metrics and demonstration projects.

Instead of attaching funds to process measures, like hours of training delivered or youth summer camps created, U.S. DOL can attach funds to success metrics. It can fund demonstration projects tightly coupled to goals, which in turn can develop the insight and collateral for other states to achieve those same (or better) goals.

Defining Success

Systems respond to metrics and incentives. The more that states, partners, and U.S. DOL can focus attention on claimant-centric success metrics, the better the experience will be for the people who depend on these systems.

Success is about impact

The success of efforts to improve unemployment benefits delivery can’t be measured in hours invested, dollars spent, or lines of COBOL eliminated — success is determined by the impact for real claimants in the real world.

“The DOL regime should include basic measures of success and failure (including adequate customer service) that can be assigned a grade that should be prominently featured on the DOL website to provide transparency to the public and compare the operation of programs across the states.”8 — National Employment Law Project

Success metrics should emphasize service delivery, not fraud prevention.

States must fight fraud. But reactionary measures in the moment, like blocking claimants who use the same address or who switch bank accounts frequently (measures that, in the end, don’t protect against organized crime efforts, either), hurt claimants without truly addressing fraud.

Leaders need to decide whether their goal is to deliver timely, adequate benefit payments to the unemployed, or to keep rewarding nonpayment and “fraud” avoidance at the expense of delivering benefits.

“Part of this increase in erroneous denial has to do with the fact that systems have been over-calibrated to prevent overpayments at the expense of paying appropriate benefits.”9 — National Employment Law Project

The cost of erroneously flagging fraud can be stark:

In the end, MiDAS [Michigan Integrated Data Automated System] flagged nearly 40,000 workers for fraud, in which a staggering 93 percent of those were inaccurate, according to NELP. What’s worse, the penalty for fraud in Michigan is four times the amount paid, plus 12 percent interest; and many of those affected by these measures lost everything. Detroit Attorney Jonathan Marko, who represented Michigan residents in bringing a claim against the state, said: “Some of these people committed suicide. Some lost their homes. Some had to declare bankruptcy.”10 — New America Foundation

We need thoughtfully crafted success metrics that consider claimants, claims processors, and organized crime. And the first measure of success should be, “Did all eligible people receive benefits?” rather than “How many ineligible people received benefits?”11

To provide benefits more equitably, measure what we’re doing wrong.

We know that unemployment benefits aren’t delivered equitably.12 To change this, we need to measure where we’re going wrong. Unemployment programs can learn from the financial industry’s strategies for detecting bias in outcomes:


Excerpt from Fair AI: How to Detect and Remove Bias from Financial Services AI Models

By carefully altering the way different demographic groups are assigned to protected or sensitive classes, and ensuring these groups have equal predictive values and equality across false positive and false negative rates, you can better detect bias in your AI. These five steps can help you detect bias in your algorithms:

  • Ensure all data groups have an equal probability of being assigned to the favorable outcome for a protected/sensitive class.
  • Ensure all groups of a protected/sensitive class have equal positive predictive value.
  • Ensure all groups of a protected/sensitive class have predictive equality for false positive and false negative rates.
  • Maintain an equalized odds ratio, opportunity ratio and treatment equality.
  • Minimize the average odds difference and error rate difference.
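
To make these checks concrete, here is a minimal sketch, with hypothetical field names and a fraud-flagging model as the running example, of how the per-group rates named above could be computed. It illustrates the idea; it is not a production fairness audit.

```python
from collections import defaultdict

def group_rates(records):
    """records: iterable of (group, predicted_fraud, actual_fraud) tuples."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if predicted and actual:
            c["tp"] += 1
        elif predicted:
            c["fp"] += 1
        elif actual:
            c["fn"] += 1
        else:
            c["tn"] += 1
    rates = {}
    for group, c in counts.items():
        rates[group] = {
            # Of the claims flagged as fraud, how many truly were? (positive predictive value)
            "ppv": c["tp"] / max(c["tp"] + c["fp"], 1),
            # How often were legitimate claims wrongly flagged? (false positive rate)
            "fpr": c["fp"] / max(c["fp"] + c["tn"], 1),
            # How often was real fraud missed? (false negative rate)
            "fnr": c["fn"] / max(c["fn"] + c["tp"], 1),
        }
    return rates

# Large gaps in ppv, fpr, or fnr between demographic groups are exactly
# the bias signals the steps above describe.
```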

A critical step for ensuring equitable access to benefits is collecting demographic information.

Recommendation for the federal government

U.S. DOL could create blanket permission and best practices for collecting and measuring consistent demographic information for equitable outcome analysis.

Let’s start measuring outcomes instead of process.

Today, U.S. DOL measures states on the timeliness of first payments, continued claims, nonmonetary adjudication determinations, and appeals, as well as on the quality of adjudication determinations. States track the percentage of:

  • First payments made within 14/21 days
  • Continued claims made within 7 days
  • Continued claims made within 14 days
  • Non-monetary determinations made within 21 days
  • Lower authority appeals decided within 30 days

All successes for the American Recovery and Reinvestment Act (ARRA) referenced training hours delivered, not jobs secured.13 With the American Rescue Plan (ARP), we have an opportunity to refocus success metrics on outcomes, not intermediate steps that may or may not result in desired outcomes.


Excerpt from Government Programs Should Measure How Well They Help People:

“For example, what if we measured by benefits-participation rates and focused on people who are eligible but not enrolled? What if we decided success looked like reducing the billions of federal dollars that are left on the table every year because people don’t realize they’re eligible for benefits programs, or by how close we were to reaching every hungry child in the country and eliminating child hunger?”

“What if we consistently measured customer satisfaction? We’re all asked to leave reviews or give feedback to almost every private-sector company we interact with. Could more government services ask the public how satisfied they are with their experience, or how it might be upgraded in a meaningful way?”

“And what if we measured for efficiency and efficacy? Not in dollars spent by government but in the time it takes to deliver benefits that are sufficient to meet people’s needs.”14


Excerpt from Work Support Strategies Initiative: 12 Lessons on Program Integration and Innovation:

“When data are connected to vision, conversations become customer- and family-centered. SNAP and Medicaid participation rates won’t just be numbers; they’ll instead indicate whether families are accessing the full package of benefits for which they are eligible. The new set of questions may be: How quickly do families get benefits? How much delay do they experience between each step of their process? How many eligible families have their cases closed for administrative reasons and experience churn at redetermination? How are customers’ experiences and access to programs changing over time? Are we providing consistent customer service no matter which office someone walks into? Data should be used to illustrate and examine customer service and the agency’s impact on families.”15


Write metrics that measure the what, not the how.

Success metrics should drive behaviors we want, instead of explicitly defining the “how.”

Instead of saying “use cloud computing,” say “have an SLA of 99.9%.”

Requiring a service level agreement (SLA) of 99.9% effectively requires cloud, but it sets a much clearer expectation. This performance-based expectation avoids the unfortunate circumstance of moving an aged system into the cloud and still having 6 hours of downtime per day.
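
For a sense of scale, here is a quick back-of-the-envelope sketch (plain Python arithmetic, with illustrative SLA levels) of how little downtime a 99.9% SLA actually permits, compared with the six-hours-a-day failure mode described above.

```python
# How much downtime does a given uptime SLA allow per year?
HOURS_PER_YEAR = 24 * 365

for uptime in (0.999, 0.99, 0.75):
    downtime = HOURS_PER_YEAR * (1 - uptime)
    print(f"{uptime:.1%} uptime allows ~{downtime:,.0f} hours of downtime per year")

# 99.9% uptime allows roughly 9 hours of downtime per YEAR.
# A system that is down 6 hours per day is only about 75% uptime,
# roughly 2,190 hours of downtime per year, which no 99.9% SLA permits.
```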

Here are some other examples of putting the what before the how:

  • In order to know its application abandonment rate, a state has to have website instrumentation.
  • In order to get same-day payments to a population that speaks 10 different languages, a state will need to develop effectively transadapted materials.
  • To keep its recertification drop-off rate low, a state has to proactively remind people to recertify.

Consider up front how each metric will be monitored and reported.

A third party must be able to validate adherence to any given success metric, preferably in real time, not through a manually generated report submitted once a year. Key metrics also need to be re-evaluated after every functionality change to monitor for unforeseen (or hoped-for!) impact.

These proposed new success metrics focus on service delivery.

Possible claimant-centric success metrics include (a minimal measurement sketch follows this list):

  • Digital application abandonment rate
  • First contact resolution rate
  • Average hold time
  • End-to-end claim automation (percentage of claims that can be decided correctly instantly or, if batch jobs are in play, within 24 hours, with zero human intervention)
  • Error rate
  • Recertification abandonment rate
  • Appeals rate
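
As a minimal sketch of what measuring these could look like, assuming hypothetical event shapes rather than any state’s actual schema, here is how two of the metrics above might be computed from instrumented data:

```python
from collections import defaultdict

def abandonment_rate(sessions):
    """sessions: dicts with 'started' and 'submitted' booleans per application session."""
    started = [s for s in sessions if s["started"]]
    if not started:
        return 0.0
    return sum(1 for s in started if not s["submitted"]) / len(started)

def first_contact_resolution_rate(contacts):
    """contacts: dicts with 'claimant_id' and 'resolved' per contact.
    A claimant counts toward FCR only if their first contact was resolved
    and they never had to contact the agency again."""
    by_claimant = defaultdict(list)
    for contact in contacts:
        by_claimant[contact["claimant_id"]].append(contact)
    if not by_claimant:
        return 0.0
    fcr = sum(1 for cs in by_claimant.values() if len(cs) == 1 and cs[0]["resolved"])
    return fcr / len(by_claimant)
```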

Recommended required reading

Code for America’s “The Status Quo of Safety Net Assessment” should be required reading for anyone considering new unemployment-related success metrics.

There’s currently nothing to discourage employers from contesting all unemployment claims.16 We recommend that U.S. DOL introduce a counter-measure that penalizes employers who contest too many claims that are ultimately decided in the claimant’s favor.

Strike Teams

This section was written in collaboration with Cassandra Madison of The Tech Talent Project and the California Unemployment Strike Team.

Many states’ unemployment insurance issues are rooted in similar problems — technical systems, business processes, and staffing models that aren’t built to withstand crisis-level demand, and limited or no shared understanding of data. These underlying problems also manifest in similar ways across states, creating huge backlogs that leave Americans waiting for financial support when they are most vulnerable and systems wracked by fraud, costing millions of dollars.

Strike teams tasked with improving unemployment insurance programs design and implement strategies to quickly fix the root causes of systemic failures.

Strike teams need to understand each state’s challenges.

Groups looking to drive local progress will need to understand the specifics of each state’s architectural, cultural, staffing, and vendor challenges. This is particularly true for any group trying to help from the outside.

Investing the time to understand each state’s challenges through upfront discovery work will increase the chances of success in the short term, building trust with teams on the ground while maintaining the executive alignment needed to spur longer-term progress.

Use a phased approach to rolling out strike teams.

While the incremental roadmap to improvement will look different in each state, there are likely to be clusters of states with similar problems, opening up the possibility for shared technical approaches and/or teams across states and perhaps even the development of key shared services at the federal level. The discovery work will be critical in illuminating these patterns and associated opportunities.

With this in mind, we recommend launching a phased approach to the strike force rollout that’s rooted in the discovery sprint process. Taking this approach will provide valuable insights and allow U.S. DOL to make meaningful progress quickly, while remaining nimble and responsive to the information that emerges.

Phase 0: Do a landscape assessment.

Gather basic information on the processing systems in each state, so that states can be sorted into rough technology categories (“mainframes” vs “Oracle databases,” and so on). This could be done over email or through one structured interview per state. We recommend some questions here.

Phase 1: Pilot with 3 to 5 states.

Pick no more than 5 states that represent the diversity of systems and sizes (based on Phase 0), and launch a 4-week discovery process in each state to help identify key issues, pain points, root causes, and key players. This information will be used to craft a prioritized roadmap of both quick wins and longer-term needs and to identify the specialized skill sets needed by implementation teams on the ground. This will also help U.S. DOL identify common issues across states, allowing for potential collaboration and the development of 2 to 3 central solutions.

For issues that won’t be addressed with a shared service, these initial assessments can lay the groundwork for how U.S. DOL will award and measure the success of individual state grants. There should be an emphasis on defining and measuring outcomes such as payment timeliness, first contact resolution rates, and elastic claim processing capacity.

Phase 2: Expand to an additional 5 states.

Use the information gathered in Phase 1 to inform the Phase 2 approach and the initial resources allocated to the discovery process. We recommend waiting to launch Phase 2 until meaningful work is underway to improve service in Phase 1 states, as that work will need to continue as the Phase 2 discovery work is launched.

Insight gathering should be continuous across both Phase 1 and Phase 2, with the information used to iterate on the roadmap, approach, and team composition.

Recommendation for the federal government

We recommend that U.S. DOL deploy floating implementation teams to support the work of strike teams. These teams would work under the direction of each state team to solve specific problems.

As the discovery sprint work progresses in multiple states, you’ll get a better idea of what kinds of teams are needed. There are likely to be similar technical and business process problems faced across states, opening up the possibility that support teams and solutions can be shared.

While we can speculate on what at least a few of these teams may be, any decisions about team composition or focus should wait until at least the first phase of discovery sprints is underway and there’s an early roadmap.

Integrated Command Center

A reliable integrated command center provides a mechanism for making strategic, timely decisions — both in a crisis and during calmer times. To be effective, the integrated command center needs to be highly visible across the organization and have the teeth to make decisions and changes rapidly.

Stories from the field

California

California has been running a successful integrated command center on a weekly basis since Fall 2020. Every week, the team gets together to:

  • Review available data on the backlog (including the top reasons associated with backlogged claims)
  • Review “voice of the customer” data, which is a compilation of the top reasons for phone calls, electronic messages, and social media complaints
  • Identify 2 top issues to address based on the data
  • Hypothesize a root cause for each of the 2 issues and agree on solutions to root causes
  • Review past solutions to see how they’re working and what needs adjusting

Rhode Island

If you don’t have enterprise-wide data visibility or the support for an integrated command center just yet, you can start on a smaller scale. Or you could start with one data source and what you can learn from it. Rhode Island regularly reviews the 400 codes that kick applications out of the automated process for “clean claims” and into manual review; this helps them identify opportunities to increase automation (a tallying sketch follows below).

Or you could start with a weekly meeting focused on the top 2 issues that are known to you, as well as on how to develop more robust data capability.
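
A review like Rhode Island’s can start as simply as tallying which exception codes most often knock claims out of automated processing, so the most common ones can be targeted first. A minimal sketch, with hypothetical field names:

```python
from collections import Counter

def top_exception_codes(claims, n=10):
    """claims: dicts, each with an 'exception_codes' list naming the codes
    that kicked that claim out to manual review (empty for clean claims)."""
    tally = Counter()
    for claim in claims:
        tally.update(claim.get("exception_codes", []))
    # The most frequent codes are the best candidates for automation work.
    return tally.most_common(n)
```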

Read best practices for running an integrated command center from the California Strike Team.

Demonstration Projects / Pilots

State demonstration projects (pilots) are an excellent way to determine the “art of the possible” for reaching specific metrics and milestones. For example, a state may want to pilot:

  • Same-day unemployment payments
  • A new, largely-automated determination process
  • A claim status tracker to increase first contact resolution rates and reduce contact center volume

Demonstration projects don’t have to be huge.

There are smaller-scale projects needed, too, like:

  • Creating an easier-to-understand list of separation reasons
  • Automating 1099 wage verification
  • Ensuring that everyone’s name is valid on unemployment applications

Projects ideally result in shareable information, strategies, and tools.

A demonstration project could result in shareable code, but is more likely to result in other forms of shared collateral, such as:

  • Reusable, non-code components that can be developed, rapidly tested, and iterated on
  • More helpful categories for tracking call reason in a customer support center
  • The “right” way to capture applicant names to accommodate special characters, people with no last name, etc.
  • Ways to phrase common application questions that increase the percentage of people who are able to answer them successfully
  • The ratio of people who prefer (hypothetically) a chatbot to asynchronous contact from a call center, given all the options (this can inform staffing projections)
  • A rewritten claimant letter that more people can easily understand
  • An optimized list of claim statuses
  • A repeatable way to track equitable outcomes across race, ethnicity, gender, age, etc. across existing forms and processes
  • Procurement language
  • A benchmark for a best-possible first contact resolution rate

The best places for demonstration projects are states that want to be pilot sites and are fully invested in determining the art of the possible for increasing access and service to claimants. This approach pairs nicely with the strangler pattern, with a working group as a convenient way to disseminate successes.

Story from the field

Texas WIC shared that when they introduce new ideas and features to their technology vendors, they have the vendor team share no-code mockups or prototypes and explain how the feature would impact operations before any code is written. This enables the WIC program to better understand and direct the development efforts and ensure the final product meets their needs.

Recommendation for the federal government

From President Biden’s Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the Federal Government: “As part of this study, the Director of OMB shall consider whether to recommend that agencies employ pilot programs to test model assessment tools and assist agencies in doing so.” U.S. DOL and/or private philanthropy could fund specific demonstration projects related to specific goals.

Beyond the specific problems suggested above, states can also pilot improvements to specific unemployment modules.

Possible demonstration pilots outlined in this report include:

  • A central unemployment account where anyone can check their current balance and/or fix errors, without having to file a claim
  • A central federal unemployment eligibility rules engine that states could consume to apply rules consistently (a minimal sketch follows this list)
  • Determining the optimal place for the federally-compliant identity verification process to live
  • Measuring equitable outcomes for identity verification
  • IRS wage verification for W2 employees
  • Wage verification for gig workers
  • Optimized applicant experience
  • List of easy-to-understand reasons for separation
  • Measuring payment provider performance
  • Same-day unemployment benefits delivery
  • Measuring first contact resolution
  • Optimal contact center taxonomy and routing logic
  • Employer-filed claims best practices
  • Automating and speeding up employer verification
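
To make one of these pilots concrete, here is a minimal sketch of the shape a central eligibility rules engine could take. Every name, field, and threshold below is a hypothetical illustration, not a proposed federal standard; the idea is versioned shared rules that return a decision plus machine-readable reasons, with state-specific thresholds layered on top.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    base_period_wages: float
    weeks_worked: int
    separation_reason: str  # e.g., "laid_off", "lack_of_work", "quit"

@dataclass
class Decision:
    eligible: bool
    reasons: list = field(default_factory=list)  # machine-readable denial reasons

RULES_VERSION = "2021.04"  # states pin a version, so rule changes are auditable

def evaluate(claim: Claim, min_wages: float = 2500.0, min_weeks: int = 20) -> Decision:
    """Apply shared baseline rules; states supply their own thresholds."""
    reasons = []
    if claim.base_period_wages < min_wages:
        reasons.append(f"base period wages below {min_wages}")
    if claim.weeks_worked < min_weeks:
        reasons.append(f"fewer than {min_weeks} weeks worked")
    if claim.separation_reason not in ("laid_off", "lack_of_work"):
        reasons.append("separation reason requires adjudication")
    return Decision(eligible=not reasons, reasons=reasons)
```

Returning reasons, not just a yes/no, is what would let states explain determinations to claimants and let auditors measure equitable outcomes across groups.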

Cohort Identification

When deciding which states to work with on demonstration projects, in working groups, or in other contexts, it can be helpful to have a framework by which to group them into cohorts based on different topics and dimensions. Conducting an ecosystem survey can help identify some patterns and groupings.

A cohort isn’t inherently “good” or “bad.”

A cohort isn’t meant to be “bad” or “good,” but merely a useful grouping.17 For example, if you were interested in running a demonstration project to develop a more useful contact center taxonomy, you probably wouldn’t want to start with a state that doesn’t have a contact center.

There are 4 major categories of states that need help.

While states are experiencing similar UI problems, the political/cultural landscape and level of digital maturity will vary from state to state.

Digital leaders

These are obvious first choice states because they have some modern technical expertise, technical resources, and a high commitment and openness to doing things differently. We recommend these as a Phase 1 cohort, because there will be fewer cultural barriers and they’ll be able to make more progress quickly.

Early adopters

These states are in high need, eager for help, and open to doing things differently. But they differ from digital leaders, because they have little to no experience with modern tech principles and practices. Progress may be slower, but there is a solid opportunity to build the foundation for more lasting institutional change. We also recommend early adopters as Phase 1.

Reluctant adopters

States in this cohort are in high need and “open to help,” but they dislike disruption. Key leadership may tell U.S. DOL that they’re committed, but express skepticism to their teams privately. They can also create barriers to system access, slow-walk new initiatives, or find a reason to end the partnership early.

To be successful in these states, U.S. DOL will need demonstrated commitment at key executive levels on the business and tech side (CIO and labor leadership). Recognize that longer-term change may or may not be possible here, and place them in Phase 2.

Openly hostile

These states don’t want help and are taking action to undermine or disrupt benefit programs. We don’t recommend providing assistance to these states at this time.

There are additional cohort variables to consider.

Other cohort variables could include:

  • Presence or absence of a specific UI program (such as Short-Time Compensation)
  • In-house IT teams vs vendor agreements (which may be harder to modify for a pilot)
  • Political environment (e.g., divided government)
  • Minimum wage laws
  • Unemployment rates
  • Unemployment claim volume
  • Size18

CLASP’s State Benefits Readiness Assessment Interim Framework is also useful for designing cohorts.

Working Groups

Working groups are an effective strategy for providing a safe, practical place to troubleshoot challenges and share solutions.

Working groups offer opportunities.

States that participated in the Center for Law and Social Policy’s Work Support Strategies (WSS) Initiative credited participation in multi-state learning communities as a main driver of success. According to participants in the WSS, working groups can provide opportunities to:

  • Reduce isolation — Regular interaction between people who do similar work can be motivating. It can be easier to deal with setbacks and press for change with a network.
  • Have constructive conversations — When work group meetings are led by skilled facilitators, they provide the opportunity for structured, goal-oriented discussions.
  • Learn from others — WSS members reported learning from professionals from different agencies within their own states, as well as from practices in other states.

There are successful working groups to learn from.

New America’s Child Welfare Working Group and the Integrated Benefits Initiative offer 2 promising models.

New America’s Child Welfare Working Group

This group of 18 states is generating a collection of promising practices and sharing resources. The following guidelines are the foundation of their success:

Monthly topics are distinct and well defined.

This allows the group to stay focused on manageable problems.

Participants do their homework.

Before the monthly meeting, each participating state meets with the facilitator for up to one hour to share their process, challenges, and questions.

Meetings focus on solutions.

The group comes together to share promising practices. Based on the topic and interest, sub-groups may form to collaboratively work on shared challenges.

The group contributes to the field quickly and iteratively.

It publishes promising practices on a public website after a formal clearance process. The emphasis is on collaboration and reusability, and members share exact policy language, forms, and spreadsheets.

The Integrated Benefits Initiative

The Integrated Benefits Initiative was a collaboration between Code for America, the Center on Budget and Policy Priorities, and Nava Public Benefit Corporation. These organizations partnered with 5 states to pilot faster, more effective, and less expensive ways for people to access critical government services, including SNAP and Medicaid. The fundamentals of this model are called out below.

Pilot states were ready to work differently.

The pilot cohort was selected for its commitment to innovation — to experimenting with new technology and methods, and to working together to consider and prioritize the client experience within eligibility and enrollment processes. Each of the pilot states was in the middle of eligibility system modernization efforts.

They started small and experimented with new practices together.

Pilots gave states an important opportunity to leverage human-centered design and agile methods to move faster on one aspect of their eligibility and enrollment process; to gather a wealth of user research; to rapidly prototype, test, and iterate; and ultimately to demonstrate impact on key outcomes to inform future work.

They shared research and approaches early and often.

With the goal of creating common modules that could be reused across states, organizations and pilot states shared research and their approaches to building eligibility and enrollment modules so they could move faster.


Recommendation for the federal government

Based on successes in complementary spaces — including the examples we provided above — we recommend that U.S. DOL create one or more working groups where state unemployment programs can come together to learn from each other. In the unemployment space, monthly topics could include:

  • Wage verification for gig workers
  • Collating voice of the customer data
  • Providing claimant-friendly claim status
  • Tracking contact center first call resolution rates

These groups could be hosted by U.S. DOL or by private philanthropy (such as New America).

Shared Services

There are multiple types of shared services currently or potentially in play for unemployment benefits.

Each category of shared services offers its own pros and potential cons.

1. Verifying information from a single primary source

A shared service is the only real option for these:

2. Sharing data across unemployment systems

For the following shared services, possible bad outcomes can be balanced against success metrics, benchmark monitoring, and an escape clause for states if those benchmarks aren’t being met consistently.

3. Providing a central service that multiple states can use

Here are a couple of examples of services that could be centralized:

Centralizing a service in a single place can mean improved service and lower cost. But if that shared service is terrible, it also means that everyone is stuck with bad service.

When considering a shared service, ask these questions first.

  • What is the definition of success? How will it be measured over time? How can participants see progress against these benchmarks?
  • How will participating states hold the system accountable to benchmarks?
  • Who is best suited to achieve these outcomes?
  • How will changes be decided and prioritized? What happens when states disagree?
  • What happens when states need state-specific functionality?
  • Who pays?

Weigh potential longer-term consequences.

For example, what happens to a shared or centralized unemployment benefits system if a political party takes control that wants to end or limit unemployment benefits? A central eligibility rules engine sounds great if its goal is to expand access to more eligible individuals. It sounds less great if it can then be used to programmatically deny large swaths of people benefits, in one place (instead of having to change the rules in 54 places).

Learn from prior challenges to sharing services.

As states embark on, or consider embarking on, multi-state software collaboratives, it’s useful to look at what caused prior unemployment system collaboratives to fail, so as to avoid those pitfalls.

Challenges have included:

  • Inability to resolve eligibility requirements across member states16
  • Difficulty prioritizing new features fairly19
  • Questions over whether a consortium model actually saved money17
  • Inability to support the underlying technology18
  • Disputes over intellectual property20
  • Inability to hold the project accountable21 22

Understand successful shared services.

Here are a couple of examples of currently successful efforts:

  • North Carolina reports a successful partnership with South Carolina23 in co-creating a shared unemployment benefits platform. The two states trade off responsibility for leading development of particular modules. For example, South Carolina led the creation of a mobile application for recertification and claim status. Once it was ready, North Carolina only had to review the file specifications in order to adopt it.24
  • Multiple states reported finding the Bureau of Labor Statistics shared technology services to be positive.

Those involved have attributed success to:

  • Starting small, with just a couple of states25
  • Starting with a concrete, tractable problem26

State / US DOL Relationship

The relationship between U.S. DOL and states is adversarial, at best.27 New U.S. DOL leadership that has come from states and advocacy organizations can make significant strides towards changing this.

U.S. DOL can improve its relationships with states.

Here are some of the concrete opportunities we heard about in our interviews:

Opportunities for Philanthropy

In addition to American Rescue Plan (ARP) funds, philanthropy could support the development of claimant-centric, effective unemployment benefits delivery by:

  • Funding demonstration projects with key states with whom they are aligned on the given mission
  • Starting a working group cohort to rapidly develop solutions to thematic challenges at a regular cadence, alongside larger demonstration projects
  • Conducting a field blueprint to rapidly identify themes and patterns across states, to feed into cross-state solutions, identify partners for demonstration projects, and surface projects for the working group
  • Funding an Algorithmic Justice League audit of identity verification vendors used in unemployment



Notes

  1. Each state’s share was based on its proportionate share of FUTA taxable wages multiplied by the $500 million. Most state laws require appropriation of these funds by the state legislature. (p. 171)

  2. https://www.naswa.org/system/files/2021-03/usdolreleasesnaswareport.pdf (p. 206)

  3. https://www.naswa.org/system/files/2021-03/usdolreleasesnaswareport.pdf (p. 208)

  4. “When asked about their greatest early accomplishments with Recovery Act funding, many states and localities pointed to their rapid start-up of the WIA Summer Youth Program and their ability to place hundreds or thousands of youth in summer jobs so quickly.” (p. 20)

  5. https://www.nelp.org/publication/from-disrepair-to-transformation-how-to-revive-unemployment-insurance-information-technology-infrastructure/ 

  6. https://www.nelp.org/publication/nelp-testimony-michele-evermore-michigan-unemployment-claims-processing/ 

  7. https://www.newamerica.org/pit/reports/unpacking-inequities-unemployment-insurance/the-power-of-employers/ 

  8. https://www.codeforamerica.org/programs/insight-and-impact/scorecard/status-quo/ 

  9. https://www.newamerica.org/pit/reports/unpacking-inequities-unemployment-insurance/ 

  10. https://www.clasp.org/publications/report/brief/policy-recommendations-fight-poverty-hunger-health 

  11. https://www.governing.com/now/government-programs-should-measure-how-well-they-help-people.html 

  12. https://www.clasp.org/sites/default/files/publications/2017/04/WSS_Lessons_4.1.16-.pdf

  13. https://www.newamerica.org/pit/reports/unpacking-inequities-unemployment-insurance/the-power-of-employers/ 

  14. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/ 

  15. From NASWA: “The 20 states analyzed were selected purposely to provide balance and diversity on factors such as population size, region, degree of co-location of Wagner-Peyser labor exchange services and WIA services, unemployment rate, health of the state UI trust fund [Reserve Ratio Multiplier], and UI recipiency rate.” (p. viii)

  16. WyCAN was a multi-state unemployment insurance software consortium that included Wyoming, Colorado, Arizona, and North Dakota. The effort began in 2009 with a $62 million grant from the U.S. Department of Labor, in addition to funding from the member states. They teamed up via a cooperative purchasing governance agreement to build a monolithic system that would serve all of their needs. The states’ benefits processes proved too different to be reconciled under a single system, and the work was abandoned, with the unspent $47 million returned to the Department of Labor. (p. 10) https://softwarecollaborative.org/cooperatives/wycan

  17. Multiple State agencies also noted that while the original intent of tying USDA technology funding to consortia was to make technology cost-effective, it’s unclear if that intention has actually been met. State agencies that we spoke with who are not members of a consortium shared that their independence made it easier to make necessary and timely changes to their MIS. - link

  18. “Iowa left the consortium early on due to concerns over Iowa’s ability to support the underlying .net technology (vs. the Java platform they were using).” https://vermontdailychronicle.com/2020/04/22/scott-pulled-plug-on-troubled-ui-upgrade-then-this-pandemic-hit/

  19. While membership in a consortium allows WIC State agencies to share resources and funding, some State agencies have found that this model can make it difficult to get new features prioritized, and that the pace of development and releases can seriously delay critical improvements to the participant experience. Additionally, members of the consortia we spoke to explained that regardless of caseload or funds contributed, all members have equal voting power, which can be frustrating when needs are different due to a different scale of operations. - https://s3.amazonaws.com/aws.upl/nwica.org/wic-technology-landscape-_-final-report-design.pdf 

  20. “Agency of Digital Services Secretary John Quinn provided more detail: “The underlying issue is that Idaho is not willing to give up intellectual property rights of the system being developed, and they will not hesitate to act in the best interest of their state regardless of the effect on the consortium or partner states,” he explained to Vermont Daily earlier this week.” https://vermontdailychronicle.com/2020/04/22/scott-pulled-plug-on-troubled-ui-upgrade-then-this-pandemic-hit/

  21. “VT and ND have been beholden to partner state Idaho as both VT and ND do not have the internal development staff to build their own systems. Idaho has been willing to collaborate with VT and ND who pay for Idaho resources involved with system development. Yet, as ID is a sovereign state, VT and ND have little to no recourse to hold ID accountable for the quality or content that is developed nor the timeline in which it is delivered.” https://vermontdailychronicle.com/2020/04/22/scott-pulled-plug-on-troubled-ui-upgrade-then-this-pandemic-hit/ 

  22. “Governance problems are well illustrated by the Internet Unemployment System (branded as “iUS”). This small consortium was started by the State of Idaho in 2012, building atop the successful work that Idaho had already done to modernize its unemployment software infrastructure, with Iowa and Vermont also participating. (Iowa later dropped out and was replaced with North Dakota.) The project continued clear through 2019, with Idaho performing the software development work. At the beginning of 2020, Vermont raised the alarm, complaining of governance problems: specifically, Idaho was willing to let other states borrow iUS, but was unwilling to let them make any modifications to it, and naturally prioritized the needs of Idaho over those of Vermont or North Dakota. The governors of the three states tried to resolve these conflicts and, unable to do so, agreed to dissolve the iUS consortium. (This story was recounted by Vermont’s Agency of Digital Services’ Secretary John Quinn, in an April 2020 letter to the Vermont Daily Chronicle.)” (p. 9)

  23. This project began as a four-state consortium. Tennessee dropped out almost immediately, and Georgia withdrew around six months prior to launch. 

  24. South Carolina also solved some challenges, like obtaining official app store listings, that North Carolina was subsequently spared from. 

  25. “It’s important that co-ops start small; not 20 members, but 2.” - https://beeckcenter.georgetown.edu/wp-content/uploads/2021/04/Sharing-Government-Software_Final.pdf 

  26. It’s also important that co-ops start by solving a small problem. They shouldn’t start by building an entire unemployment insurance claims system. They should start by building a common application form, a common fraud-detection interface, or a shared platform for submission of eligibility documentation. Co-ops should create something valuable that can be implemented rapidly, so that members can learn how to work in this way. (p. 11)

  27. https://usdr.gitbook.io/unemployment-insurance-modernization/ui-journey-map/the-agency-journey/relationship-with-us-dol 

© 2023 NEW AMERICA

Supported by

The Families & Workers Fund