Every two years, the U.S. Government Accountability Office (GAO) identifies, lists, and ranks “government operations with vulnerabilities to fraud, waste, abuse, and mismanagement, or in need of transformation to address economy, efficiency, or effectiveness challenges” in its “High-Risk List.” This comprehensive report shines a direct spotlight on vulnerable government programs for the public and for Offices of Inspector General (OIGs), and Congress and OIGs can use it to direct their own investigations.
A regular, comprehensive report covering the IG community could be similarly useful. A ranking or ratings system evaluating OIGs would be an important element of any such report. Such a system could: (1) direct congressional focus toward underperforming OIGs, (2) allow OIGs to compare their practices and adopt more effective ones, and (3) give the American public insight into the effectiveness of their government watchdogs. Of course, any ranking or ratings system could have unintended negative consequences without clear rules, context, and criteria.
This post is a first attempt to lay out a framework for developing a ranking or ratings system for OIGs and to explain its purpose. The key criteria are inspired by the Brookings Institution’s report on the impact of political appointees on Inspector General efficacy. Set forth below are preliminary criteria for assessing Inspector General performance. Feedback is welcome as this research project continues.
Detecting and preventing waste, fraud and abuse
The most important criterion in any ranking report should be effectiveness at deterring waste, fraud, and abuse. OIGs are tasked in part with “prevent[ing] and detect[ing] waste, fraud, and abuse relating to their agency’s programs and operations.” However, there is little reporting on how effective individual OIGs are at this task. This may be in part because of how difficult it is to fairly quantify this aspect of OIGs’ jobs. Certain OIGs will prevent and detect more “absolute” waste than others, in part because certain agencies are given more funds and leeway than others (for example, the HHS OIG oversees an agency with a budget of almost $1.3 trillion, while the Denali Commission OIG oversees an agency with a budget of $46 million, roughly 0.0035% of HHS’s budget). While an OIG’s effectiveness at detecting and preventing waste, fraud, and abuse should not be a rigid measurement, it should carry significant weight. Among other criteria, any ranking would have to account for the potential for waste, fraud, or abuse within an OIG’s agency, the OIG’s detection rate, and the OIG’s enforcement rate. How to measure these elements and how much weight to give them should be a subject of further research.
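To make the measurement problem concrete, here is a minimal sketch of how a composite score built from a detection rate and an enforcement rate might be computed. Everything in it is hypothetical: the inputs, the two rates, the weights, and the example figures are illustrative placeholders, not metrics or weights this post proposes.

```python
# Illustrative sketch only: the metrics, weights, and figures below are
# hypothetical placeholders, not measurements proposed in this post.
from dataclasses import dataclass

@dataclass
class OigInputs:
    agency_budget: float      # annual agency budget, in dollars
    estimated_at_risk: float  # estimated dollars vulnerable to waste, fraud, or abuse
    detected: float           # dollars of waste, fraud, or abuse the OIG identified
    enforced: float           # detected dollars that led to recoveries or enforcement

def detection_rate(x: OigInputs) -> float:
    """Share of estimated at-risk dollars the OIG actually detected."""
    return x.detected / x.estimated_at_risk if x.estimated_at_risk else 0.0

def enforcement_rate(x: OigInputs) -> float:
    """Share of detected dollars that resulted in enforcement or recovery."""
    return x.enforced / x.detected if x.detected else 0.0

def composite_score(x: OigInputs, w_detect: float = 0.6, w_enforce: float = 0.4) -> float:
    """Weighted blend of the two rates; the weights here are arbitrary."""
    return w_detect * detection_rate(x) + w_enforce * enforcement_rate(x)

# Example: a hypothetical OIG overseeing a $10 billion agency with $1 billion at risk.
example = OigInputs(agency_budget=10e9, estimated_at_risk=1e9,
                    detected=400e6, enforced=250e6)
print(round(composite_score(example), 3))  # 0.49
```

Even this toy version makes the open questions visible: the “estimated at-risk” denominator is itself an estimate, and the weights are exactly the kind of judgment call the further research mentioned above would need to settle.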
Promoting economy and efficiency
Promoting economy and efficiency should be another key component of ranking IG effectiveness. Congress has tasked OIGs with “promoting economy and efficiency in the administration of [agency] programs and operations.” As with the previous criterion, this may be difficult to fairly quantify. Some agencies are currently more efficient than others, and an OIG that exposes inefficiencies may hurt its grade relative to other OIGs if its agency does not accept the OIG’s suggestions. Take, for example, a scenario in which Agency A and Agency B both have $10 billion budgets and both waste $1 billion, but only Agency A’s OIG reports the $1 billion in waste. The ranking could mistakenly assume that Agency B has no waste, and thus award Agency B’s OIG a higher rank for enforcing efficiency. Alternatively, Agency A and Agency B could both have $10 billion budgets, but Agency A wastes $1 billion while Agency B wastes nothing. Both Agency A’s and Agency B’s OIGs are diligent, but Agency A’s OIG reports the $1 billion in waste while Agency B’s OIG, rightly, reports none. Ranking Agency A’s OIG as more effective than Agency B’s would not be appropriate. Another question is how to rank OIGs whose agencies have different amounts of waste, both in gross quantity and as a percentage of overall budget. Any ranking would have to account for the OIG’s timetable for resolving open recommendations, the effectiveness of implemented recommendations, and/or the OIG’s rate of progress against its reported benchmarks. These issues, along with many others, must be adequately addressed in any rankings system.
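The first scenario above can be illustrated with a short sketch. The figures and the naive “less reported waste is better” metric are hypothetical; the point is only that a metric blind to undetected waste rewards the OIG that found nothing.

```python
# Illustrative sketch only: shows how a naive "less reported waste = better"
# metric misranks the two hypothetical OIGs described above.
agencies = {
    # (budget, waste actually present, waste the OIG reported)
    "Agency A": (10e9, 1e9, 1e9),  # diligent OIG at a wasteful agency
    "Agency B": (10e9, 1e9, 0.0),  # OIG that missed the same $1 billion in waste
}

def naive_score(budget: float, reported: float) -> float:
    """Naively treats a lower reported-waste share of budget as better performance."""
    return 1.0 - reported / budget

for name, (budget, actual_waste, reported) in agencies.items():
    # actual_waste never enters the score, which is exactly the flaw.
    print(name, round(naive_score(budget, reported), 2))
# Agency A 0.9
# Agency B 1.0   <- the OIG that found nothing scores higher, which is backwards
```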
Accessibility
Accessibility of OIG work product should be the final pillar of any OIG effectiveness ranking. Congress has set a floor on accessibility by requiring semiannual reports, but there are many ways to improve on it. OIG reports should be clearly written and adopt easy-to-read graphic elements. OIG websites should be easy to navigate and search. This grade may be easier to fairly quantify, as accessibility is less reliant on the OIGs’ agencies and more a direct measure of the OIGs themselves. Accessibility could be measured with a standardized scoring rubric, using criteria such as website usability and vocabulary accessibility. Of course, certain OIGs have more funding than others, and those OIGs may have an advantage. This would have to be taken into account, perhaps by separating the rankings for larger and smaller agencies.
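As a rough illustration of what a standardized rubric might look like, here is a minimal sketch with made-up criteria, weights, and point scales; none of these values come from this post or from any existing rubric.

```python
# Illustrative sketch only: a hypothetical accessibility rubric with made-up
# criteria and weights, not a rubric proposed in this post.
RUBRIC = {
    # criterion: (weight, max points)
    "report_readability":       (0.35, 5),  # plain language, clear graphics
    "website_navigation":       (0.35, 5),  # easy to browse and search reports
    "vocabulary_accessibility": (0.30, 5),  # jargon defined or avoided
}

def accessibility_score(scores: dict) -> float:
    """Weighted average of rubric scores, normalized to a 0-100 scale."""
    total = 0.0
    for criterion, (weight, max_points) in RUBRIC.items():
        total += weight * (scores[criterion] / max_points)
    return round(100 * total, 1)

# Example scoring for a hypothetical OIG's website and report set.
print(accessibility_score({
    "report_readability": 4,
    "website_navigation": 3,
    "vocabulary_accessibility": 5,
}))  # 79.0
```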
Conclusion
A consistent report ranking OIG performance might help illustrate which OIGs have room for improvement or require additional resources, and highlight best practices among the various offices. A downside to numerical rankings, of course, is that they can be over-interpreted and relied upon to the exclusion of more qualitative factors. Still, such a system would make it easier for congressional committees to determine how the OIGs they oversee are performing relative to their peers, and would support efforts to enhance effectiveness and performance across the government. This is only a preliminary suggestion; feedback and reader input would be most welcome.