The Department of Homeland Security quietly released its annual report on the National Network of Fusion Centers several days ago on the Department’s website, available at this link. This is the fourth consecutive year that the Department has produced and issued such a report, measuring the relative maturity of the 78 state and local fusion centers with respect to a defined set of Critical Operational Capabilities and Enabling Capabilities. (Previous years’ reports are available at this link).
The latest report provides a rich, updated overview of the current state of the national network of fusion centers. I won’t try to summarize the full report, but instead would note my three key takeaways:
1. Fusion center costs. The total operational cost of the national network of fusion centers was calculated at $328.3 million for 2014, of which $184.8 million (56.3%) was funded by state and local dollars. A smaller amount – $143.7 million (roughly 43.8%) – came from federal coffers, either directly in the form of federal staff deployed at fusion centers, or indirectly through homeland security grants.
This reality of balanced cost-sharing between the federal, state, and local levels is a strong counterpoint to the occasional budget-related critiques of the fusion centers. It's also worth noting that this annual federal investment in state and local fusion centers is minuscule in comparison with other federal homeland security and counterterrorism activities, adding up to around 0.24% of the annual DHS budget or 0.27% of the annual National Intelligence Program budget. That is a proportionally small investment in enhancing the capabilities and awareness of state and local entities across the country in support of these critical national security missions.
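For readers who want to check the arithmetic, the shares above can be reproduced with a quick back-of-the-envelope calculation. The fusion center totals come from the report itself; the DHS and National Intelligence Program budget totals used below are approximate FY2014 figures assumed here purely for illustration, implied by the percentages in the text.

```python
# Back-of-the-envelope check of the cost-share figures.
# Fusion center totals are from the 2014 report; the DHS and NIP
# budget totals are approximate FY2014 figures assumed for illustration.

TOTAL_COST_M = 328.3      # total network operating cost, $M (from report)
STATE_LOCAL_M = 184.8     # state/local share, $M (from report)
FEDERAL_M = 143.7         # federal share, $M (from report)

DHS_BUDGET_M = 60_000.0   # ~$60B DHS budget (assumed approximation)
NIP_BUDGET_M = 52_700.0   # ~$52.7B NIP budget (assumed approximation)

state_local_pct = 100 * STATE_LOCAL_M / TOTAL_COST_M  # ≈ 56.3%
federal_pct = 100 * FEDERAL_M / TOTAL_COST_M          # ≈ 43.8%
pct_of_dhs = 100 * FEDERAL_M / DHS_BUDGET_M           # ≈ 0.24%
pct_of_nip = 100 * FEDERAL_M / NIP_BUDGET_M           # ≈ 0.27%

print(f"state/local share:   {state_local_pct:.1f}%")
print(f"federal share:       {federal_pct:.1f}%")
print(f"share of DHS budget: {pct_of_dhs:.2f}%")
print(f"share of NIP budget: {pct_of_nip:.2f}%")
```

The small rounding gap (the two shares sum to $328.5 million against a stated total of $328.3 million) is consistent with figures rounded to the nearest $0.1 million.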
2. Collaborative analytic products. The new report notes that the network of fusion centers issued 272 collaborative analytic reports in 2014, i.e., reports jointly developed and published by two or more fusion centers on a given topic. This number is a significant increase from the 115 collaborative reports issued in 2013. Without knowing more about the content of these reports, it's difficult to definitively assess the meaning of this increase, but this is the kind of trend that is likely to lead to analytic reporting that is increasingly valuable to federal agencies.
For example, if Fusion Center A works together with Fusion Centers B and C on a report on, say, human trafficking, and they identify common trends in traffickers' activities that might otherwise have appeared incidental to any single investigation, that information likely has analytic and investigative value nationwide, including to federal law enforcement agencies. Such collaboration is a positive indicator for fusion centers' continued maturation.
3. The limits of performance assessment. The report notes that the 78 fusion centers averaged a score of 96.3 out of 100 on the assessment of their capabilities, an average increase of four points from the 2013 assessment, and a significant increase from an average score of 76.8 in 2011. Twenty-nine of the 78 fusion centers achieved a perfect score of 100 in 2014.
This increase in fusion center capabilities reflects dedicated, serious effort over the past five years by the fusion centers to mature their capabilities, and thus to ensure that they are providing value to their state and local stakeholders and their federal partners. As a result, fusion centers today are increasingly efficient in their business processes and very judicious with respect to privacy and civil liberties concerns. It's not an accident that there have been no privacy or civil liberties scandals at fusion centers in the last three to four years, as there were on several occasions in the mid-to-late 2000s.
However, it is important to recognize the limits of this assessment process, as the Government Accountability Office noted in a report released in November 2014:
The overall assessment scores represent fusion centers’ progress in establishing designated baseline capabilities—such as implementing specified policies and procedures—but the scores may not reflect improvements in overall performance or homeland security contributions. That is, the assessment questions are intended to capture the extent to which each fusion center—regardless of size or staffing level—has met baseline capabilities to receive, analyze, gather, and disseminate information. However, the actual output of products and services can vary considerably by center based on risk environment, resource levels, or other factors. For example, 11 individual attributes constitute the “analyze” capability, and represent a broad range of activities, such as having developed an analytical production plan, the ability to access subject matter experts, and being able to contribute to local and national threat assessments. A center may report the successful completion of such activities and improve its overall assessment scores, but the scores do not reflect if the center effectively administered these activities or if they resulted in any considerable impact.
Given that the fusion center assessment process has reached its Lake Wobegon phase, with all of the centers now above average, it is imperative that the next assessment be revamped and focused not only on capability-building but also on performance and effectiveness, in a way that is cognizant and respectful of state and local governance of fusion centers.
There are many additional details in the full report that are worth examining – you can read the full report here.