Dennis D. McDonald (ddmcd@ddmcd.com) consults from Alexandria Virginia. His services include writing & research, proposal development, and project management.

How Our Increasing Digital Connectedness Improves Government Program Evaluation

By Dennis D. McDonald

Introduction

Will the way the Federal government measures the performance of government programs have to change as government employees increasingly use smart phones, tablets, social networks, and file sharing services?

I wondered about this while attending the monthly Government Performance Coalition meeting on April 23rd at George Washington University in Washington DC. There, Dustin Brown and Kathy Stack from the White House Office of Management and Budget (OMB) discussed the Administration’s 2014 Budget. Brown & Stack focused on the CREATING A 21ST CENTURY GOVERNMENT chapter with special attention to these sections: 

  • Using evidence to get better research, pages 52 to 54
  • Strengthening evaluation and sharing what works, pages 54 to 55
  • Managing for results, pages 55 to 56

They described behind-the-scenes efforts, as orchestrated by OMB, to increase the use of performance measures and “evidence-based” decision-making in how government programs are planned, managed, and evaluated. In the process they made it clear that improving how performance data and other measurement approaches are used in government bureaucracies isn’t something that can happen overnight, nor is it something that is very visible outside the government.

Using data to support management and planning isn’t new

Using data to support the planning, management, and evaluation of government programs certainly isn’t new. Nevertheless, the Obama administration has made transparency and management efficiency major themes in changing how the Federal government is run. OMB’s efforts to incorporate both performance data and performance against defined goals and objectives, as described by Brown & Stack and incorporated into the budget document linked above, are a step forward, even if they don’t garner the publicity of other, more public-facing programs.

Mobile technologies and program planning

Still, it’s not yet clear to me, as an independent consultant and government “outsider,” whether efforts to improve performance measurement and accountability are taking full advantage of the increasing access to and use of mobile technologies such as smart phones, tablet computers, and the networking services they connect to.

Innovation versus tradition

True, significant progress has already been made in making government agency web sites more mobile friendly. Individual agency web sites as well as Data.gov and Performance.gov are making data about government operations more accessible. Yet I was struck by Brown & Stack’s description of the fundamentally traditional communication and collaboration processes by which performance measurement and program accountability are being pursued via working groups operating through internal government channels.

The traditional nature of the processes is not necessarily a bad thing. Government agencies are, after all, organizations with traditional values and structures. As with any bureaucracy, to some extent you have to “play by their rules” even when you are trying to change how they behave.

How are we using the tools?

I wonder whether, in our drive to improve how government operates through better performance and evaluation measures, we are really taking advantage of the tools now available to both government employees and the public via increasingly powerful social networks, smart phones, and tablets.

As I’ve suggested elsewhere, it’s probably a mistake to view smart phones and tablet computers simply as little portable computers with effective email, messaging, network, and Internet access. They have the potential to be much more than that. The owners of personal smart phones and tablets, sometimes without direct authorization from their employers, are already using them to share work-related information and to collaborate. How many times, for example, do government employees “vote with their feet” and bypass “official” internal intranets and file sharing networks by using public services such as Google Drive, Dropbox, or even Skype to share files? How often are such services being used to bypass traditional email and file attachment clutter?

Quite often, I’ll guess. I’d therefore be very surprised if the planning efforts described by Brown & Stack haven’t already benefited substantially from the use of such devices and the services they can connect government employees to.

How does technology-enabled collaboration impact how people work?

It’s reasonable to ask whether such technology-enabled collaboration and sharing actually speeds up or enhances how collaborative work is performed. As a proponent of technology-enabled collaboration, I’ve come to realize that the best collaborative networking tool is not necessarily the most secure one, nor the most elegantly designed one, nor the one most cleverly or seamlessly integrated with a proprietary content management system.

No, the best collaborative networking tool is the one that people actually use to get their work done.

I’m guessing that a lot of government employees are using their own devices and networking resources to do their jobs. So, does this use of personal devices make it any easier to plan programs that incorporate performance and evaluation measures, such as the ones we talked about at the April 23 meeting?

That’s hard to tell. The measurement and performance efforts described by Brown & Stack could probably be viewed as organizational innovations that, by definition, require communication and collaboration across existing organizational boundaries during the planning process. The beauty of personal devices such as smart phones and tablet computers, and the social networks they connect to, is that they are personal devices. People are comfortable using them. As someone who has been involved in the development and adoption of various networked corporate information services, I believe that anything you can do to improve not only ease of use but also perceived value to the task at hand is a good thing.

This is one of the major reasons I think it’s important to figure out how to integrate work systems with personal systems, especially when there is a lot of give-and-take and back-and-forth — as is the case when introducing and formalizing evaluation and measurement components into government programs. 

The work of an agency such as OMB by definition has to traverse multiple organizational and network boundaries in order to corral participants into following a planning and reporting structure such as the one described in OMB’s CIRCULAR NO. A–11, PART 6: PREPARATION AND SUBMISSION OF STRATEGIC PLANS, ANNUAL PERFORMANCE PLANS, AND ANNUAL PROGRAM PERFORMANCE REPORTS. Why not use the tools people are comfortable using, assuming we can solve the problems of security and system compatibility?

Mobile technologies and operational performance measurement

So far I’ve been writing about using mobile technologies in the context of planning and developing programs that incorporate measurement and evaluation. What about the services these programs deliver? Aren’t communications with service delivery partners and with the public just as open to advantageous use of mobile technologies?

Mobile devices and measurement and evaluation of service delivery

Keeping in mind that we are focusing on measurement and evaluation, what impacts will mobile technologies have on service delivery?

One consideration is that, if mobile devices are themselves used in the delivery of services, they provide an immediate opportunity to obtain evaluative feedback on, at minimum, whether the service is actually delivered, by monitoring transactions such as downloads, queries, or other interactivity. This could include direct measurement of the delivery of information-based products, answers to queries, and even, in some cases, the usage of financial assets or payments delivered through increasingly prevalent mobile “wallets” or other device-based payment mechanisms.

We already have “pop up” surveys tied to the use of specific web page assets. Now we can have immediate feedback when money is transferred via an authorized transaction conducted on a recipient’s mobile device. Why wait three months to evaluate the service if we can gather statistics immediately?
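To make the idea concrete, here is a minimal sketch, in Python, of what immediate point-of-delivery feedback might look like. The event fields, the program name, and the simple in-memory log are all illustrative assumptions for the sake of the example, not a description of any actual government system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DeliveryEvent:
    """One hypothetical service-delivery transaction recorded on a mobile device."""
    program: str      # illustrative program identifier
    kind: str         # e.g. "download", "query", "payment"
    succeeded: bool
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


class DeliveryLog:
    """Collects delivery events and reports success rates immediately,
    rather than waiting for a periodic evaluation cycle."""

    def __init__(self):
        self.events = []

    def record(self, event: DeliveryEvent):
        self.events.append(event)

    def success_rate(self, program: str) -> float:
        """Share of recorded transactions for a program that succeeded."""
        relevant = [e for e in self.events if e.program == program]
        if not relevant:
            return 0.0
        return sum(e.succeeded for e in relevant) / len(relevant)


log = DeliveryLog()
log.record(DeliveryEvent("nutrition-assist", "payment", True))
log.record(DeliveryEvent("nutrition-assist", "payment", False))
print(log.success_rate("nutrition-assist"))  # 0.5
```

The point of the sketch is simply that the same transaction that delivers the service can also generate the evaluative data, with no separate collection step.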

Impact measurement

Obviously it’s more complex than that, but keeping in mind that we are concerned here with evaluation and performance measurement, the simple fact is that the closer we can get to the point of service delivery and use, the better positioned we are to gather accurate data about the transaction and its impact from the actual recipient of the service.

Services such as Google Now already tie together delivery of information based on both past behavior and current device location; can’t we also begin to implement measurement and tracking services to generate better data on whether or not government services are actually getting to those they are intended to help? Clearly there are privacy, jurisdictional, and system integration issues that need to be addressed. Also, gathering data about a service instead of relying on a sample survey for evaluative purposes has potential storage, bandwidth, and other data management issues associated with it. But as personal devices become more powerful, and as they become more and more integrated both with communicating about and with consuming government services, the opportunity to combine delivery with government program effectiveness measurement is obvious.

From an accuracy perspective this is key: the closer you are to the actual point of delivery, the more valid the data you can gather on its use and its impact. Commercial developers of smartphone and tablet apps that provide in-app purchasing and other interactivity already use such data as a normal part of business; extending this functionality to the measurement and evaluation of the delivery of government services is not much of a stretch. Older methods, where data collection about usage is separated from service consumption, could become obsolete if they have not already.

Internet of things

So far this discussion has focused on data about programs that deliver services to people, i.e., whether smart phone and tablet users can help determine if target recipients receive and use government services, and to what effect. Also to be considered is the viability of incorporating the “internet of things” into the mix of data sources associated with the delivery and consumption of government services.

As more devices come “online” and become addressable as potentially accessible sources of data, the possibility arises that events and conditions associated with the delivery of government services to individuals can be monitored in order to provide contextual information against which government program performance can be measured. Examples include traffic data, temperature and weather, energy consumption, environmental conditions, air quality, and other remotely sensed conditions that can be associated with how individual programs are used at the local level. Even local food prices can be tracked as published by local markets on public web sites, as was done in a recent World Bank program examining the feasibility of using “big data” analytics to anticipate international market price trends. As such varied data sources become more available in real time, the implications for making corrections to government program operations rapidly and decisively also increase; why wait for a “quarterly report” to make adjustments?
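As a purely illustrative sketch (the locality names, the price index, and the thresholds are invented for this example), contextual data such as a local food price index could be joined with program usage counts to flag places worth a rapid second look:

```python
def flag_localities(usage_by_locality, price_index_by_locality,
                    min_usage=100, price_spike=1.2):
    """Return localities where program usage is low while a contextual
    signal (here, a local price index relative to a 1.0 baseline) has
    spiked, suggesting the program may not be reaching those who need it."""
    flagged = []
    for locality, usage in usage_by_locality.items():
        # Missing contextual data defaults to the baseline (no spike).
        price = price_index_by_locality.get(locality, 1.0)
        if usage < min_usage and price >= price_spike:
            flagged.append(locality)
    return sorted(flagged)


usage = {"ward-1": 250, "ward-2": 40, "ward-3": 75}
prices = {"ward-1": 1.05, "ward-2": 1.30, "ward-3": 1.10}
print(flag_localities(usage, prices))  # ['ward-2']
```

The thresholds here stand in for whatever evaluative criteria a program actually defines; the design point is that the join between delivery data and contextual data can run continuously rather than quarterly.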

Conclusion

There will always be a need to conduct formal evaluations of how well government programs perform. Such evaluations must take into account the complexity of programs and the need to distinguish between short-term and long-term impacts and the intervening conditions that also affect program effectiveness.

Evaluation methodologies will need to take all this into consideration, which requires careful planning and appropriate resources. The improving connectivity of government employees and members of the public also offers the potential for making such evaluations more timely and direct. With careful planning, this improved connectivity can also improve how programs are managed.

Copyright (c) 2013 by Dennis D. McDonald
