Dennis D. McDonald (ddmcd@outlook.com) is an independent consultant based in Alexandria, Virginia. Application areas include project, program, and data management; market assessment, digital strategy, and program planning; change and content management; social media; and technology adoption. He also publishes on CTOvision.com and aNewDomain.

Who Is Better At Making Government Data Useful?

By Dennis D. McDonald

Craig Thomler’s “Make government data freely available” neatly lays out, from an Australian’s perspective, a discussion of how the public can benefit if government agencies make raw data available for access by individuals and organizations who then analyze or present that data in a useful way. These points are from Thomler’s conclusions:

  • Government has a crucial role to play in the collection of data across the country. This is a task well suited to the public sector as it is in the public interest that this be available.
  • However government doesn’t have the systems or culture to be best suited to interpret and combine this data or make it useful for individuals and organisations.
  • Government should provide interpretations - however it should not hold an artificial monopoly over this. 
  • By allow[ing] other organisations to access the raw information innovation in its presentation can occur more rapidly, providing deeper insights for the public good.

Thomler’s discussion is partly based on practicality. That is, government agencies aren’t always equipped to make the best or most innovative use of the information they collect. It therefore makes sense, Thomler says, to involve others in making data useful as long as a “level playing field” exists for people to make use of the data.

The issues that Thomler touches on did not arise overnight with the increasing availability of web-based technology and increasingly powerful “mashup” tools for making data widely and easily accessible. In the United States, at least, private sector republishers of government-sourced data have always existed, whose “added value” consisted of indexing, analysis, and publishing services that have arguably made government-sourced data more available and usable than it might otherwise have been if left entirely to government efforts.

Tensions have arisen, though, when the prices of commercially developed tools appeared to place government data beyond the reach of many taxpayers. Various rules and policies have emerged to help ensure — not always successfully — that government-sourced data remains available in some form to all.

A practical concern is how to create a “level playing field” such as the one described by Thomler. On the one hand, we want to provide an incentive for the private sector to innovate in how government-sourced data are made available. On the other, we don’t want to skew the system so much that such innovations effectively restrict access to those who can afford it.

In general, I support Thomler’s view. I’m all for making government-sourced data widely available (as long as privacy and security concerns are protected, of course). But one concern that should be addressed is how we make sure that government agencies do not artificially restrict their ability to carry out their legal mandates by stopping short of providing services that require complete access to the data they collect.

For example, consider an agency that is responsible for providing assistance to poor citizens. Assume that the process the agency follows in determining eligibility and in providing services requires access to income data and geographically based cost data related to the service. Assume further that the data needed by the agency has traditionally been available in published tables and individual computerized spreadsheets that are difficult to maintain and use.

What if the most efficient system for accessing the needed data is a database application developed by a private sector firm that combines government-sourced data with its own software to perform retrieval and calculation functions? Should the agency continue to use its hard-to-use “manual” resources? Or should it avail itself of the easier-to-use — and more accurate — commercially available source of the same data, offered for a price?

It’s easy to see how a situation like this can devolve into an argument about profiteering, outsourcing, and privatization. Such arguments have arisen in other situations where public sector data meets private sector publishing and distribution, as was the case recently when public availability of government-funded research via professional journals became a topic of public discussion. (See my blog post “How Involved Should the U.S. Government Be in the Scholarly Journal Publishing Business?”)

One common refrain about data access is, “The public paid for it (the data) so the public should be able to have it.” This is a pretty strong argument. At the same time, a practical result of such an argument should not be a government agency’s refusal or inability to perform a mandated public service. Nor should a government agency be put into a situation where it lacks the knowledge and expertise necessary to manage the creation and use of the data it needs to carry out its mission.

Copyright (c) 2008 by Dennis D. McDonald 
