Back in December 2008 I wrote the following in Eight Reality Checkpoints for Using “New Social Media” In Government:
Personally, I’m quite optimistic about the use of new media to open up and make government more open, transparent, and responsive. The “genie is out of the bottle,” in my opinion, and there’s no way we’ll go back to the old ways. But the road will have some bumps:
- We need to accommodate citizens who are not digitally literate.
- We must recognize that resistance is real and potentially legitimate.
- We need honesty about the costs and complexity of engaging large numbers of people in meaningful dialog on potentially complex and time-consuming topics.
So how are we doing in terms of increasing public involvement in creating, assessing, and commenting on public policies?
In many ways things have improved. It seems that almost every day another federal agency or department starts up a blog to communicate with its constituents. (I keep up with these via a subscription to USA.gov.)
While some agencies are still learning that there’s more to web-based engagement than just opening up another one-way information broadcasting channel, there have been a number of sophisticated attempts to engage the public interactively on a variety of complex and high-visibility topics; I referred to some of these in Can Online Public Dialogs Succeed With Anonymous Participants?
Still, I have some concerns about the manner in which these online dialogs are being conducted. As laudable as these examples of public engagement have been, we still have a lot to learn about engaging large numbers of people online on complex topics where we expect the “conversations” to involve significantly more than selecting a response to a question from a numbered list. The issues have to do with something I call “context,” which needs to be addressed from two perspectives:
- The context of the participant’s interaction with the online dialog
- The context of how the dialog’s management represents the information generated during the course of the dialog.
Participants in an online dialog that has a specific topic or goal come to that dialog with a wide range of knowledge and expectations.
Some will be experts in the topic being discussed, some will have axes to grind, and others will be attracted by an opportunity to make their voice heard about a topic they may know little about. They may arrive at different times and will each have a different perspective and set of experiences as they participate in the various discussion threads that can develop.
Some of the discussions and dialog will be very directed, for example, in the case where the dialog management presents a document or point of view for discussion and comment. Other discussions may arise spontaneously and may evolve in directions totally unexpected by management.
In either case, one of the most basic reasons for this variability is that how the participant interacts with the dialog cannot be totally controlled. In the post 14 Ways to Make Online Citizen Participation Work: “Keep Folks in the Loop!”, Intellitics describes the reality of the dialog participant this way:
- Overall, the average participant will have a fairly limited amount of time to spend (participation is not their full-time job, far from it)
- She may only check for updates occasionally (not, say, several times a day)
- She prefers to “get in and get out” to make her contributions (not hang out and linger for hours on end)
The bottom line: Participant Context cannot be controlled. The best we can hope for is that some of this variety (or “chaos,” depending on your point of view) of online public dialog can be managed and represented by the dialog’s management. This brings us to Management Context.
Faced with a wide range of conversations and participants whose experience and knowledge level may differ substantially, how can management best represent the dialog and draw from it guidance or conclusions relative to its original goals and objectives?
Possibly the most fundamental way to understand and represent the dialog will be to understand who participated. Do participants have experience in the topics being discussed? Are they affiliated with organizations that are engaged with the activities being addressed by the online dialog? Why do they care about the topics being discussed?
Collecting this type of information is not uncommon when a participant must register to participate in a dialog. This type of registration can also run headlong into the issue of whether the dialog can be engaged in by participants desiring to remain anonymous, as discussed here.
Assuming we can address the issue of who participates in the dialog, we must also discuss how the meaning and content of the dialog is summarized and represented.
This presents a technical challenge, given that such dialogs frequently consist of free text typed by many different participants, whose language, terminology, grammar, and slang usage can vary widely.
Time consuming (and expensive) human review and synthesis of large volumes of text is one approach. Another approach is to seek the aid of semantic technologies to discover relationships within the large volumes of dialog that can be reviewed and further interpreted. Semantic software approaches are being used, for example, in the analysis of procurement and acquisition related documents by the U.S. Air Force.
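As an illustration of the kind of relationship discovery such semantic tools automate, here is a minimal sketch using hypothetical sample comments and a toy technique (counting which terms co-occur within individual comments); real semantic software is far more sophisticated, but the goal is the same: surface candidate relationships for human review.

```python
from collections import Counter
from itertools import combinations

# Hypothetical free-text dialog comments (illustrative only).
comments = [
    "transparency in federal budget data helps citizens",
    "citizens want budget transparency and open data",
    "open data portals should publish budget documents",
]

# A tiny stopword list; real tools use much richer language models.
STOPWORDS = {"in", "and", "the", "want", "helps", "should"}

def term_pairs(text):
    """Yield unordered pairs of distinct content words in one comment."""
    words = sorted({w for w in text.lower().split() if w not in STOPWORDS})
    return combinations(words, 2)

# Count how often each pair of terms appears together in a comment.
cooccurrence = Counter()
for comment in comments:
    cooccurrence.update(term_pairs(comment))

# The most frequent pairs hint at relationships worth a closer look.
for (a, b), n in cooccurrence.most_common(3):
    print(f"{a} <-> {b}: appears together in {n} comments")
```

Even this crude tally surfaces that “budget” and “data” recur together across comments, the sort of signal a human reviewer could then interpret against the dialog’s goals.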
An intermediate approach to managing and representing dialog comments is to incorporate a mix of structured and unstructured techniques throughout the online dialog. This approach has the advantage of providing selected “anchors” of analyzable data while preserving the context of overall free-text comments. The disadvantage, returning to the comments by Intellitics quoted above, is that dialog management may not have a consistent level of control over how and when the dialog participant chooses to participate in either the structured or unstructured portions of the dialog.
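To make the mixed approach concrete, here is a sketch of one possible record layout (all field names, topics, and data are hypothetical) in which each comment carries structured “anchors” — a topic tag and a rating — that can be tallied mechanically, while the participant’s free text stays attached for later human or semantic review:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record mixing structured anchors with unstructured text.
@dataclass
class DialogComment:
    participant_id: str  # from registration, if the dialog requires it
    topic: str           # structured anchor: which agenda item this addresses
    rating: int          # structured anchor: 1 (oppose) .. 5 (support)
    free_text: str       # unstructured: the participant's own words

comments = [
    DialogComment("p1", "open-data", 5, "Publish budget data in machine-readable form."),
    DialogComment("p2", "open-data", 4, "Mostly agree, but privacy must be protected."),
    DialogComment("p3", "outreach", 2, "Online-only dialogs exclude many citizens."),
]

# The structured fields can be summarized mechanically...
rating_totals = Counter()
rating_counts = Counter()
for c in comments:
    rating_totals[c.topic] += c.rating
    rating_counts[c.topic] += 1

for topic in rating_counts:
    avg = rating_totals[topic] / rating_counts[topic]
    print(f"{topic}: average rating {avg:.1f} from {rating_counts[topic]} comments")

# ...while each free_text field remains linked to its anchors for review.
```

The design trade-off is exactly the one noted above: the anchors are only as complete as participants’ willingness to fill them in on a given visit.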
The above is not at all intended to reflect negatively on attempts made so far to engage online with members of the public on important topics such as national security, Defense Department use of social media, and public response to H1N1. I believe that it is a Good Thing that attempts are being made to engage with the public on such topics in ways that can represent the complexities of the real problems.
At the same time, a lot will depend on what is being attempted with the use of online public dialogs.
Are we trying to generate ideas through a government sanctioned form of “crowdsourcing”? Are we trying to measure how people respond to different public policy options? Are we experimenting with different ways to involve the public in legally mandated public policy reviews?
As long as we make sure (a) that there is a direct connection between such dialog and legitimate public policy issues, and (b) that people who are not able to participate in online public dialogs also have a way of making their voices heard, I look forward to these efforts continuing and expanding.
Copyright (c) 2009 by Dennis D. McDonald