Test it and they might come: Improving the uptake of digital tools in transparency and accountability initiatives

Christopher Wilson and Indra de Lanerolle
2016

Summary

Information and communications technologies (ICTs) and data play an increasingly visible role in transparency and accountability initiatives (TAIs). This article investigates TAI tool selection processes in South Africa and Kenya. It argues that in many cases, tools are chosen with only limited testing of their appropriateness for the intended users in the intended contexts, despite widespread recognition among practitioners, funders and researchers that this carries significant efficiency and sustainability risks. The article suggests a strategy for increasing investment and effort in tool selection, in order to conserve overall project resources and minimise the risk of failure.

This article draws on a landscaping survey and 38 in-depth, semi-structured interviews with Technology for Transparency and Accountability Initiatives (T4TAIs) in Kenya and South Africa that had recently selected, or were currently selecting, a tool for T&A programming. Findings include:

  • T4TAIs use a wide range of tools, including: social media platforms; off-the-shelf (OTS) software platforms such as Ushahidi or FrontlineSMS; paid subscriptions to cloud services for managing data; hardware such as tablets for conducting surveys; and mobile apps and web interfaces which T4TAIs build or commission from the ground up.
  • Fewer than 25% of initiatives described their chosen tool as a success. Common problems included: tools not working as expected; low uptake by users; longer development or modification times than anticipated; and difficulty finding or managing technical partners.
  • Initiatives had knowledge gaps in key areas at the start. They had little information on what they needed their tool to do, or on which tools could meet those needs. Very few had detailed knowledge of how tools worked before choosing them. Although some had conducted research, it did not focus on tool availability or user needs.
  • Only 9 out of the 38 initiatives interviewed identified successful tool selection outcomes.

While the sample is too small to be generalisable, the prominence of failed tool selection reinforces anecdotal evidence and suggestions in the literature that many organisations undertaking T4TAIs lack the capacities and resources to make strong tool selections. To counter this, the paper recommends:

  • Gathering more information about potential and actual users in both the design and implementation phases.
  • Trialling tools, which is more strongly correlated with success than research alone. Testing helps practitioners understand the limitations of tools in context, identify obstacles to user uptake, and compare tools. Exploring existing technologies before building or commissioning new development could lead to fewer tool selection failures and better use of limited resources.
  • Expecting to fail, and learning from it. Some T4TAIs learned from initial failure and improved their tools and projects in subsequent iterations, but allowing for such iteration requires planning and budgeting.

Source

Wilson, C., & de Lanerolle, I. (2016). Test it and they might come: Improving the uptake of digital tools in transparency and accountability initiatives. Opening Governance, 47(1), 113–126.