Categories
GovLab Blog

New Project: Data Collaboratives to Improve Children’s Lives

As public problems grow in complexity and increasingly require new insights, decision-makers both inside and outside government have begun exploring ways to be more data-driven and collaborative. Several of society’s greatest challenges—from addressing climate change to achieving the Sustainable Development Goals—require greater access to data, the ability to analyze particular kinds of datasets, and collaboration between public- and private-sector entities.
However, much of the most useful, timely and comprehensive data resides with the private sector—in the form of, for instance, Web clicks, online purchases, market research, sensor data, and data generated from the use of mobile phones. With consumers connected to ever more platforms, and with the increasing prevalence of sensing technologies (the Internet of Things), data on how people and societies behave is becoming even more privately owned. Today, companies are exploring ways to make such data available for the public good as a form of corporate social responsibility, a practice often called data philanthropy.
Today, the GovLab and UNICEF, in collaboration with the UN Global Pulse, announce a new partnership to leverage the potential of private sector data to improve children’s lives through the study and creation of “data collaboratives.” Data collaboratives are a new form of public-private partnership in which participants from different sectors — including private companies, research institutions, and government agencies — exchange data to help solve public problems. To accelerate solutions to the problems UNICEF works on and to bolster UNICEF’s efforts to become more open and data-driven, the GovLab will help UNICEF and the UN Global Pulse identify and craft data philanthropy collaborations with private sector companies in support of UNICEF’s mission.
Despite increased awareness of, and experimentation with, data collaboratives, there is little consensus about best practices and only a provisional understanding of how, precisely, data can be shared and used to enhance the public good. In particular, companies and international organizations may still have limited knowledge about how to maximize the benefits of data sharing while minimizing its associated risks, such as potential threats to privacy and competition.
The 18-month initiative being launched today comprises a number of activities aimed at increasing insight into current data collaborative practice: what works and what doesn’t, how to share data in a trusted manner using data governance frameworks, and what steps and conditions must be in place to ensure the exchange of value. In the coming weeks and months, the GovLab, UNICEF and the UN Global Pulse will collaboratively develop and share new resources and tools aimed at: mapping the current data collaboratives ecosystem, articulating policies and frameworks for responsibly sharing data for the public good, and helping practitioners operationalize this next generation of public-private partnership to solve big public problems, including but not limited to improving the lives of children around the world.

Categories
GovLab Academy GovLab Academy Guest Speakers Series GovLab Blog

Lessons Learned on Technology, Data Sharing & Data Infrastructure for Tech-Enabled Disaster Management by Julian Carver

Guest Post by Julian Carver
Julian Carver was the acting Chief Information Officer at the Canterbury Earthquake Recovery Authority (CERA) until June 2012. From March 2013 – April 2015 he led the Canterbury Spatial Data Infrastructure (SDI) Programme at Land Information New Zealand (LINZ), a $5m investment in a set of tech projects to support the $40 billion Christchurch rebuild.
I’m coaching a GovLab Academy course on Innovations in Tech-Enabled Disaster Management. Why do I think this topic is important? Technology and data play a growing and critical role in all phases of disaster response and recovery, from the immediate response through the short and long term. What is required to support this and make it work? Here’s what I learned following the devastating earthquakes of 2011 in Christchurch, New Zealand:

  1. Data infrastructure (the technology, data and policy) that enables both open and trusted data sharing between organisations is crucial in a disaster recovery; it supports almost every operational and policy decision.
  2. Once the disaster has hit, technology staff in public and private sector organisations have very high levels of permission to innovate, share data, and use technologies in new ways, but they are also incredibly busy servicing immediate needs. That makes building out core data infrastructure “on the fly” quite difficult.
  3. Data infrastructure is therefore a key aspect of preparedness for disasters and an essential element in risk reduction.
  4. Knowing how to rapidly innovate on top of existing data infrastructure, using crowdsourcing, open and trusted data sharing, and agile deployment methods really helps when a disaster strikes.
  5. These lessons apply in non-disaster situations – building out core data sharing infrastructure (the technology and the policies) and having people inside and outside government able to experiment and innovate on top of it is a big part of open government movements worldwide. There are huge benefits in ‘everyday’ policy and management contexts.

In this post, I’ll share some stories of the huge challenges we faced, and the way we used technology, data sharing and data infrastructures to overcome them to support the recovery of my city.
In a moment our world changed
On February 22, 2011, at 12:51pm, I was working from home, lying on my bed, reading email on my iPhone. Thirty seconds later, my city, my life, and my future had changed irrevocably. Anything not bolted down was on the floor, and half of it was smashed: computer monitors, TVs, bookshelves, food from the fridge. The power went off, then stayed off for five days.
The earthquake’s epicenter was very close to the city, very shallow, and the quake had very high horizontal AND vertical acceleration. 185 people were killed due to building collapse and rock fall.
Mobile calls worked for a few minutes, then failed. Texts became patchy after an hour. The only thing that was semi-reliable was Twitter over 3G. It took until 9pm that night for me to learn that my 10-year-old son was OK; he was at school in the most devastated part of the city, and his teachers couldn’t get hold of us.
Technology in the immediate response
Within three hours of the quake a group of volunteers had set up a crowdsourced open source disaster response platform to provide essential maps on sources of water, fuel and food, road closures, and offers and requests for help. Within a day, an information site was set up on WordPress.com by the official agencies.
As described in much more detail in these articles, the city became reliant on cloud, social media and mobile based technologies, leveraging existing non-government data infrastructures. These were largely new and unfamiliar approaches for the city government and emergency response agencies back in 2011.
Technology in the Recovery Phase
The Canterbury Earthquake Recovery Authority (CERA) was established just weeks after the earthquake. It was a central government agency tasked with setting policy, strategy and plans for the recovery; coordinating many central and local government agencies, as well as NGO and private sector efforts, in providing community wellbeing, infrastructure, and economic recovery services; and managing the central city demolitions. I joined in the first week and, as Acting Chief Information Officer, became responsible for the agency’s information technology and for acquiring, managing and sharing the data needed to inform the myriad tasks required of the agency across the short-, medium- and long-term phases of the recovery.
Here’s an example. In June, less than three months after being established, CERA completed the initial process of land zoning based on detailed geotechnical investigations. This determined which houses stood on land too badly damaged to make repair cost-effective. The government offered to buy around 8,000 of these properties from citizens at a fair, recent valuation.
Like everything in the recovery, time frames were tight. CERA needed a convenient and trusted way to disseminate information and let people know exactly which zone their property was in. That required an interactive website, capable of taking a massive initial load of users, which could also be implemented in a very short time frame.
In stepped Trade Me, New Zealand’s equivalent of eBay.
Working with Tonkin & Taylor, the engineering firm that had done the geotechnical investigations and created the zoning maps, Trade Me built Landcheck in four days. The site was hosted from their server farms in Auckland and Wellington. In the first 24 hours of the site being live, there were 5 million page views, and 3.3 million individual property searches. Trade Me did this for free, for the people of greater Christchurch.
By September 2011, CERA had taken over the hosting of Landcheck following three further land zoning decision announcements. The functionality was migrated to the CERA website. Interactive mapping was added using open data sourced from the country’s official mapping agency, Land Information New Zealand, from a key piece of national data infrastructure, the LINZ Data Service.
In October 2011, the Department of Building and Housing, a central government agency, developed a new property classification – the ‘Technical Categories’, describing expected land performance and damage in further earthquakes, and the house foundation systems likely to be required to withstand future quakes.
The announcement generated huge interest, similar to the initial Landcheck one. This time, an Amazon cloud-hosted solution, drawing mapping data directly from government data infrastructure, was deployed to take the load.
Technology in the long-term recovery: Data sharing for better decisions
In a post-disaster setting, creative solutions using existing data sources in new ways are needed to inform policy-making and to determine how resources and city services are deployed.
For example, tracking population movement is important, as knowing which houses are occupied supports prioritisation decisions on repair to power and water infrastructure, and the deployment of social support services.
In a non-disaster situation, a census or survey would be used to determine housing occupancy, but the speed at which displaced populations move makes this nearly impossible. In the case of Christchurch, houses were being red-zoned or condemned; people left the city because of lost jobs or psychological trauma from the ongoing aftershocks; and many new migrants arrived to support the rebuild.
As my colleagues Stephen and Martin describe in their white paper:

“Both NZ Post and local power companies had property (address) level data. Once combined, a property with both a postal redirection and zero electricity meter reading indicated a ‘potentially vacant’ property. While not definitive at an individual property level, when combined with Statistics NZ population data these simple maps and underlying data presented a clear, suburb-level view of population movement that was invaluable to CERA and agencies involved in the recovery.”

Tracking population movement quickly was challenging. But it was possible, because of the existing underlying national data infrastructure, data standards, and legal frameworks. Without the open spatial data from the LINZ Data Service and Statistics NZ’s population geographies, it would have been difficult or impossible. Strong privacy laws and data aggregation methods meant this commercial sector data (power metering) could be combined with government data while protecting privacy and commercial confidentiality. Having a technology team that knew how to work with agility, while safely navigating the data licensing and data protection challenges as well as the technical hurdles, was another essential element.
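To make the mechanics of that heuristic concrete, here is a minimal sketch in Python (using pandas) of the address-level join described above. The dataset shapes and column names are illustrative assumptions, not the actual NZ Post or power company data models.

```python
# A minimal sketch of the vacancy heuristic: join two address-level datasets,
# flag properties with BOTH a postal redirection and a zero meter reading,
# then aggregate to suburb level before sharing. Illustrative data only.
import pandas as pd

# One row per property: has the occupant redirected their mail?
redirections = pd.DataFrame({
    "address_id": [101, 102, 103, 104],
    "mail_redirected": [True, True, False, False],
})

# One row per property: electricity consumption over the last billing period.
meter_readings = pd.DataFrame({
    "address_id": [101, 102, 103, 104],
    "kwh_last_period": [0.0, 12.5, 0.0, 840.2],
    "suburb": ["Bexley", "Bexley", "Avonside", "Avonside"],
})

# Join on the shared address identifier, then apply the heuristic.
joined = redirections.merge(meter_readings, on="address_id")
joined["potentially_vacant"] = (
    joined["mail_redirected"] & (joined["kwh_last_period"] == 0)
)

# Aggregate before sharing, protecting privacy and commercial confidentiality.
summary = joined.groupby("suburb")["potentially_vacant"].mean()
print(summary)  # share of potentially vacant properties per suburb
```

The key design choice is that only the suburb-level aggregate leaves the analysis environment; the property-level flags are never published.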
Data Infrastructure in the Rebuild
In late 2012, I left the CERA Information Services team to take up a new challenge. I moved to Land Information New Zealand to lead the Canterbury Spatial Data Infrastructure programme. Our job was to make it easier for those undertaking the $40 billion rebuild of the city to share data to coordinate their efforts.
The government agencies and private sector property developers had to demolish 1,200 commercial buildings, repair all below-ground infrastructure (wastewater, stormwater, water supply, power, and broadband), and begin the process of reconstructing new buildings, all within a small geographic area, in which people were now living and working, and all at the same time.
The Forward Works Viewer
The only way of viably doing that was to let everyone see each other’s forward construction intentions well enough in advance to avoid expensive clashes and delays. So we worked together to build the Forward Works Viewer, a tool which gave those agencies and other public and private sector users a shared online view of horizontal infrastructure repair, planned buildings, and other construction. It let them manage and view projects and their impacts spatially and over time, and detect potential clashes and opportunities for collaboration (e.g. digging up the road once rather than several times). The Forward Works Viewer drew heavily on data infrastructure, including open property and road data, open geospatial standards, and open data licensing such as NZGOAL, our country’s implementation of Creative Commons.
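At its core, that kind of clash detection is a spatiotemporal overlap test: two planned works clash when their footprints intersect and their time windows overlap. Here is a stripped-down Python sketch of the idea; real systems like the Forward Works Viewer use proper geometries, spatial indexes and richer schedules, so the rectangles, dates and project names below are stand-in assumptions.

```python
# Sketch of spatiotemporal clash detection: compare every pair of planned
# works and report those occupying the same space at the same time.
from dataclasses import dataclass
from datetime import date
from itertools import combinations

@dataclass
class Works:
    name: str
    bbox: tuple   # (min_x, min_y, max_x, max_y) footprint of the works
    start: date
    end: date

def boxes_intersect(a, b):
    # Axis-aligned rectangles intersect unless one is entirely to the
    # side of (or above/below) the other.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def windows_overlap(p, q):
    return p.start <= q.end and q.start <= p.end

def find_clashes(projects):
    """Return pairs of works that overlap in both space and time."""
    return [(p.name, q.name) for p, q in combinations(projects, 2)
            if boxes_intersect(p.bbox, q.bbox) and windows_overlap(p, q)]

projects = [
    Works("wastewater repair", (0, 0, 10, 10), date(2013, 3, 1), date(2013, 6, 30)),
    Works("road resurfacing",  (5, 5, 15, 15), date(2013, 5, 1), date(2013, 8, 31)),
    Works("fibre trenching",   (20, 20, 30, 30), date(2013, 5, 1), date(2013, 8, 31)),
]
print(find_clashes(projects))  # [('wastewater repair', 'road resurfacing')]
```

A detected “clash” is also an opportunity: if the wastewater repair and the road resurfacing overlap, the road need only be dug up once.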
Even so, we had to make improvements to the underlying data. The road network data was simply road centre-lines; it didn’t tell you anything about lanes, directions, or turn restrictions. We wanted to build those into the Forward Works Viewer to assess the impact on the road network of road closures due to construction activity. Is it this lane closed, or this lane? Is it total closure or reduced capacity? This was needed for traffic modelling, to ensure as many works as possible could happen, as fast as possible, without disrupting peak traffic flows. But we didn’t have an open, freely reusable, routable roading network.
So we contracted four postgraduate GIS students for two weeks to bring OpenStreetMap for the relevant area fully up to date and make it fully routable for Christchurch. This low-cost open data initiative, and the availability of open source tools built for OpenStreetMap, let us quickly integrate the data into the Forward Works Viewer and provide a more precise construction impact selector.
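Today, open-source tools make it easy to see what a routable network buys you. The sketch below uses the osmnx and networkx Python libraries (not the tooling we had in 2013) to pull the drivable OpenStreetMap network for Christchurch, simulate a street closure, and measure the detour it forces; the street name and coordinates are illustrative.

```python
# Routable-network analysis with osmnx/networkx. Assumes a network connection
# (the graph is downloaded from OpenStreetMap) and that the named street
# exists in the extract; both are illustrative choices.
import networkx as nx
import osmnx as ox

# Drivable street graph for the city, built from OpenStreetMap data.
G = ox.graph_from_place("Christchurch, New Zealand", network_type="drive")

# Snap an illustrative origin/destination pair to the nearest graph nodes
# (X = longitude, Y = latitude; roughly Cathedral Square -> New Brighton).
orig = ox.distance.nearest_nodes(G, X=172.6365, Y=-43.5309)
dest = ox.distance.nearest_nodes(G, X=172.7292, Y=-43.5068)

# Baseline shortest route by road length, in metres.
baseline = nx.shortest_path_length(G, orig, dest, weight="length")

# Simulate a construction closure by removing one street's edges, then
# measure the detour it forces on this origin-destination pair.
closed = G.copy()
closed.remove_edges_from(
    [(u, v, k) for u, v, k, d in closed.edges(keys=True, data=True)
     if d.get("name") == "Gloucester Street"]  # hypothetical closure
)
detour = nx.shortest_path_length(closed, orig, dest, weight="length")
print(f"Closure adds roughly {detour - baseline:.0f} m to this trip")
```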
Building the Next Generation of Data Innovators
Hackfests and crowdsourcing had proven to be successful, low-cost tools for solving public problems, so we used them again.
Residential building footprint databases didn’t exist for the satellite municipalities of Selwyn and Waimakariri, and the Christchurch dataset was incomplete. Jeremy Severinson, a LINZ employee who had been conducting postgraduate research assessing the trustworthiness of crowdsourced data, proposed creating new building footprint data through a competition for school students.
Environment Canterbury, the regional council, created a Web app with instructions, registration, and login for participants, who digitized building outlines from open aerial photographs. Again this used existing data infrastructure, so could be quickly and cost effectively deployed.
The first participant to achieve a 75 percent or better quality/trust score was awarded the point for that building, and the participant with the greatest number of points won. The competition ran for a month and generated 18,789 building footprints, which were integrated into the relevant council databases and OpenStreetMap. The winning kids got iPads and money for their school.
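As a rough illustration of the scoring rule (the actual trust-scoring algorithm was the subject of the research and is not reproduced here), a first-past-the-threshold competition can be expressed in a few lines of Python; the team names and scores below are invented.

```python
# Illustrative scoring: the first submission for a building that clears the
# 75 percent quality threshold wins that building's point; most points wins.
from collections import Counter

THRESHOLD = 0.75

def score_competition(submissions):
    """submissions: iterable of (timestamp, participant, building_id, quality),
    assumed pre-sorted by timestamp."""
    awarded = {}  # building_id -> participant who first cleared the threshold
    for _, participant, building_id, quality in submissions:
        if quality >= THRESHOLD and building_id not in awarded:
            awarded[building_id] = participant
    return Counter(awarded.values())

submissions = [
    (1, "team_kea", "bldg_001", 0.81),
    (2, "team_tui", "bldg_001", 0.90),  # too late: point already awarded
    (3, "team_tui", "bldg_002", 0.60),  # below the quality threshold
    (4, "team_tui", "bldg_002", 0.78),
]
print(score_competition(submissions).most_common())
# [('team_kea', 1), ('team_tui', 1)]
```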
We did this as an experiment. It worked and gave us great data. Just as importantly, we got a group of kids, who might not have considered tech careers, engaged with spatial and open data. And who knows, maybe they’ll be part of the next generation of crisis mappers and disaster recovery innovators for events in other parts of the world.
What was Learned?
These experiences over the phases of disaster response, recovery and rebuild taught me the following.

The Importance of Data Infrastructure

  1. Government policy and recovery decisions rely on sharing huge amounts of data between organisations, often at great pace.
  2. Disaster responses and recoveries are about people and things at places, so you need good spatial data infrastructure and open data policies to share non-personal data, and you need good privacy and data aggregation/data protection mechanisms to manage the use of personal data.
  3. You can improve data infrastructure on the fly, as you go, BUT the better data infrastructure you have before a disaster, the easier it will be.
  4. The fundamentals of good data infrastructure and government policy such as foundational spatial data, technical standards, and open data licensing NEED to be in place before disaster strikes. Preparedness here is as important as building standards, lifeline infrastructures, and trained emergency responders.

Enabling Innovation

  1. As Dave Snowden says, the necessary preconditions for innovation are starvation, pressure, and perspective shift. Decision makers move from a peacetime view of “it has to be perfect and new approaches are risky” to “just get it done and use what works”.
  2. That means you’re under pressure, without certainty about exactly what you’re building, and with no time to plan it to the nth degree. Experience with agile and iterative approaches helps here: you deploy quickly, get feedback from real users, and improve the solution. As we showed, it’s possible for government agencies to be agile.
  3. If you let volunteers and young people contribute via crowdsourcing in a disaster recovery, and coordinate their efforts with the official response agencies, you can get the best of both worlds: innovation and agility, happening in harmony with safety and control.
  4. These lessons apply in non-disaster situations – building out core data sharing infrastructure (the technology and the policies) and having people able to experiment and innovate on top of it is a big part of open government movements worldwide. There are huge benefits in ‘everyday’ policy and management contexts.

Attribution and References
More details on the examples above can be found in this case study: http://odimpact.org/static/files/case-studies-new-zealand.pdf
More information on the CERA Spatial Data Infrastructure can be found in this whitepaper by my colleagues who led much of the GIS and data work at CERA: http://www.stratsim.co.nz/s/The-CERA-Spatial-Data-Infrastructure-SDI-whitepaper.pdf
Information on the Forward Works Viewer: http://www.linz.govt.nz/news/2014-07/online-tool-enhances-canterbury-rebuild
Information on the ‘Building Our Footprints’ competition: http://www.canterburymaps.govt.nz/buildingourfootprints/
Other Reading
Open Data for Resilience Initiative Policy Note and Principles: https://www.gfdrr.org/sites/default/files/publication/OpenDRI%20Policy%20Note.pdf
Code for Resilience: http://codeforresilience.org/

Categories
GovLab Blog Open Data 500

The GovLab Embarks to Expand Open Data 500 Study in Collaboration with Columbia Business School

Which companies are using Open Data, and how are they related?

Two years ago the GovLab developed the first-ever census of open data companies in the US: the Open Data 500. Today, in collaboration with the Graduate Business School of Columbia University, the GovLab is launching the next iteration, which has two goals. One, we will update and expand the Open Data 500 company profiles. Two, we will solicit new insights through additional survey questions, with the goal of increasing our understanding of the effect of open data on generating new businesses and its impact on the entrepreneurial ecosystem of startups, investors and capital.
CONTEXT: As of 2016, the U.S. Government has freely released over 130,000 datasets on topics ranging from consumer complaints to trade to food access. Despite this increased supply of open data, little is known about the demand side: who is using the data for economic value creation, and whether and how open data is fueling new areas of entrepreneurship.
OPEN DATA 500: In 2014, GovLab launched the Open Data 500 (OD500), the first comprehensive study to identify and analyze U.S. companies that are using open government data to develop new products and services. Our goal was to identify at least 500 companies that use open data as a key business resource. The result was the first-ever census of open data companies in the US. The Open Data 500 U.S. has since surpassed that initial goal and continues to grow, now approaching 700 companies. Leveraging the OD500 dataset, the GovLab also completed an additional study in 2015 of how small- and medium-sized enterprises are using open data as a business asset.
The OD500 Global Network: Several international partners have joined the GovLab to conduct similar efforts in their own countries within the context of the OD500 Global Network. Each partner organization uses the OD500 infrastructure and methodology, enabling them to analyze the use of open data in their country in a manner that is both globally comparative and domestically specific. The network consists of Australia, Canada, Italy, Mexico and South Korea. If you have an interest in joining the Network, contact us at: opendata500@thegovlab.org.
NEW AND EXPANDED STUDY: Since the initial release of our findings we have seen a rapid increase in data supply and use. To take stock of the latest developments and to increase our understanding of the impact of open data across the larger entrepreneurial community, the GovLab, in collaboration with Columbia Business School, is revisiting the existing OD500 dataset. During the summer of 2016, companies currently listed in our database will be sent a short survey. In addition, we hope to identify additional companies to add to our growing list of open data companies.
Led by Sheena S. Iyengar, S. T. Lee Professor of Business at Columbia Business School, and Patrick Bergemann, a Postdoctoral Research Scholar, the survey expansion takes place within the mandate of the MacArthur Research Network on Opening Governance.
Professor Iyengar: “Despite the release of tens of thousands of government datasets, there is currently no understanding of the impact this information is having on innovation, entrepreneurship and the economy in general. Through the expansion of the Open Data 500, we can finally begin to assess these potentially large effects and connect public policy with real economic outcomes.”
“The original Open Data 500 study showed that open data is already a major national resource and business driver. With our survey expansion, we want to better understand how and under what conditions open data really works. We plan to share insights with government, businesses and policymakers in order to improve the release of more and better data,” says Stefaan Verhulst, co-Founder and Chief Research and Development Officer of the GovLab at NYU Tandon School of Engineering, where he is responsible for building a research foundation on how to transform governance using advances in science and technology.
HOW YOU CAN HELP:
Does your company use open government data as a key business resource? If so, please contact us at opendata500@thegovlab.org. We are interested in hearing from you and including your company in our directory of open data companies.
If you are an open data researcher and/or you have a question you would like to pose to open data companies, please write to us at opendata500@thegovlab.org.
The GovLab is an action research organization based at NYU Tandon School of Engineering with a mission to improve people’s lives by changing the way we govern. Our goal is to strengthen the ability of institutions – including but not limited to governments – and people to work more openly, collaboratively, effectively and legitimately to make better decisions and solve public problems.

Categories
GovLab Blog Ideas Lunch

The Finance Innovation Lab – A Strategy for Systems Change

In the wake of the financial crisis of 2008, the effects of which are still being felt today, how can innovators from both sides of the financial system work toward reform?
Last Monday, as part of The GovLab’s Ideas Lunch series, Rachel Sinha from The Finance Innovation Lab in the UK tackled this question head-on. Her organization brings together innovators from a variety of sectors, including civil society groups, government, business and mainstream financial firms, all determined to transform current financial systems for the good of society.
In essence, The Finance Innovation Lab tries to capture the revolutionary spirit that emerged following the financial crisis in order to build communities that create change in practical and scalable ways. Sinha highlighted that this process of networking and community building was key to creating collaborative solutions to large, embedded problems within finance and other sectors.
The Finance Innovation Lab developed as a partnership between the World Wildlife Fund (WWF) and the Institute of Chartered Accountants. Despite their differences, the two organizations were united by a common desire to create a more sustainable and democratic financial system, one that takes into account the needs of both people and the planet. The idea was to build upon the post-crisis momentum for change, crowdsource ideas from within the financial system, and develop these into implementable projects. The Finance Innovation Lab emerged to workshop and create these projects in pursuit of financial reform in the UK.
During her presentation, Sinha explained how she and the team behind the Finance Innovation Lab had to innovate their own organization before confronting innovation in the financial sector. By focussing on their strengths—in generating networks of motivated people, fostering conversations between activists within and outside of the finance sector, and empowering citizens—her team was able to better articulate the Finance Innovation Lab’s mission and harness their leadership potential in order to generate change. As a result, projects such as the Campaign Lab, a program which supports economic justice campaigners, have grown to be independent sites of change and innovation.

Key Takeaways

  1. Concentration of influence and power makes change difficult

The biggest barrier to change, explained Sinha, is the concentration of influence and power in centres that are often distant from the communities demanding reform. This can make transforming the status quo seem like an insurmountable task. But by identifying these centres of power, and engaging these sectors in open dialogue with communities and civil society groups, Sinha suggested it was possible to crowdsource ideas from those in power to spur change from within the system.

  2. In order to create change, we have to “think systemically”

Sinha argued that change is never achievable if actors are not organized as a system. By thinking systemically—uniting people to solve a problem, and leveraging existing communities—the spirit for change becomes a strategic and effective system capable of creating reform. As Sinha suggested, movements for change should “not just preach to the choir, but organize the choir.”

  3. Four Steps toward Reform and Innovation: Amplify, Demonstrate, Reform and Scale

Sinha outlined the four steps that form the Finance Innovation Lab’s strategy for innovation, used to design and develop its projects.

  1. Amplify: This involves bringing together diverse groups and building upon their ideas for change through dialogue and networking.
  2. Demonstrate: It is not enough simply to convene; innovation leaders also need to develop new tools and ways of working, and showcase the improvements these measures bring. This is fundamental to creating a sustainable strategy for innovation.
  3. Reform: Leaders must be strategic about what areas can be changed, and where to target their efforts. By being specific, and understanding limitations, reform can be more implementable and effective.
  4. Scale: It is important to build strategies as the project grows, ensuring it remains dynamic and adaptable.

By building upon these core principles, the Finance Innovation Lab has become a space where radical ideas are created, articulated and developed so that perceptible change can take place. For Sinha, a key example of this iterative approach toward reform is the AuditFutures project. Launched by the Finance Innovation Lab, this project challenges existing approaches to accounting by running workshops and programmes to encourage practicing and future auditors to think about the social impact of their work.
In such a way, the Finance Innovation Lab has launched a variety of projects—some that falter, and others that flourish—to nurture innovation and radical reform in a variety of sectors of the finance community.

About Rachel Sinha

Rachel Sinha is an award-winning British social innovator. She co-founded The Finance Innovation Lab with four other team members, and was named by the Guardian newspaper as one of 50 Radicals ‘changing the face of the UK for the better’. Sinha is an established thought leader in the field of social innovation and systems change and the co-author of Labcraft, a book on social labs. She has written for publications including HBR and Fast Company, documented the work of systems leaders with Oxford University, and published her experiences of running a lab in A Strategy for Systems Change.

About the Finance Innovation Lab

The Finance Innovation Lab, first convened by The Institute of Chartered Accountants and the World Wildlife Fund, brought together accountants, activists, investors and citizens to work on transforming the future of finance. It launched several successful organizations as part of this strategy, from Campaign Lab, an accelerator program for economic justice campaigners, to the Natural Capital Coalition, funded by the Rockefeller Foundation and the World Bank, which developed a protocol for business to account for natural capital.

Categories
GovLab Blog

Building Data Responsibility into Humanitarian Action

Next Monday, May 23rd, governments, non-profit organizations and citizen groups will gather in Istanbul at the first World Humanitarian Summit. A range of important issues will be on the agenda, not least of which is the refugee crisis confronting the Middle East and Europe. Also on the agenda will be an issue of growing importance and relevance, even if it does not generate front-page headlines: the increasing potential (and use) of data in the humanitarian context.
To explore this topic, a new paper, “Building Data Responsibility into Humanitarian Action,” is being released today, and will be presented tomorrow at the Understanding Risk Forum. This paper is the result of a collaboration between the United Nations Office for the Coordination of Humanitarian Affairs (OCHA), The GovLab (NYU Tandon School of Engineering), the Harvard Humanitarian Initiative, and Leiden University Centre for Innovation. It seeks to identify the potential benefits and risks of using data in the humanitarian context, and begins to outline an initial framework for the responsible use of data in humanitarian settings.
Both anecdotal and more rigorously researched evidence points to the growing use of data to address a variety of humanitarian crises. The paper discusses a number of case studies, including the use of call data to fight malaria in Africa; satellite imagery to identify security threats on the border between Sudan and South Sudan; and transaction data to increase the efficiency of food delivery in Lebanon. These early examples (along with a few others discussed in the paper) have begun to show the opportunities offered by data and information. More importantly, they also help us better understand the risks, including and especially those posed to privacy and security.
One of the broader goals of the paper is to integrate the specific and the theoretical, in the process building a bridge between the deep, contextual knowledge offered by initiatives like those discussed above and the broader needs of the humanitarian community. To that end, the paper builds on its discussion of case studies to begin establishing a framework for the responsible use of data in humanitarian contexts. It identifies four “Minimum Humanitarian standards for the Responsible use of Data” and four “Characteristics of Humanitarian Organizations that use Data Responsibly.” Together, these eight attributes can serve as a roadmap or blueprint for humanitarian groups seeking to use data. In addition, the paper also provides a four-step practical guide for a data responsibility framework (see also earlier blog).
The potential of data and information in humanitarian settings is only beginning to be understood. It will be some time before we more fully grasp under what conditions data is most effective, and what kinds of social, cultural and technological supports can best realize its potential. Yet without any rigorous assessment of the risks involved, the potential of data to improve people’s lives may be limited, if not negative. This paper can be seen as a step toward more evidence-based and responsible data use. It is targeted at humanitarian professionals, policymakers, journalists and average citizens interested in humanitarian issues. It is our hope that its release will mark the start of a discussion that will translate into more effective and efficient humanitarian actions on the ground, around the world.
Full Paper: Building Data Responsibility into Humanitarian Action