As Edward Tufte reportedly said, only two industries refer to their customers as “users”: drugs and computers.

More and better data for decision-making is a rallying cry for the data for development community. But what happens when decision-making power is co-opted by a small number of corporate actors who wield technology without accountability? Earlier this month, we wrote about the difficulty of regulating Facebook due to its role as a de facto public good and public utility. But there are other, equally challenging issues in regulating Big Tech that must be addressed, namely the role of technology companies in influencing and manipulating human behavior through data and in further entrenching systemic inequalities.

Facebook and other technology companies create feedback loops between users’ data and decision-making via AI: a continuous stream of algorithmic decisions defines users’ realities and drives their choices. As mathematician Cathy O’Neil says, “Algorithms are opinions embedded in code.” They are subject to the goals and biases of their creators. This requires us to think differently about the steps in the data value chain and the stakeholders involved. If algorithms are making decisions that affect users, shouldn’t such systems be created in the best interest of users rather than companies? Yet we’ve seen recently that decision-makers at Facebook with access to data and insights have taken steps not only to ignore them but to hide them from public view. What level of transparency or accountability in data and decision-making can users reasonably demand, and how can they demand it in the face of increasingly powerful companies?

This sophisticated AI and machine learning is fueled by both data and digital colonialism. Computer processing power has increased at a rate unmatched by any previous technology, allowing data to be continuously extracted from our daily lives, driven by the opportunity to increase company profits. This level of data extraction and processing takes physical, financial, and human resources to maintain. That power is not evenly distributed: the U.S. and China, and companies based in those two countries, dominate the technology supply, preventing smaller countries from developing their own digital economies. This model of extracting and profiting from user data is enabled by a digital system that was built on, and continues to drive, global inequalities. While the negative impacts of the profits-over-well-being model are evident globally, the existing digital and data colonial architecture puts historically marginalized communities at greater risk.

Because the development community needs to partner with technology companies, we cannot ignore these issues. We’re at a turning point, and change is clearly needed. The questions are what must change, how, and who is responsible. More and better data are key to progress toward the Sustainable Development Goals, but access alone does not guarantee data use, as we’ve written about before in the Digest. Development practitioners partnering with technology companies must take an honest look at the data practices that drive these companies’ business models and ask: What systemic harms does this model pose, and how do we balance leveraging the benefits of partnership with advocating for changes in how data are collected and used? What kinds of regulation can ensure responsible data collection and use in the context of unprecedented technological power, global reach, and potential for manipulation of human behavior?

We can’t simply apply the blueprints used to regulate early industrial economies to new technologies. This is a problem of global scale, and these questions require collective action to address. Harnessing the good that such technologies make possible is also an international concern. Within the Data Values Project, we can start by taking an honest look at the trade-offs involved in achieving the SDGs and at the technologies, data processes, and motivations we rely on to do so.