Report
Responsible Tech
April 12, 2022

Locating Responsibility in Socio-Technical Systems

The first in a series of working papers examining the different meanings and practices of responsibility in science and technology. This paper examines emerging narratives around responsible tech; the persistent and deliberate responsibility gaps in socio-technical systems; and the need for an ethic of care and protection.

In the wake of the Cambridge Analytica scandal, many tech companies stepped forward to make amends for past wrongdoing. Recent discussions on responsible tech can be read in the same register of regaining a (lost) social license to operate. Concerns remain, however, about the sincerity of these efforts by tech companies, particularly since dangerous use cases of digital technologies persist, social-good initiatives are muddled with commercial ventures, and some of these amends are merely optics to stave off state-led regulation.

This paper examines the use of the language of responsibility and ethics by technology companies and cautions that it could be a way for them to avoid top-down regulatory measures. But it may also provide an opportunity for a more meaningful discussion of the values and capacities needed to steer technology trajectories toward the public good. The paper thus examines different meanings and traditions of responsibility, as a first step toward articulating a framework for responsible technology.

The paper explores two key conceptualizations of responsibility, individual accountability and collective answerability, to understand the opportunities and limitations they present in the context of complex technical systems such as artificial intelligence (AI). With regard to the former, individual accountability, scholars have observed that a ‘responsibility gap’ arises from the discord between the requirements of legal culpability (individual agency) and the nature and affordances of AI systems. We posit, however, that the gap is not (always) a bug but a feature of the system. Corporations avoid responsibility by organizing themselves in ways that deflect it away from the centres of decision-making.

In response, we suggest that viewing responsibility collectively is better attuned to the reality that complex technical systems implicate a network of actors. The movement from individual accountability to collective responsibility also opens the possibility of a more relational approach to responsibility, as well as of foregrounding the ‘moral patient’. Under the individual accountability model, much of the focus is on retribution rather than answerability to the moral patient. In the absence of fiduciary duties, we argue that theories of collective responsibility and answerability can inject an ethic of care into socio-technical systems, ultimately helping to foreground those impacted by the effects of technology, both in the present and the future.


