How can disruptive algorithms be tamed?

Alberta School of Business researcher shows how to reduce the undesirable effects of algorithms that affect so many aspects of our lives.

Algorithms are increasingly pervasive in modern society, but what can we do when they have troubling consequences? How can we control them when their workings are so often obscure? The inner workings of machine-learning algorithms, in particular, are frequently opaque even to their designers and curators, let alone to regulators and users. In a recent study, Alberta School of Business researcher Chris Steele and his co-authors looked into these questions.

They examined high-frequency trading algorithms in the U.S. securities market—which gave some traders “unfair” advantages over retail investors—and explored how these algorithms were assimilated into the market and regulatory structure during 2009–2016.

Limiting algorithms’ actions through “envelopes”

Drawing on this case, and on the work of philosopher Luciano Floridi, Steele and colleagues argue that disruptive algorithms can be tamed by crafting an “envelope” of technologies and social practices that contains them and limits their freedom of action. Much as a dishwasher’s casing keeps water from flooding onto the kitchen floor, enveloping the machine in a physical boundary regardless of its exact inner workings, an evolving, learning algorithm can be tamed by shaping its interface with the outside world: by delimiting what it takes in and which of its outputs are accepted, rather than by intervening directly in its code.

Steele suggests that constructing an envelope involves three interrelated processes:

  • Morally problematizing specific impacts of the algorithm;
  • Identifying specific inputs and outputs that contribute to problematic impacts; and
  • Policing the actors that provide and receive these inputs and outputs.

AT A GLANCE

  • Three-layered approach: This research found that “institutional custodians”—people who wanted to protect valued parts of the status quo—were able to contain high-frequency trading algorithms by building normative, governance, and practice-based layers of constraints.
  • Layers collaborate: These layers worked together to restrict the influence of high-frequency trading algorithms.
  • Create boundaries: The process of creating multiple, layered boundaries around algorithms is referred to as constructing an “envelope”; this delimits algorithms’ interactions with the environment and helps contain their impact.

What does this look like in practice? 

To bring this to life, let’s consider an imaginary example: an attempt to construct an envelope around controversial facial recognition algorithms.

In this case, the first process would involve the identification of the key negative impacts we want to limit—which might include intrusion into personal privacy, racial bias, potential for hacking or abuse, or potential for state control of individuals. Activists and politicians can play a role in identifying these issues and ensuring they receive sufficient social and political prominence to motivate action.

The second process would involve identifying the specific inputs and outputs that fuel these negative impacts. For example, mugshots, social media posts, CCTV feeds, or vending machine footage could be concerning inputs for those worried about privacy, while phone access or entries into police databases for pre-emptive surveillance could be concerning outputs. This process may call for people with more specific technical expertise.

The third process would involve devising laws, technologies, and practices that limit an algorithm’s access to problematic inputs, as well as the acceptance of its problematic outputs.
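To make the technical side of this concrete, here is a minimal, hypothetical sketch of what an envelope might look like in code. It is not drawn from the study: the face-matching model is treated as an opaque black box, and the function names, allowed sources, and blocked uses are purely illustrative assumptions.

# Hypothetical envelope around an opaque face-matching algorithm.
# The model's internals are never inspected; only its inputs and outputs are policed.

ALLOWED_INPUT_SOURCES = {"consented_enrollment_photo"}          # e.g. no scraped social media or CCTV feeds
BLOCKED_OUTPUT_USES = {"preemptive_watchlist", "mass_surveillance"}

def opaque_face_matcher(image_bytes):
    # Stand-in for the inaccessible algorithm; its inner workings are irrelevant to the envelope.
    return {"match": False, "confidence": 0.0}

def enveloped_match(image_bytes, input_source, intended_use):
    # Delimit inputs: refuse data from problematic sources before the model ever sees it.
    if input_source not in ALLOWED_INPUT_SOURCES:
        raise PermissionError(f"input source '{input_source}' is outside the envelope")
    # Delimit outputs: refuse to release results for problematic downstream uses.
    if intended_use in BLOCKED_OUTPUT_USES:
        raise PermissionError(f"intended use '{intended_use}' is outside the envelope")
    result = opaque_face_matcher(image_bytes)
    # Policing layer: record every crossing of the boundary so custodians can audit it.
    print(f"audit: source={input_source}, use={intended_use}, match={result['match']}")
    return result

The point of the sketch is that none of these checks depend on understanding how the matcher works internally; they only constrain what it is fed and what the world will accept from it.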

As Steele puts it, “when we alter the environment in which algorithms operate and evolve, we can funnel their impacts on the world—taming them, by reducing the consequences we find most troubling”.

The construction of such envelopes can be a useful tactic for all kinds of “institutional custodians”—those who seek to protect valued institutional or social structures—as they grapple with algorithms, or as they contend with other challenges. And, as algorithms become omnipresent and increasingly influential, we may all find things we wish to preserve through these kinds of custodial efforts.

None of this makes controlling algorithms suddenly easy, but the key advantage of this approach is that it does not call for a deep understanding of the rather inaccessible inner workings of algorithms. Instead, it focuses on the world in which the algorithms operate, and on things that we can understand and influence—even without deep technical expertise in algorithms themselves.
