Date of Award

Spring 2022

Document Type

Open Access Dissertation

Department

Philosophy

First Advisor

Brett Sherman

Abstract

As people turn to AI-driven technologies for help with everything from meal planning to choosing a mate, it is increasingly important for individuals to gauge the trustworthiness of available technologies. However, most philosophical theories of trustworthiness focus on interpersonal trust and are inappropriate for non-agents. What, then, does it mean for non-agents such as AI-driven technologies to be trustworthy? I distinguish two different forms of trustworthiness: naive trustworthiness and robust trustworthiness. An agent is naively trustworthy to the extent that it would be likely to meet the truster’s expectations with respect to a given domain. An agent is robustly trustworthy to the extent that it would be likely to meet the truster’s needs with respect to a given domain. I argue that it is possible for AI-driven technologies to be both naively and robustly trustworthy, but this trustworthiness is not a stable feature of trustees. Instead, it is relative to a truster’s expectations and vulnerabilities.

In chapter one, I argue that current accounts of trustworthy AI obscure the dynamics of trust relationships that are important for individual decision-making. I then argue that unquestioning trust characterizes many of the risks associated with human-AI relationships and sets the bar for trustworthiness at the appropriate level.

In chapter two, I argue that vulnerability both precedes and follows from trust relationships. I demonstrate how these different sources of vulnerability create a tension for people seeking to be trustworthy. Sometimes, the actions that mitigate the vulnerabilities that follow from trust reinforce the vulnerabilities that precede trust. Does the trustworthy agent do what they have been trusted to do, even if that reinforces harmful vulnerabilities that motivated the trust? Or does the trustworthy agent break trust when that trust is ill-conceived? In addressing this tension, I distinguish two kinds of trustworthiness. Naive trustworthiness requires an agent to act as entrusted, regardless of the context or consequences. Robust trustworthiness requires an agent to act so as to minimize vulnerabilities that both precede and follow from trust relationships when those vulnerabilities are harmful.

In chapter three, I argue that robust trustworthiness is not a stable feature, but is sensitive to the particular trust-context. When people trust, that trust is limited to a particular domain, determined by the truster’s expectations of the trustee. Sometimes, however, a truster’s expectations are misplaced or too vague, and meeting them may harm the truster. In cases like these, the robustly trustworthy agent may break trust in order to avoid such harm. However, it is not an easy matter to determine when the expectations comprising a trust domain are misplaced or otherwise inappropriate. In cases where the truster and trustee disagree regarding what the appropriate expectations are and how they should be met, I argue that it is not necessarily the case that either is making a mistake. I call these cases “faultless broken trust”. When faultless broken trust occurs, the truster should not continue trusting the trustee. The robustly trustworthy agent, then, is trustworthy in contexts where trust breaking is not faultless.

In chapter four, I demonstrate how the concepts of naive trustworthiness and robust trustworthiness apply to human relationships with AI-infused technologies. I argue that black-box AI technologies pose a particular problem for naive trustworthiness. I then argue that robustly trustworthy AI must be aimed at legitimate needs and must not require that people adapt to the limitations of AI technologies in harmful ways.

Rights

© 2022, Elizabeth K. Stewart

Included in

Philosophy Commons
