Document Type

Article

Subject Area(s)

Computer Science and Engineering

Abstract

We rely on computers to control our power plants and water supplies, our automobiles and transportation systems, and soon our economic and political systems. Increasingly, software agents are enmeshed in these systems, serving as the glue that connects distributed components. Clearly, we need mechanisms to determine whether these agents are trustworthy.

What do we need to establish trust? Agents are often characterized by features such as autonomy, sociability, proactiveness, and persistent identity. This last feature is key in determining trust. When agents operate over an extended period, they can earn a reputation for competence, timeliness, ease of use, and trustworthiness, which is something ephemeral agents cannot do. Along with persistence, we need a reliable way to identify an agent and ensure that its true identity is not concealed.

How can we assess an agent's trustworthiness? As with other aspects of agents and multiagent systems, we can take our cue from the human domain. Our reputations for trustworthiness are determined and maintained by the people we deal with. Analogously, a software agent's reputation will reside within the other agents with whom it interacts. For some agent interactions, such as those involving commerce, agents will simply inherit the reputation of their human owner, sharing, for example, their owner's credit rating and financial capability. For other types of interactions, such as those involving information gathering, an agent will determine its own reputation through its efforts at gathering and distilling information. An agent with a reputation for conducting thorough searches will be trusted by other agents wishing to use its Web search results.
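To make the idea of reputation "residing within the other agents" concrete, the following is a minimal sketch of a peer-maintained reputation record, in which each agent keeps its own ratings of the agents it has dealt with rather than consulting a central registry. All names here (ReputationLedger, record_interaction, reputation_of) are illustrative assumptions, not part of the article.

# Hypothetical sketch: each agent privately tracks how peers have behaved,
# so a peer's reputation is distributed across its interaction partners.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class ReputationLedger:
    """One agent's private record of other agents' past behavior."""
    # Maps a peer's persistent identity to the scores this agent assigned it.
    scores: dict[str, list[float]] = field(default_factory=dict)

    def record_interaction(self, peer_id: str, score: float) -> None:
        """Store a rating (0.0 = untrustworthy, 1.0 = fully trustworthy)."""
        self.scores.setdefault(peer_id, []).append(score)

    def reputation_of(self, peer_id: str) -> float | None:
        """Average of this agent's own experience with the peer, if any."""
        history = self.scores.get(peer_id)
        return mean(history) if history else None


# Example: a search agent earns a reputation for thorough searches
# with the agents that use its results.
ledger = ReputationLedger()
ledger.record_interaction("search-agent-42", 0.9)
ledger.record_interaction("search-agent-42", 0.8)
print(ledger.reputation_of("search-agent-42"))  # approximately 0.85

Because each ledger reflects only first-hand experience, an agent's overall reputation emerges from many such records held by its peers, which mirrors how human reputations are maintained by the people we deal with.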

Rights

http://ieeexplore.ieee.org/servlet/opac?punumber=4236

© 2001 by the Institute of Electrical and Electronics Engineers (IEEE)
