Key Issues for Agent Technology

Several important areas must be addressed before a rich and robust agent technology can exist.1 Currently, one of the most important areas for standardization is agent communication. If every designer developed a different means of communicating between agents, our agent systems would be worse than a tower of Babel: not only would the content and meaning of a communication likely differ, but the means of communication itself could vary widely. Agent mobility is also important if we wish to benefit from relocating agent processing. Security must be addressed as well if we are to ensure that both agents and their environment are free from danger. This column discusses each of these issues and indicates which standard services and specifications might support them.

AGENT COMMUNICATION LANGUAGES
When two people want to communicate they need to choose a common language and interchange medium—though even then misunderstandings can occur. Agents, too, need a standard language and a set of conventions that support them in identifying, connecting with, and exchanging information with other agents. However, for agents, it is even more important to minimize any possible misunderstandings; otherwise, our IT systems could get very confused. Agent communication languages (ACLs) enable agents to communicate in a clear and unambiguous manner. By standardizing these ACLs, different parties can build their agents to interoperate both intra- and intercompany.

An example of a simple point-to-point communication between two agents is illustrated in Figure 1. Here, one agent is asking another for the current price of IBM stock. This message, or communication act, specifies an ask speech act, the sender's identity (Joe), the message content (PRICE IBM ?price), the address of the communication (stock-server), the name of the reply expected from the responding agent (IBM-stock), the language in which the content is specified (LRPROLOG), and the set of the agreed-upon terms, or ontology, that will be used in the content exchange (NYSE-TICKS). The responding agent replies with the requested stock price, along with its associated ACL parameters.
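
To make the parameters in Figure 1 concrete, the sketch below represents the same communication act as a simple data structure. This is purely illustrative: the field names loosely follow the KQML-style parameters described above, and the ACLMessage class and the reply value are invented for this example rather than taken from any ACL specification.

    from dataclasses import dataclass

    @dataclass
    class ACLMessage:
        """Illustrative container for the ACL parameters shown in Figure 1."""
        performative: str      # the speech act, e.g., "ask" or "reply"
        sender: str            # identity of the sending agent
        receiver: str          # address of the receiving agent
        content: str           # the actual question, reply, or request
        language: str          # registered syntax of the content
        ontology: str          # agreed-upon vocabulary used in the content
        reply_with: str = ""   # label the responder should echo back
        in_reply_to: str = ""  # label linking a reply to its request

    # The request from Figure 1: Joe asks the stock server for IBM's price.
    ask = ACLMessage(
        performative="ask",
        sender="Joe",
        receiver="stock-server",
        content="(PRICE IBM ?price)",
        language="LRPROLOG",
        ontology="NYSE-TICKS",
        reply_with="IBM-stock",
    )

    # A matching reply; the price shown is an invented, illustrative value.
    reply = ACLMessage(
        performative="reply",
        sender="stock-server",
        receiver="Joe",
        content="(PRICE IBM 14.25)",
        language="LRPROLOG",
        ontology="NYSE-TICKS",
        in_reply_to=ask.reply_with,
    )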

Figure 1. Simple point-to-point communication using an ACL.

Currently, two primary standards for ACLs exist:

  • Knowledge Query and Manipulation Language (KQML) (http://www.cs.umbc.edu/kqml)—network environments that support plug-and-play processes are still quite rare, and most distributed systems are implemented with ad hoc interfaces between components. KQML is a language and set of conventions that supports network programming specifically for knowledge-based systems and agents. It was developed by the ARPA-supported Knowledge Sharing Effort.
  • Foundation for Intelligent Physical Agents (FIPA) (http://www.fipa.org)—FIPA2 has been working to develop and promote standardization in the area of agent interoperability since 1996. FIPA's ACL is a high-level agent communication language that is based on speech acts and is perceived by many as an improvement on KQML.

The speech acts that may be specified (such as ask and reply in Figure 1) are defined by the ACL. Some examples of KQML speech acts (called performatives in KQML) include:

  • achieve—A wants B to perform a certain task.
  • advertise—A registers as suitable for a particular request.
  • ask—A requests information from B (ask-one or ask-all).
  • broker—A wants B to find help to answer something.
  • delete—A wants B to remove specific facts from its knowledge base.
  • recommend—A wants the name of an agent that supplies an answer.
  • recruit—A wants B to request an agent to perform a given task.
  • reply—A answers B.
  • sorry—A cannot provide the requested information.
  • subscribe—A wants any messages from B when they occur.
  • tell—A sends information.

ACLs provide a high-level format for expressing communication acts among agents. The communication detail, however, is embodied in the content parameter, where the agent expresses the actual question, reply, or request. The format of the content parameter must be agreed upon by both sender and receiver(s); otherwise, effective communication will not be possible.

The language parameter helps to some extent, because it specifies a registered syntax form. For example, if Prolog were specified, the agent would know that the rules for content syntax must conform to the Prolog language. The Knowledge Interchange Format (KIF) is one standard for agent language syntax; it was developed under the same ARPA-supported Knowledge Sharing Effort that produced KQML. Another emerging approach is to use and extend XML.
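
As a small illustration of the XML approach, the sketch below renders the content of the query from Figure 1 as an XML fragment using Python's standard library. The element and attribute names are invented for this example; a real system would follow whatever XML schema the communicating parties had agreed upon.

    import xml.etree.ElementTree as ET

    # Hypothetical XML rendering of the content "(PRICE IBM ?price)".
    # The element and attribute names are invented for illustration only.
    query = ET.Element("query", attrib={"ontology": "NYSE-TICKS"})
    ET.SubElement(query, "predicate").text = "PRICE"
    ET.SubElement(query, "symbol").text = "IBM"
    ET.SubElement(query, "variable").text = "price"

    print(ET.tostring(query, encoding="unicode"))
    # <query ontology="NYSE-TICKS"><predicate>PRICE</predicate><symbol>IBM</symbol><variable>price</variable></query>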

ONTOLOGY COMMUNICATION
The syntax rules of a language are not enough to ensure clear communication; an agreed-upon set of terms is also required. Certainly, the syntax would define some terms, but there are also user-defined terms. For example, a request for the number of clients that Fujitsu has could be expressed as: COUNT FUJITSU client ?integer. The syntax is well-defined, but if one agent uses the term "client" and the other only knows "customer," the two will not communicate effectively—even though both know that something is supposed to be tallied for Fujitsu. Agents can have different terms for the same concept and identical terms for different concepts. A common ontology, then, is required for representing the knowledge from various domains of discourse. The purpose of the ontology parameter in an ACL is to define the set of terms that will be used in an agent communication.
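
A minimal sketch of the problem, assuming the two agents agree to normalize their terms against a small shared ontology before interpreting content, is shown below. The vocabulary and function names are invented for illustration.

    # Map an agent's local terms onto the shared ontology's preferred terms
    # before interpreting message content. The vocabulary is invented here.
    SHARED_ONTOLOGY = {
        "customer": "customer",   # canonical term
        "client": "customer",     # local synonym used by some agents
        "patron": "customer",
    }

    def normalize(term: str) -> str:
        """Translate a possibly local term into the agreed vocabulary."""
        try:
            return SHARED_ONTOLOGY[term.lower()]
        except KeyError:
            raise ValueError(f"term {term!r} is not in the agreed ontology")

    print(normalize("client"))   # -> customer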

The need for terminology standards is not new; it is a key requirement for EDI and KQML alike. Because agent communications depend on ontology, such standards are now more critical than ever. Consequently, many organizations and consortia are being set up to establish industry vocabularies, such as RosettaNet (http://www.rosetta.net), BizTalk (http://www.biztalk.org), CommerceNet (http://www.commerce.net), and Ontology.Org (http://www.ontology.org). Common ontology representations use UML and XML schema.

MESSAGE TRANSPORTATION MECHANISM
Agent communication can be achieved in two ways:

  1. Directly with each other (logical connection), which provides flexibility and freedom but bypasses control and security.
  2. Through the agent platform (physical connection), which resolves control and security problems but requires logical communications to be physically resolved via the agent-base software.

Most agent systems use the second option (see Figure 2), because the agent platform enforces control and security at a system level. With this approach, agents retain the look and feel of addressing one another directly, even though all communications are routed through the agent platform, including communications to traditional, or legacy, systems.
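
The sketch below illustrates this second option under simple assumptions: agents address one another by name, but every message physically passes through the platform, which is the natural place to apply control and security checks. The AgentPlatform and Agent classes and their methods are invented for this illustration.

    # Agents appear to address each other directly, but the platform performs
    # the physical delivery. All class and method names are invented.
    class AgentPlatform:
        def __init__(self):
            self._agents = {}

        def register(self, name, agent):
            self._agents[name] = agent

        def deliver(self, sender, receiver, message):
            # A real platform would authenticate the sender and apply its
            # security policy here before forwarding the message.
            self._agents[receiver].receive(sender, message)

    class Agent:
        def __init__(self, name, platform):
            self.name, self.platform = name, platform
            platform.register(name, self)

        def send(self, receiver, message):
            # Looks like direct agent-to-agent addressing to the caller.
            self.platform.deliver(self.name, receiver, message)

        def receive(self, sender, message):
            print(f"{self.name} received {message!r} from {sender}")

    platform = AgentPlatform()
    joe = Agent("Joe", platform)
    server = Agent("stock-server", platform)
    joe.send("stock-server", "(PRICE IBM ?price)")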

Figure 2. Using the agent platform for communication and migration.3

In agent environments, messages should be schedulable as well as event-driven, and they can be sent in synchronous or asynchronous modes. Furthermore, the transportation mechanism should support unique addressing as well as role-based addressing (i.e., "white page" vs. "yellow page" addressing). Lastly, the transportation mechanism must support unicast, multicast, and broadcast modes, along with such services as nonrepudiation of messages and logging.
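
The addressing distinction can be sketched as follows, with a "white page" directory mapping unique agent names to addresses and a "yellow page" directory mapping roles to the agents that play them. The directory contents and function names are invented for illustration.

    # White pages: unique name -> transport address (entries are invented).
    white_pages = {
        "stock-server": "tcp://host-a:9001",
        "backup-quoter": "tcp://host-b:9001",
    }

    # Yellow pages: role -> names of agents advertising that role.
    yellow_pages = {"quote-provider": ["stock-server", "backup-quoter"]}

    def resolve_unicast(name):
        """Unique (white-page) addressing: one named recipient."""
        return [white_pages[name]]

    def resolve_multicast(role):
        """Role-based (yellow-page) addressing: every agent playing the role."""
        return [white_pages[n] for n in yellow_pages[role] if n in white_pages]

    def resolve_broadcast():
        """Broadcast: every registered agent."""
        return list(white_pages.values())

    print(resolve_multicast("quote-provider"))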

Although a standard message transportation mechanism does not yet exist for agent-based systems, comparable technology does exist in object-oriented form. With some enhancements for the requirements of agent-based environments, the following technologies could be used, directly or indirectly: CORBA, OMG Messaging Services, Java Message Service (JMS), RMI, DCOM, and Enterprise JavaBeans events.

AGENT INTERACTION PROTOCOLS
Agents can interact in various patterns called interaction protocols (which are also known as conversation or communication protocols). Each protocol is a pattern of interaction that is formally defined and abstracted from any particular sequence of execution steps.

Figure 3 depicts several interaction protocols involving multiple agents: requester, provider, and facilitator agents. The facilitator agent functions as a middleman. For example, in Figure 3(a) the facilitator receives a subscribe communication from a requester that wishes to receive messages on a particular topic. Any time a provider agent sends the facilitator a communication that fits the subscriber's topic, the facilitator passes it on.
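
A minimal sketch of this subscription pattern, with invented class and method names, might look like the following: requesters register interest in a topic, and the facilitator forwards any provider communication that matches it.

    from collections import defaultdict

    class Facilitator:
        """Middleman that forwards provider messages to topic subscribers."""
        def __init__(self):
            self._subscribers = defaultdict(list)   # topic -> delivery callbacks

        def subscribe(self, topic, deliver):
            self._subscribers[topic].append(deliver)

        def publish(self, topic, message):
            # Called when a provider sends a communication on this topic.
            for deliver in self._subscribers[topic]:
                deliver(message)

    facilitator = Facilitator()
    facilitator.subscribe("IBM", lambda msg: print("requester got:", msg))
    facilitator.publish("IBM", "(PRICE IBM 14.25)")   # illustrative value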

Figure 3. Interagent communication using facilitator agents.

The facilitator in the recruiter protocol in Figure 3(b) receives both recruit communications from requesters and advertise communications from providers. When the facilitator finds a match, it notifies the provider who then contacts the requester directly. In the broadcaster protocol in Figure 3(c), an agent requests the facilitator to broadcast a message to a number of agents.

Figure 4 illustrates a negotiation protocol, where a broker agent sends out invitations to bid on a job contract. If a provider agent wishes to participate in the negotiation, it can respond with a bid. Because many provider agents might respond, the broker agent has to decide which provider should be awarded the contract. Once the contract has been sent to the provider, the provider can choose to confirm. If the provider declines, the broker must choose another provider. Such a pattern could support various negotiation scenarios, such as ordering supplies, requesting equipment, or obtaining human resources. There are many patterns that provide basic communication protocols.
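
The flow of Figure 4 can be sketched as below. The function names, the bids, and the rule that the broker prefers the lowest bid are all assumptions made for illustration; the figure itself does not prescribe how the winning provider is chosen.

    import random

    def negotiate(job, providers):
        """Broker invites bids, awards the contract, and handles declines."""
        # 1. Invitation to bid: each provider may respond with a price or decline.
        bids = {name: make_bid(job) for name, make_bid in providers.items()}
        bids = {name: price for name, price in bids.items() if price is not None}

        # 2. Award the contract (lowest bid first, an assumption) until one confirms.
        for name, price in sorted(bids.items(), key=lambda item: item[1]):
            if confirm(name, job):
                return name, price
        return None   # no provider confirmed the contract

    def confirm(provider_name, job):
        # Stand-in for the provider's confirm/decline decision.
        return random.random() > 0.2

    providers = {
        "provider-a": lambda job: 120.0,   # illustrative bids
        "provider-b": lambda job: 95.0,
        "provider-c": lambda job: None,    # chooses not to participate
    }
    print(negotiate("order office supplies", providers))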

Figure 4. Agent interaction protocol for negotiation.

AGENT MOBILITY
Stationary agents exist as a single process on one host computer. Mobile agents can pick up and move their code to a new host where they resume execution. Mobile agents are able to change platforms and environments; stationary agents are not. From a conceptual standpoint, such mobile agents can also be regarded as itinerant, dynamic, wandering, roaming, or migrant. The rationale for mobility is the improved performance that can be achieved by moving the agent closer to the services available on the new host.

Figure 5. Stationary and mobile agents.3

Stationary agents exchange information over the network primarily by using remote procedure calls (RPCs), as illustrated in Figure 5(a). When a stationary agent requires processing on a different platform, it must employ the services of another agent. Here, a communication (or request) conveys the intention to invoke a specific operation (via an RPC). The operation is then executed and the result (or reply) is returned to the requesting agent. Using stationary agents, then

  • reduces the complexity required for mobility,
  • encourages specialization within platforms,
  • employs well-established protocols, and
  • supports closed-environment philosophy.

On the other hand, the stationary approach also

  • results in performance problems in those situations requiring high volume or frequency,
  • results in processing inefficiencies because having many specialized agents creates more work than having a single mobile agent, and
  • reduces effectiveness when a connection is lost.

In contrast, mobile agents exchange information over the network primarily by changing platforms and environments, using the remote programming (RP) technique. When a mobile agent requires processing on a different platform, it physically relocates to the desired server, as illustrated in Figure 5(b). This requires that all structural and behavioral properties of the agent be transferred during migration, and that any environmental differences be accommodated. The big issues here are how much time is required to prepare for migration, how much data is actually transferred, and the performance of the transfer communication.
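
The core of the migration step can be sketched as follows, using Python's pickle module as a stand-in for a real transfer mechanism. This is a deliberately simplified assumption: it captures only the agent's state, whereas a real mobile-agent system must also move or locate the agent's code on the destination host and reconcile environmental differences. The class and function names are invented.

    import pickle

    class MobileAgent:
        """Toy agent whose accumulated state must survive a move."""
        def __init__(self, task):
            self.task = task
            self.partial_results = []

        def run_step(self, data):
            self.partial_results.append(f"processed {data}")

    def migrate(agent):
        payload = pickle.dumps(agent)      # capture the agent's state
        # ... transfer 'payload' over the network to the destination host ...
        return pickle.loads(payload)       # resume the agent on the new host

    agent = MobileAgent("collect stock quotes")
    agent.run_step("NYSE feed")
    resumed = migrate(agent)
    print(resumed.partial_results)         # state survives the move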

Migration can be handled by the agent itself; while this reduces the complexity of the runtime environment, it increases agent complexity. Alternatively, migration can be transparent to the agent, which reduces agent complexity while increasing the complexity of the runtime environment. The advantages of mobile agents are that they:

  • Reduce network load
  • Reduce network-related delay
  • Reduce resource usage of clients
  • Enable distributed problem solving
  • Support asynchronous, autonomous processing
  • Promote reconfigurable or customized services
  • Make active behavior scenarios conceivable
  • Enhance decentralization options

The disadvantages of mobile agents are that they

  • involve a number of security issues, such as the identification and authentication of agents, protection from destructive agents, and assurance of an agent's willingness and ability to pay;
  • require transport/migration mechanisms be added to software environments, thus increasing their complexity;
  • have no industry standards for agent environments, migration approaches, or for measuring and billing resource consumption; and
  • have not yet been used in an environment containing a large number of mobile agents.

AGENT SECURITY4

Agents are software entities that often run in a distributed computing environment and interact with many other software entities, including other agents. When software runs in a distributed environment, security issues are numerous. The possibility of encountering security problems increases in open environments, such as the Internet or a virtual private network, or in any environment where all the entities are not known, understood, and administered by a single group.

Various types of security risks are described in the Types of Security Risks section. Many of these risks are inherent to distributed computing environments, particularly when software passes messages that can be intercepted, modified, or destroyed. While this is a threat to agent systems, it is also a threat to any software system that depends on messages being passed reliably. Another risk centers on whether or not the software can assume that it is using trustworthy services.

Generally, the word security refers to the actions taken to ensure that something is secure. An item that is secure is free from danger: it cannot be taken, lost, or damaged. In practice, security is usually applied only to somewhat valuable items, because implementing security has associated costs. This is true both in the everyday world, where we protect our cars or homes from theft (but not a disposable pen), and in the world of computing, where we may protect some, but not all, company resources.

Security policy refers to how access to valuable resources is controlled. For example, a company may have a policy about which groups can access which data, or when certain types of processing jobs can run, or whether outside entities can connect to the company network. Agent systems will also require security policies, which may control where agents can run, what they can do, and with what other entities they can communicate.

Security policies are usually based on identity, which is simply something that serves to identify or refer to an entity. In this way, an agent could be referred to by its name, a role that it is playing, or the fact that it is a member of some organization, and so on. An agent, then, can have multiple forms of identity. For example, a particular agent could simultaneously be a purchasing agent working on behalf of user Rolf Smith; be playing the role of a bidder in a negotiation with E-Widgets; have its software composed of elements from company Exdeus; and have the serial number 98734501. Each of these identities might be important in different interactions.

Identity is based on a credential, which is a set of data that can be validated by a third party to prove that an entity is what it says it is. For example, when a user logs into a computer system, he often enters both a username and a password; the password is the credential that is validated to confirm that he really is that username. Other common mechanisms for identity and credentials are X.509 certificates and PGP keys.

TYPES OF SECURITY RISKS
Here are several security threats that can arise in multiagent systems:

  • Unauthorized disclosure—a breach in the confidentiality of an agent's private data or metadata. For example, an entity eavesdrops on the interaction between agents and extracts information on the goals, plans, capabilities, or other information that belongs to the agents. Or, an entity can probe the running agent and extract useful information.
  • Unauthorized alteration—the unauthorized modification or corruption of an agent, its state, or data. For example, the content of messages is modified during transmission, or the agent's internal value of the last bid is modified.
  • Damage—destruction or subversion of a host's files, configuration, or hardware, or of an agent or its mission.
  • Unauthorized copy and replay—an attempt to copy an agent or a message and clone or retransmit it. For example, a malicious platform creates an illegal copy or a clone of an agent, or a message from an agent is illegally copied and retransmitted.
  • Denial of service—an attack that attempts to deny resources to the platform or an agent. For example, one agent floods another agent with requests and the second agent is unable to provide its services to other agents.
  • Repudiation—an agent or agent platform denies that it has received or sent a message or taken a specific action. For example, a commitment between two agents as the result of a contract negotiation is later ignored by one of the agents. That agent denies the negotiation has ever taken place and refuses to honor its part of the commitment.
  • Spoofing and masquerading—an unauthorized agent or agent platform claims the identity of another agent or agent platform. For example, an agent registers as a directory service agent and therefore receives information from other registering agents.

MESSAGE PASSING
In systems where agents pass messages, the importance of avoiding message alteration or disclosure was described in the previous list. If a message is altered, it might provide incorrect information or transmit a dangerous action. If a message can be read by or disclosed to other entities, those entities may use the acquired data inappropriately.

Message alteration is usually avoided by providing a mechanism for authenticating the message. Most of the techniques for doing this are based on public/private key pair technologies, such as X.509 certificates. Additional information is sent with the message that allows the receiver to validate that the message has not been changed. Message disclosure is avoided by encrypting the message, which again is based mostly on public/private key pair technologies.

For both threats, the authentication or encryption can occur either by encrypting the message itself or by sending it through a transport that provides authentication or encryption services.
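
A minimal sketch of the authentication side, assuming the third-party cryptography package is available, is shown below. The sender signs the message with its private key, and the receiver verifies the signature with the corresponding public key; any change to the message or signature causes verification to fail. In practice an X.509 certificate would bind the public key to the sender's identity; that step is omitted here.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()   # held by the sending agent
    public_key = private_key.public_key()        # known to the receiving agent

    message = b'(ask :sender Joe :content "(PRICE IBM ?price)")'
    signature = private_key.sign(message)        # sent along with the message

    def is_untampered(msg, sig):
        try:
            public_key.verify(sig, msg)          # raises if msg or sig changed
            return True
        except InvalidSignature:
            return False

    print(is_untampered(message, signature))                  # True
    print(is_untampered(message + b" tampered", signature))   # False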

Other threats related to message passing include copy and replay, spoofing and masquerading, and repudiation. Both in copy and replay, and in spoofing and masquerading, an agent may assume the identity of another agent. Using this false identity, it can communicate with another agent to request an inappropriate action. Many agent systems use relatively simplistic naming schemes (or identities) with no additional credentials. Therefore, a message claiming to be from "Joe" cannot be validated.

This set of problems can be solved in various ways. Tagging messages with credentials allows the receiver to authenticate them, so a message cannot be tampered with by a third party without detection. Tagging messages with credentials also helps avoid repudiation: if a message is signed using a credential, the signing agent cannot later deny that it sent the message.

SYSTEM COMPONENTS DEALING WITH ONE ANOTHER
Agents can use agent platforms to provide services. They can also interact with well-known services such as a directory service that helps them locate other agents or an ontology service that helps them look up ontologies. When two system components interact, several risks can occur—the two most likely being damage, or spoofing and masquerading.

In the damage scenario, the agent may do malicious or inappropriate things to the host system, such as corrupt or delete files. Therefore, the agent platform may want to control which agents can take which actions. Typically, the agent would offer a credential that identifies it to the agent platform. After validating the credential, the agent platform would use a security policy, based on the agent's identity, to determine which actions the agent could take, and would enforce that policy. This is very much like the access control lists found in most operating systems. However, agent systems probably want to control much more than simply reading, writing, and running files. They might want to control message sending, utilization of various resources, when and where an agent can move, and whether a moving agent can run on the platform.
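
A sketch of such a policy check follows. The identities, action names, and policy table are invented; a real platform would derive them from validated credentials and a richer policy language.

    # Platform-side security policy keyed on a validated identity.
    # Identities, actions, and the policy table are invented for illustration.
    POLICY = {
        "purchasing-agent/Rolf Smith": {"send_message", "read_catalog"},
        "visiting-mobile-agent":       {"send_message"},   # may not write or move
    }

    def authorize(identity, action):
        """Return True only if the policy grants this identity the action."""
        return action in POLICY.get(identity, set())

    if not authorize("visiting-mobile-agent", "write_file"):
        print("request denied by platform security policy")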

Just as the agent platform may want to validate what entity it is dealing with, an agent may want to validate that it is dealing with an agent platform it knows to be genuine. Agent platforms and services could pretend to be "legitimate" but in fact have some dangerous behavior, such as recording message transmissions before encryption, cloning copies of the agent for its own purposes, or providing false information.

OTHER RISKS TO WHICH AGENTS CAN BE EXPOSED
In constructing software for an agent, certain types of risks must be addressed—ensuring that the following things cannot occur:

  • viewing the private security key of the agent,
  • viewing the private data of the agent (e.g., the highest bid an agent is willing to make on a product),
  • invoking private methods in the agent, and
  • designing public methods in such a way that permits security risks.

MORE SECURITY CONSIDERATIONS
When designing agent systems, the following aspects of security, security policy, and identity should be considered:

  • Agents and agent platforms can have multiple credentials. Multiple credentials reflect the reality that we have multiple roles. Users may have credentials as part of several organizations, as an individual, as the owner of multiple credit cards, and so on.
  • Agents can have their own credentials. They may also have credentials for the user that they represent in an e-commerce application.
  • It should be possible to create agents that act anonymously. For example, a user may want to get data about drug or alcohol treatment without revealing his or her identity. Obviously, sites can choose to reject such agents if their security policies do not allow interaction with anonymous entities.
  • All aspects of security need to be managed.
  • Traceability of actions can be useful.
  • Using a lease model on any credential can be helpful. In a lease model, credentials expire after a certain period of time but can often be renewed from a credential authority. This control can be a very effective way to clean up credentials in a system that uses relatively short-lived agents. Requiring long-lived agents to renew their credentials is also useful, because when an entity with bad credentials is forced to renew, it will be rejected and shut down. A minimal sketch of a leased credential follows this list.
  • Identity and credentials are also useful for building reputation services. Such services provide a way of determining whether a particular entity has behaved responsibly.
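
Here is that sketch of the lease model, under simple assumptions. The class and method names are invented, and the credential authority is reduced to a single callback that either approves or refuses a renewal.

    import time

    class LeasedCredential:
        """Credential that expires and must be renewed by an authority."""
        def __init__(self, holder, lease_seconds):
            self.holder = holder
            self.lease_seconds = lease_seconds
            self.expires_at = time.time() + lease_seconds

        def is_valid(self):
            return time.time() < self.expires_at

        def renew(self, authority_approves):
            # A real authority would re-examine the holder before extending.
            if not authority_approves(self.holder):
                raise PermissionError(f"renewal refused for {self.holder}")
            self.expires_at = time.time() + self.lease_seconds

    credential = LeasedCredential("purchasing-agent-98734501", lease_seconds=3600)
    print(credential.is_valid())            # True while the lease is current
    credential.renew(lambda holder: True)   # stand-in for the authority's check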

References

  1. Odell, J. "Agents (Part 1): Complex Systems," Cutter Consortium, Executive Report, 3(4), 2000.
  2. Foundation for Intelligent Physical Agents. FIPA98 Agent Management Specification, Geneva, Switzerland, October 1998.
  3. Brenner, W., R. Zarnekow, and H. Wittig. Intelligent Software Agents: Foundations and Applications, Springer–Verlag, Berlin, 1998.
  4. Stout, K. Adapted from her contribution to the Agent Technology Green Paper, OMG Agents Working Group, 2000.