In a multiagent system (MAS), agents can have different opinions about a given problem. In order to solve the problem collectively they have to reach consensus about the ontology of the problem. A solution to probabilistic reasoning in such an environment, using a social network of trust, is given. It is shown that frame logic can be annotated and amalgamated by using this approach, which gives a foundation for collective ontology development in MAS.

Consider the following problem: a set of agents in a multiagent system (MAS) model a certain domain in order to collectively solve a problem. Their opinions about the domain differ in various ways. The agents are connected into a social network defined by trust relations. The problem to be solved is how to obtain consensus about the domain.

To formalize the problem let

By modeling a domain of interest (using a formalism like ontologies, knowledge bases, or other models) a person expresses his or her knowledge about it. Thus the main concept of interest in modeling any domain is knowledge. Nonaka and Takeuchi once defined knowledge as a “justified true belief” [

The previously outlined definition of knowledge includes, intentionally or not, two more crucial concepts:

In an environment where a community of agents collaborates in modeling a domain, there is a chance that there will be disagreements about the domain, which can yield certain inconsistencies in the model. A good example of such disagreements is the so-called “editor wars” on Wikipedia, the popular free online encyclopedia. A belief about the war in the former Yugoslavia will likely differ between a Croat and a Serb, but they will probably share the same beliefs about fundamental mathematical algebra.

Following this perspective, our conceptualization of statements as units of formalized knowledge will treat the probability of a statement being true as a matter of justification. An agent is justified if other members of a social system believe his statements. Herein we would like to outline a social network metric introduced by Bonacich [

In order to express knowledge about a certain domain, one needs an adequate language. Herein we will use frame logic (F-logic), introduced by [

The syntax of F-logic is defined as follows [

The alphabet

a set of object constructors,

an infinite set of variables,

auxiliary symbols, such as,

usual logical connectives and quantifiers,

Object constructors (the elements of

A language in F-logic consists of a set of formulae constructed out of alphabet symbols. The simplest formulae in F-logic are called F-molecules.

A molecule in F-logic is one of the following statements:

an is-a assertion of the form

an object molecule of the form O[a “;”-separated list of method expressions], where

Non-inheritable data expressions can take either of the following two forms.

A non-inheritable scalar expression

A non-inheritable set-valued expression

Inheritable scalar and set-valued expressions are equivalent to their non-inheritable counterparts except that

Signature expressions can also take two different forms.

A scalar signature expression

A set-valued signature expression

All methods’ left-hand sides (e.g.,
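To make these forms concrete, here are a few example F-molecules (the object, class, and method names are ours, written in the usual F-logic notation):

```
john : musician.                   // is-a assertion: john is a member of class musician
musician :: person.                // subclass assertion
john[name -> "John"].              // non-inheritable scalar data expression
john[friends ->> {paul, george}].  // non-inheritable set-valued data expression
person[name => string].            // scalar signature expression
person[friends =>> person].        // set-valued signature expression
```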

As in many other logics, F-formulae are built out of simpler ones by using the usual logical connectives and quantifiers mentioned above.

A formula in F-logic is defined recursively:

F-molecules are F-formulae;

F-logic further allows us to define logic programs. One popular class of logic programs is Horn programs.

A Horn F-program consists of Horn rules, which are statements of the form

Whereby
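For illustration (attribute names are ours), a Horn rule in F-logic that derives grandparents from parents would be written:

```
X[grandparents ->> Z] :- X[parents ->> Y], Y[parents ->> Z].
```

The head holds whenever the body molecules are satisfiable, with the variables X, Y, and Z implicitly universally quantified, as is the usual convention for Horn rules.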

For our purpose these definitions of F-logic are sufficient, but the interested reader is advised to consult [

Graph theory offers a formal approach to defining social networks [

A

A graph can be represented with the so-called adjacency matrix.

Let

Matrix
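As a minimal sketch (node names and edges are ours, not from the text), the adjacency matrix of a small directed graph can be built as follows:

```python
# Build the adjacency matrix A of a directed graph: A[i][j] = 1 iff
# there is an edge from node i to node j. Nodes and edges are illustrative.
nodes = ["a", "b", "c"]
edges = {("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")}

index = {name: i for i, name in enumerate(nodes)}
A = [[0] * len(nodes) for _ in nodes]
for u, v in edges:
    A[index[u]][index[v]] = 1

for row in A:
    print(row)
```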

The notion of directed and valued-directed graphs is of special importance to our study.

A

A

A social network can be represented as a graph

One of the main applications of graph theory to social network analysis is the identification of the “most important” actors inside a social network. There are many different methods and algorithms for calculating the importance, prominence, degree, closeness, betweenness, information, differential status, or rank of an actor. As previously mentioned, we will use eigenvector centrality to annotate agents’ statements.

Let

PageRank is a variant of the eigenvector centrality measure, which we decided to use herein. It was developed at Google, more precisely by Larry Page (from whose name the wordplay “PageRank” derives) and Sergey Brin, who used this graph-analysis algorithm to rank web pages in a web search engine. The algorithm uses not only the content of a web page but also its incoming and outgoing links. Incoming links are hyperlinks from other web pages pointing to the page under consideration, and outgoing links are hyperlinks from the page under consideration to other pages.

PageRank is iterative: it starts with a random page and follows its outgoing hyperlinks. It can be understood as a Markov process in which states are web pages and transitions (all of equal probability) are the hyperlinks between them. The problem of pages which do not have any outgoing links, as well as the problem of loops, is solved through a jump to a random page. To ensure fairness (because of a huge base of possible pages), a transition to a random page is added to every page; this transition has the probability

A very convenient feature of PageRank is that the sum of all ranks is 1. Thus, semantically, we can interpret the ranking value of agents (or actors in the social network) participating in a given MAS as
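A compact power-iteration sketch of PageRank (the damping factor d = 0.85 is the commonly cited default; the three-page graph below is a toy example of ours):

```python
def pagerank(nodes, edges, d=0.85, iterations=100):
    """Power-iteration PageRank; `edges` is a list of (source, target) pairs."""
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    out = {v: [w for (u, w) in edges if u == v] for v in nodes}
    for _ in range(iterations):
        new = {v: (1 - d) / n for v in nodes}  # random-jump contribution
        for v in nodes:
            targets = out[v] or nodes          # dangling pages jump anywhere
            for w in targets:
                new[w] += d * rank[v] / len(targets)
        rank = new
    return rank

ranks = pagerank(["a", "b", "c"], [("a", "b"), ("b", "c"), ("c", "a")])
print(sum(ranks.values()))  # the ranks always sum to 1
```

On this symmetric three-page cycle each page ends up with rank 1/3; the sum-to-one property is what lets the text read ranks as probabilities.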

As shown in Section

can be rewritten as corresponding atomic F-molecules

In the following we will consider all F-molecule statements to be atomic. Now we are able to define the annotation scheme of agent statements as follows.

Let

An extension to such a probability annotation is the situation when statements can have a negative valency. This happens when a particular agent disagrees with a statement of another agent. Such an annotation would be defined as follows.

Let

Such a definition is needed in order to avoid a possible negative probability (the case when disagreement is greater than approval).
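Although the formal definition is abbreviated above, the clipping it describes can be written as follows (our notation, where r(a) is the rank of agent a):

```latex
p(s) = \max\left(0,\ \sum_{a \in \mathrm{agree}(s)} r(a) \;-\; \sum_{a \in \mathrm{disagree}(s)} r(a)\right)
```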

In a concrete system we need to provide a mechanism for query execution that will allow agents to issue queries of the following form:

The solution of this problem is equivalent to finding the probabilities of all possible solutions of query

Let

The probability of a solution

If

If

If
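One standard way to combine the probabilities of independent statements, consistent with the product and sum arguments used in the proofs below (a reconstruction on our part, not the paper's exact formulas), is:

```latex
p(A \wedge B) = p(A)\,p(B), \qquad p(A \vee B) = p(A) + p(B) - p(A)\,p(B)
```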

The implications of these three definitions are given in the following four theorems.

If

Since

and due to Rule

If

Since the given F-molecule can be written as

If

Since any class hierarchy can be represented as a directed graph, it is obvious that there has to be at least one path from

For the statement

Since there may be multiple paths which are alternative possibilities for proving the same premise, it holds that

Thus from Rule

which is what we wanted to prove.

If

Since the statement

A special case of query execution is when the social ontology contains Horn rules. Such rules are also subject to probability annotation. Thus we have

The query execution scheme has to be altered as well. Instead of finding only the solutions from formula

In order to calculate the probability of a result obtained by using some probability annotated rule we establish the following definition.

Let

This definition is intuitive since, in order to obtain result
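A natural reading of this definition (our reconstruction, assuming independence of the rule and its premises) is that the result's probability is the product of the rule's annotation and the probabilities of its premises:

```latex
p(\mathit{result}) = p(\mathit{rule}) \cdot \prod_{i} p(\mathit{premise}_i)
```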

In order to demonstrate the approach we will take the following (imaginary) example of an MAS (all images, names, and motifs are taken from the 1968 movie “Yellow Submarine” produced by United Artists (UA) and King Features Syndicate). Presume we have a problem domain entitled “Pepperland” with objects entitled “Music” and “Purpose of life.” Let us further presume that we have six agents collaborating on this problem, namely, “John,” “Paul,” “Ringo,” “George,” “Max,” and “Glove.”

Another intelligent agent, “Jeremy Hilary Boob Ph.D. (Nowhere man),” tries to reason about the domain, but as it turns out, the domain is inconsistent. Table

Viewpoints of “Pepperland” agents.

| Agent | Music | Purpose of life |
|---|---|---|
| John | : harmonious sounds | Main purpose |
| Paul | : harmonious sounds | Main purpose |
| Ringo | : harmonious sounds | Main purpose |
| George | Disagrees with (: evil noise) | Main purpose |
| Max | : evil noise | Disagrees with (main purpose) |
| Glove | : evil noise | Main purpose |

Due to the disagreement on different issues a normal query would yield at least questionable results. For instance, if the disagreement statements are ignored, the domain would be represented in frame logic syntax by a set of sentences similar to the following:

Thus a query asking for the class to which the object entitled “Music” belongs

Nowhere man thinks hard and comes up with a solution. The agents form a social network of trust, shown in Figure

Social network of “Pepperland.”

The figure reads as follows: Ringo trusts Paul and John, Paul trusts John, John trusts George, George trusts John, Max trusts Glove, and Glove does not trust anyone. Using the previously described PageRank algorithm Nowhere man was able to order the agents by their respective rank (Table

Trust ranking of the “Pepperland” agents.

| Agent | Ranking |
|---|---|
| John | 0.303391 |
| Glove | 0.289855 |
| George | 0.267724 |
| Paul | 0.060667 |
| Max | 0.043478 |
| Ringo | 0.034884 |

Now, Nowhere man uses these rankings to annotate the statements given by the agents:

As we can see the probability that object
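The arithmetic behind these annotations can be reproduced directly from Table 2, using the clipped sum-of-ranks scheme described earlier (the grouping of agents follows Table 1):

```python
# PageRank-based trust ranks from Table 2.
rank = {"John": 0.303391, "Glove": 0.289855, "George": 0.267724,
        "Paul": 0.060667, "Max": 0.043478, "Ringo": 0.034884}

def annotate(agree, disagree=()):
    """Probability of a statement: approving ranks minus disagreeing ranks,
    clipped at zero to avoid negative probabilities."""
    return max(0.0, sum(rank[a] for a in agree) - sum(rank[a] for a in disagree))

# "Music : harmonious sounds" is asserted by John, Paul, and Ringo.
p_harmonious = annotate(["John", "Paul", "Ringo"])
# "Music : evil noise" is asserted by Max and Glove; George disagrees.
p_evil = annotate(["Max", "Glove"], disagree=["George"])

print(round(p_harmonious, 6))  # 0.398942
print(round(p_evil, 6))        # 0.065609
```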

From this probability calculation Nowhere man is able to conclude that the formula

He can now conclude that

Nowhere man continues reasoning and calculates the probabilities for the other queries

From these calculations Nowhere man concludes that

Now we can complicate things a bit to see the other parts of the approach in action. Assume now that John has expressed a statement that relates the object entitled “Music” to the object entitled “Purpose of life” and named the attribute “has to do with.” We would now have the following social ontology:

Now suppose that Nowhere man wants to issue the following query:

The solutions using “normal” frame logic are

To calculate the probabilities Nowhere man uses the following procedure. The variables in the query are replaced with the actual values for a given solution:

Now according to rule 1 the conjunction becomes

The second parts of the equations were already calculated, and according to Theorem

We already know the probabilities of the is-a statement, and since

To provide a mechanism for agents to query multiple annotated social ontologies we decided to use the principles of amalgamation. The model of knowledge base amalgamation based on online querying of the underlying sources is described in [

Since the local annotations of different ontologies that are subject to amalgamation do not necessarily hold for the global ontology, we need to introduce a mechanism to integrate the ontologies in a coherent way which will yield global annotations. Since the set of ontologies is a product of a set of respective social agent networks surrounding them, we decided to firstly integrate the social networks in order to provide the necessary foundation for global annotation.

The integration of

In particular

Let

What remains is to provide the annotation that is at the same time the amalgamation scheme.

Let

To demonstrate the amalgamation approach proposed here let us again assume that our intelligent agent “Jeremy Hilary Boob Ph.D. (Nowhere man)” tries to reason about the “Pepperland” domain, but this time he wants to draw conclusions from the domain “Yellow submarine” as well. The “Yellow submarine” domain is being modeled by “Ringo,” “John,” “Paul,” “George,” and “Young Fred” which form the social network shown in Figure

Social network of “Yellow submarine.”

Since Nowhere man wants to reason about both domains he needs to find a way to amalgamate these two domains.

Again he thinks hard and comes up with the following solution. All he needs to do is integrate the two social networks, recalculate the ranks of all agents in the newly established network, and re-annotate the meta-information in both domains.

Since the networks of “Pepperland” and “Yellow submarine” can be represented as the following sets of tuples:
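The integration itself is just a union of the two edge sets. The “Pepperland” edges below follow the description in the text; the “Yellow submarine” edges appear only in the paper's figure, so the ones used here are placeholders:

```python
# "Pepperland" trust relations as described in the text (truster, trusted).
pepperland = {("Ringo", "Paul"), ("Ringo", "John"), ("Paul", "John"),
              ("John", "George"), ("George", "John"), ("Max", "Glove")}

# Hypothetical "Yellow submarine" relations; the real ones are only
# shown graphically in the paper, so these are stand-ins.
yellow_submarine = {("Young Fred", "Ringo"), ("Paul", "George")}

# The integrated network: the union of both trust-edge sets,
# over the union of both agent sets.
integrated = pepperland | yellow_submarine
agents = {a for edge in integrated for a in edge}

print(sorted(agents))
```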

The newly established integrated social network is shown in Figure

The integration of two social networks.

Now Nowhere man calculates the ranks of this new network and uses the previously described procedure to annotate the meta-information (Section

As we could see from the previous examples, in order to gain accurate knowledge and accurate probabilities about a certain domain, we had to introduce an all-knowing agent (Nowhere man). This agent had to be aware of all knowledge of each agent and all trust relations they engage in. Such a scenario is not feasible for large-scale MAS (LSMAS). Thus we need to provide a mechanism to let agents reason in a distributed manner and still get accurate enough results.

This problem consists of two parts; namely, an agent needs (1) to acquire an accurate approximation of the ranks of each agent in its network and (2) to acquire knowledge about the knowledge of other agents. The first part deals with annotation and the second with amalgamation of the ontology.

A solution to the first problem might be to calculate ranks in a distributed manner, as has been shown in [

The second problem can be addressed by the proposed amalgamation algorithm. Each agent can ask the agents it trusts about their knowledge and then amalgamate their ontologies with its own. In this way the agent continuously acquires better knowledge about its local environment. We could easily have considered Nowhere man in the last example to be following the procedure just described: asking one agent after another about their knowledge.

In order to provide a practical example, consider a network of store-and-forward e-mail routing agents in which spam bots try to send unsolicited messages. Some routers (agents) might be under the control of spam bots and send out messages which might be malicious to users and other routers. The domain these agents reason about is the domain of spam messages: for example, which message, from which user, forwarded by which router, and with what kind of content is spam and should be discarded.

This scenario can be modeled by using the previously described approach: agents form trust relations and mutually exchange new rules about spam filtering. An agent will amalgamate rules (ontologies) of other agents with its own but will decide about a message (using an adequate query) based not only on the given rules but also on the probability annotation given by the network of trust.
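A minimal sketch of such a decision procedure (the threshold, rule encoding, and router names are our assumptions, not from the paper):

```python
SPAM_THRESHOLD = 0.5  # assumed cut-off, not specified in the paper

def is_spam(message, rules, rank):
    """Each rule is (predicate, agents asserting it, agents disagreeing).
    A message is spam if its best matching rule's annotated probability
    (approving trust minus disagreeing trust, clipped at zero) is high enough."""
    p = 0.0
    for predicate, agree, disagree in rules:
        if predicate(message):
            vote = sum(rank[a] for a in agree) - sum(rank[a] for a in disagree)
            p = max(p, max(0.0, vote))
    return p >= SPAM_THRESHOLD

# Trust ranks of three routers (illustrative; they sum to 1).
rank = {"router1": 0.6, "router2": 0.3, "router3": 0.1}
# One amalgamated rule: router1 and router2 flag lottery messages as spam.
rules = [(lambda m: "lottery" in m, {"router1", "router2"}, set())]

print(is_spam("you won the lottery", rules, rank))  # True
print(is_spam("meeting at noon", rules, rank))      # False
```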

Alternative approaches to measuring trust, in the form of reputation inference and the SUNNY algorithm, are presented in [

A different approach to a similar problem related to trust management in the Semantic Web is presented in [

When agents have to solve a problem collectively, they have to reach consensus about the domain since their opinions can differ. Especially when agents are self-interested, their goals in a given situation can vary considerably. Herein an approach to reaching this consensus based on a network of trust between agents has been presented, which is a generalization of the work done in [

Still, there are open questions: how does the approach scale in fully decentralized environments like LSMAS? What are the implications of self-interest: could agents develop strategies to “lie” on purpose in order to attain their goals? These and similar questions are the subject of our future research.