Probabilistic Multiagent Reasoning over Annotated Amalgamated F-Logic Ontologies

In a multiagent system (MAS), agents can have different opinions about a given problem. In order to solve the problem collectively they have to reach consensus about the ontology of the problem. A solution to probabilistic reasoning in such an environment by using a social network of trust is given. It is shown that frame logic can be annotated and amalgamated by using this approach, which gives a foundation for collective ontology development in MAS. Consider the following problem: a set of agents in a multiagent system (MAS) model a certain domain in order to collectively solve a problem. Their opinions about the domain differ in various ways. The agents are connected into a social network defined by trust relations. The problem to be solved is how to obtain consensus about the domain.


Introduction
To formalize the problem let A = {a_1, ..., a_n} be a set of agents, let T be a trust relation defined over A × A, and let D = {d_1, ..., d_m} be a problem domain consisting of a set of objects. Let further S be the set of all possible statements about D, and let E be a relation over A × S that records which agent expressed which statement. We will denote by O the social ontology expressed by the agents. What is the probability that a certain statement from the expressed statements in O is true?
By modeling some domain of interest (using a formalism like ontologies, knowledge bases, or other models) a person expresses his/her knowledge about it. Thus the main concept of interest in modeling any domain is knowledge. Nonaka and Takeuchi once defined knowledge as a "justified true belief" [1], whereby this definition is usually credited to Plato. This means that the modeling person implicitly presumes that the expressed statements in his/her model are true. On the other hand, if one asks the important question what is the truth?, we arrive at one of the fundamental philosophical questions. Nietzsche once argued in [2] that a person is unable to prove the truth of a statement, which is nothing more than the invention of fixed conventions for merely practical purposes, like repose, security, and/or consistence. According to this view, no one can prove that this paper is not just a fantasy of the reader reading it.
The previously outlined definition of knowledge includes, intentionally or not, two more crucial concepts: justified and belief. An individual will consider to be true that which he believes in, and, from that perspective, the overall truth will be the set of statements that the community believes in. This mutual belief makes this set of statements justified. The truth was once that the Earth was the center of the universe, until philosophers and scientists started to question that theory. The Earth was also once a flat surface residing on the back of an elephant. So an interesting fact about the truth, from this perspective, is that it evolves depending on the different beliefs of a certain community.
In an environment where a community of agents collaborates in modeling a domain there is a chance that there will be disagreements about the domain, which can yield certain inconsistencies in the model. A good example of such disagreements is the so-called "editor wars" on Wikipedia, the popular free online encyclopedia. A belief about the war in ex-Yugoslavia will likely differ between a Croat and a Serb, but they will probably share the same beliefs about fundamental mathematical algebra.
Following this perspective, our conceptualization of statements as units of formalized knowledge will consider the probability of giving a true statement a matter of justification. An agent is justified if other members of a social system believe in his statements. Herein we would like to outline a social network metric introduced by Bonacich [3] called eigenvector centrality, which calculates the centrality of a node based on the centralities of its adjacent nodes. Eigenvector centrality assigns relative values to all nodes of a social network based on the principle that connections to nodes with high values contribute more to the value of the node in question than equal connections to nodes with low values. In a way, if we interpret the network under consideration as a network of trust, it yields an approximation of the probability that a certain agent will say the truth in a statement, as perceived by the other agents of the network. The use of eigenvector centrality here is arbitrary; any other metric with the described properties could be used as well.
In order to express knowledge about a certain domain, one needs an adequate language. Herein we will use frame logic or F-logic, introduced by [4], which is an object-oriented, deductive knowledge base and ontology language. The use of F-logic here is arbitrary, and any other formal (or informal) language could be used that allows expressing an ontology of a given domain. Nevertheless, F-logic allows us to reason about concepts (classes of objects), objects (instances of classes), attributes (properties of objects), and methods (behavior of objects) by defining rules over the domain, which makes it much more user friendly than other approaches.

Introducing Frame Logic
The syntax of F-logic is defined as follows [4].
Object constructors (the elements of F) play the role of function symbols in F-logic, whereby each function symbol has an arity. The arity is a nonnegative integer that represents the number of arguments the symbol can take. A constant is a symbol with arity 0, and symbols with arity ≥ 1 are used to construct larger terms out of simpler ones. An id term is a usual first-order term composed of function symbols and variables, as in predicate calculus. The set of all variable-free or ground id terms is denoted by U(F) and is commonly known as the Herbrand universe. Id terms play the role of logical object identities in F-logic, which is a logical abstraction of physical object identities.
A language in F-logic consists of a set of formulae constructed out of alphabet symbols. The simplest formulae in F-logic are called F-molecules.

Definition 2. A molecule in F-logic is one of the following statements: (i) an is-a assertion of the form C :: D (C is a nonstrict subclass of D) or of the form O : C (O is a member of class C), where C, D, and O are id terms; (ii) an object molecule of the form O[a ";"-separated list of method expressions], where O is an id term that denotes an object. A method expression can be either a noninheritable data expression, an inheritable data expression, or a signature expression.
(a) Noninheritable data expressions can be in either of the following two forms: a scalar expression O[M → V] or a set-valued expression O[M →→ {V_1, ..., V_k}].
(b) Inheritable scalar and set-valued expressions are equivalent to their noninheritable counterparts except that → is replaced with •→ and →→ with •→→. (c) Signature expressions can also take two different forms, a scalar signature expression (⇒) and a set-valued signature expression (⇒⇒).
As in many other logics, F-formulae are built out of simpler ones by using the usual logical connectives and quantifiers mentioned above.

Definition 3. A formula in F-logic is defined recursively: (i) F-molecules are F-formulae; (ii) φ ∨ ψ, φ ∧ ψ, and ¬φ are F-formulae if so are φ and ψ; (iii) ∀X φ and ∃Y ψ are F-formulae if so are φ and ψ, and X and Y are variables.
F-logic further allows us to define logic programs. One popular class of logic programs is Horn programs, which consist of rules of the form h ← b, whereby h is an F-molecule and b is a conjunction of F-molecules. Since such a statement is a clause, we consider all variables to be implicitly universally quantified.
For our purposes these definitions of F-logic are sufficient, but the interested reader is advised to consult [4] for the profound logical foundations of object-oriented and frame-based languages.

Introducing Social Network Analysis
A formal approach to defining social networks is graph theory [5]. A graph can be represented with the so-called adjacency matrix.

Definition 6. Let G be a graph defined with the set of nodes {n_1, n_2, ..., n_k} and edges {e_1, e_2, ..., e_l}. For every i, j (1 ≤ i ≤ k and 1 ≤ j ≤ k) one defines

a_ij = 1, if there is an edge between nodes n_i and n_j; a_ij = 0, otherwise.
Matrix A = [a_ij] is then the adjacency matrix of graph G. The matrix A is symmetric since if there is an edge between nodes n_i and n_j, then clearly there is also an edge between n_j and n_i.
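Definition 6 can be illustrated with a short Python sketch (the function name and the list-of-lists encoding are assumptions for illustration, not part of the paper):

```python
def adjacency(nodes, edges):
    """Adjacency matrix of an undirected graph: a_ij = 1 iff an edge
    connects nodes i and j (checked in either orientation), else 0."""
    return [[1 if (u, v) in edges or (v, u) in edges else 0 for v in nodes]
            for u in nodes]
```

Because every edge is checked in both orientations, the resulting matrix is symmetric, as noted above.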
The notion of directed and valued-directed graphs is of special importance to our study. A social network can be represented as a graph G = (N, A) where N denotes the set of actors and A denotes the set of relations between them [6]. If the relations are directed (e.g., support, influence, message sending, trust, etc.), we can conceptualize a social network as a directed graph. If the relations additionally can be measured in a numerical way, social networks can be represented as valued digraphs.
One of the main applications of graph theory to social network analysis is the identification of the "most important" actors inside a social network. There are many different methods and algorithms that allow us to calculate the importance, prominence, degree, closeness, betweenness, information, differential status, or rank of an actor. As previously mentioned, we will use eigenvector centrality to annotate agents' statements.

Definition 9. Let v_i denote the value or weight of node n_i, and let A = [a_ij] be the adjacency matrix of the network. For node n_i let the centrality value be proportional to the sum of the values of all nodes which are connected to it. Hence

v_i = (1/λ) Σ_{j ∈ M(i)} v_j = (1/λ) Σ_{j=1}^{k} a_ij v_j,

where M(i) is the set of nodes that are connected to the i-th node, k is the total number of nodes, and λ is a constant. In vector notation this can be rewritten as λv = Av.

PageRank is a variant of the eigenvector centrality measure, which we decided to use herein. PageRank was developed at Google, more precisely by Larry Page (from where the wordplay PageRank comes) and Sergey Brin. They used this graph analysis algorithm for the ranking of web pages on a web search engine. The algorithm uses not only the content of a web page but also its incoming and outgoing links. Incoming links are hyperlinks from other web pages pointing to the page under consideration, and outgoing links are hyperlinks from the page under consideration to other pages.
PageRank is iterative and starts with a random page, following its outgoing hyperlinks. It can be understood as a Markov process in which states are web pages and transitions (which are all of equal probability) are the hyperlinks between them. The problem of pages which do not have any outgoing links, as well as the problem of loops, is solved through a jump to a random page. To ensure fairness (because of a huge base of possible pages), a transition to a random page is added to every page; it has probability d, which is in most cases 0.15. The equation used for rank calculation (which can be thought of as the probability that a random user will open this particular page) is as follows:

PR(p_i) = d/N + (1 − d) Σ_{p_j ∈ M(p_i)} PR(p_j) / L(p_j),

where p_1, p_2, ..., p_N are the nodes under consideration, M(p_i) is the set of nodes pointing to p_i, L(p_j) is the number of arcs which come from node p_j, and N is the number of all nodes [7, 8].
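The iteration just described can be sketched in a few lines of Python (a minimal illustration, not the paper's implementation; the function name and the dict-based graph encoding are assumptions, and d = 0.15 is the jump probability mentioned above):

```python
def pagerank(links, d=0.15, iterations=100):
    """links: dict mapping each node to the list of nodes it points to."""
    nodes = set(links)
    for targets in links.values():
        nodes.update(targets)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new = {v: d / n for v in nodes}              # random-jump share
        for v in nodes:
            targets = links.get(v, [])
            if targets:                              # follow outgoing links
                share = (1 - d) * rank[v] / len(targets)
                for t in targets:
                    new[t] += share
            else:                                    # dangling node: jump anywhere
                for t in nodes:
                    new[t] += (1 - d) * rank[v] / n
        rank = new
    return rank
```

Dangling nodes redistribute their rank uniformly, which keeps the ranks summing to 1; that normalization is the property exploited below when ranks are read as probabilities.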
A very convenient feature of PageRank is that the sum of all ranks is 1. Thus, semantically, we can interpret the ranking value of agents (or actors in the social network) participating in a given MAS as the probability that an agent will say the truth in the perception of the others. In the following we will use the ranking obtained through such an algorithm in this sense.

Probability Annotation
As shown in Section 2 there are basically three types of statements agents can make: (1) is-a relations, (2) object molecules, and (3) Horn rules. While is-a relations and Horn rules can be considered atomic, object molecules can be compound, since an object molecule of the form O[a_1 → v_1; ...; a_k → v_k] can be rewritten as the corresponding atomic F-molecules O[a_1 → v_1], ..., O[a_k → v_k]. We will consider in the following that all F-molecule statements are atomic. Now we are able to define the annotation scheme of agent statements as follows.
Definition 10. Let S = {s_1, s_2, ..., s_m} be a set of statements, let A = {a_1, a_2, ..., a_n} be a set of agents, let O ⊆ S × A be a corresponding social ontology, let T be a trust relation between agents over A × A, and let r : A → [0, 1] be a function that assigns ranks to agents based on T. Then the annotation ∧ of a statement s is defined as the sum of the ranks of the agents who expressed it:

∧(s) = Σ_{(s, a) ∈ O} r(a).

An extension to such a probability annotation is the situation when statements can have a negative valency. This happens when a particular agent disagrees with a statement of another agent. Such an annotation would be defined as follows.
Definition 11. Let S = {s_1, s_2, ..., s_m} be a set of signed statements, let A = {a_1, a_2, ..., a_n} be a set of agents, let O ⊆ S × A be a corresponding social ontology, let T be a trust relation between agents over A × A, and let r : A → [0, 1] be a function that assigns ranks to agents based on T. Then the annotation ∧ of a statement s is defined as

∧(s) = max(0, Σ_{(s, a) ∈ O⁺} r(a) − Σ_{(s, a) ∈ O⁻} r(a)),

where O⁺ and O⁻ denote agreement and disagreement, respectively. Such a definition is needed in order to avoid a possible negative probability (the case when disagreement is greater than approval).
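Definitions 10 and 11 can be sketched as follows (an illustrative Python rendering; the function and argument names are assumptions, and the clipping at zero implements the remark about avoiding negative probabilities):

```python
def annotate(agree, disagree, rank):
    """Annotation of a single signed statement.

    agree/disagree: sets of agents who expressed or rejected the statement;
    rank: dict mapping each agent to its trust rank in [0, 1].
    An agent appearing in both sets contributes zero overall."""
    p = sum(rank[a] for a in agree) - sum(rank[a] for a in disagree)
    return max(0.0, p)  # clip: disagreement may outweigh approval
```

For a plain (unsigned) statement as in Definition 10, `disagree` is simply empty.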

Query Execution
In a concrete system we need to provide a mechanism for query execution that will allow agents to issue queries of the following form:

q_p : φ ∧ p,

where φ is any formula in frame logic and p a probability. The semantics of the query is: does the formula φ hold with probability p with regard to the social ontology?
The solution of this problem is equivalent to finding the probabilities of all possible solutions of query q : φ.

Definition 12. Let S_q = {r_1, r_2, ..., r_k} be the set of solutions to query q; then S_qp is the subset of S_q consisting of those solutions from S_q whose probability is greater than or equal to p, and it represents the set of solutions to query q_p. The probability p(r_i) of a solution is obtained by a set of production rules. The implications of these three definitions are given in the following four theorems.
Proof (Theorem 13). Since r_i in this case can be written as O[a_1 → v_1] ∧ ⋅⋅⋅ ∧ O[a_k → v_k], due to Rule 3 the probabilities of the components of this conjunction are min(p(a_1), p(v_1)), ..., min(p(a_k), p(v_k)). Due to Rule 1 the probability of a conjunction is the product of the probabilities of its elements, which yields ∏_{i=1}^{k} min(p(a_i), p(v_i)).

Proof (Theorem 14). Since the given F-molecule can be written in the same conjunctive form, the proof is analogous to the proof of Theorem 13.
Proof. Since any class hierarchy can be represented as a directed graph, it is obvious that there has to be at least one path from c_1 to c_2. If the opposite were true, the statement would not hold and thus would not be in the initial solution set.
For the statement c_1 :: c_2 to hold, at least one path statement of the form c_1 :: x_1 ∧ x_1 :: x_2 ∧ ⋅⋅⋅ ∧ x_j :: c_2 has to hold as well. According to Rule 1, the probability of one path is therefore the product of the probabilities of its subclass statements. Since there may be multiple paths, which are alternative possibilities for proving the same premise, Rule 2 applies to their combination, from which we get what we wanted to prove.
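The path argument in the proof above can be sketched in Python (an illustration under the assumption that Rule 1 multiplies probabilities along a path and Rule 2 combines alternative paths disjunctively as p + q − p·q; all names are illustrative):

```python
def path_probabilities(edges, start, goal, p=1.0, seen=None):
    """Yield the product of edge probabilities along every acyclic path
    from start to goal. edges: dict node -> list of (next_node, prob)."""
    seen = (seen or set()) | {start}
    if start == goal:
        yield p
        return
    for nxt, q in edges.get(start, []):
        if nxt not in seen:
            yield from path_probabilities(edges, nxt, goal, p * q, seen)

def subclass_probability(edges, c1, c2):
    """Probability of c1 :: c2: combine all subclass paths disjunctively."""
    prob = 0.0
    for path_p in path_probabilities(edges, c1, c2):
        prob = prob + path_p - prob * path_p  # assumed Rule 2 combination
    return prob
```

For example, two disjoint paths of probability 0.25 each combine to 0.25 + 0.25 − 0.0625 = 0.4375.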

Theorem 16.
If r_i is a statement of classification of the form o : c, then p(r_i) = p(o : c_1) ⋅ p(c_1 :: c), where c_1 is the direct class of o.

Proof. Since the statement r_i can be written as o : c_1 ∧ c_1 :: c, the given probability is a consequence of Rule 1 and Theorem 15.
A special case of query execution is when the social ontology contains Horn rules. Such rules are also subject to probability annotation; thus we have a rule r : h ← b, where p_r is the annotated probability of the rule. In order to provide a mechanism to deal with such probability-annotated rules, we will establish an extended definition by using an additional counter predicate for each Horn rule. Thus, each rule is extended with a predicate c_r which will count the number of times the particular rule has been successfully executed for finding a given solution.
The query execution scheme has to be altered as well. Instead of finding only the solutions of formula φ, an additional variable for every rule in the social ontology is added to the formula. For k rules we would thus have

q' : φ ∧ c_1(?x_1) ∧ c_2(?x_2) ∧ ⋅⋅⋅ ∧ c_k(?x_k).

In order to calculate the probability of a result obtained by using some probability-annotated rule we establish the following definition.
Definition 17. Let R be a result obtained with probability p_R by query q from a social ontology, let p_r be the probability of rule r, and let t be the number of times rule r was executed during the derivation of result R. The final probability of R is then defined as

p(R) = p_R ⋅ p_r^t.

This definition is intuitive since for the obtainment of result R the rule r has to hold t times. Thus if a social ontology contains k rules (r_1, ..., r_k), their corresponding annotated probabilities are p_1, ..., p_k, and the numbers of executions during the derivation of result R are t_1, ..., t_k, then the final probability is defined as

p(R) = p_R ⋅ ∏_{i=1}^{k} p_i^{t_i}.
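Definition 17 amounts to discounting the result once per rule application; a minimal Python sketch (function and argument names are assumptions):

```python
def final_probability(p_result, rule_probs, counts):
    """p_result: probability of the result from the annotated statements;
    rule_probs: dict rule -> annotated rule probability;
    counts: dict rule -> times the rule fired during the derivation."""
    p = p_result
    for rule, times in counts.items():
        p *= rule_probs[rule] ** times  # the rule has to hold `times` times
    return p
```

For instance, a result of probability 0.5 derived via a 0.9-rule fired twice and a 0.8-rule fired once ends up with probability 0.5 · 0.81 · 0.8 = 0.324.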

Annotated Reasoning Example
In order to demonstrate the approach we will take the following (imaginary) example of an MAS (all images, names, and motives are taken from the 1968 movie "Yellow Submarine" produced by United Artists (UA) and King Features Syndicate). Presume we have a problem domain entitled "Pepperland" with objects entitled "Music" and "Purpose of life." Let us further presume that we have six agents collaborating on this problem, namely, "John," "Paul," "Ringo," "George," "Max," and "Glove." Another intelligent agent, "Jeremy Hilary Boob Ph.D. (Nowhere man)," tries to reason about the domain, but as it turns out, the domain is inconsistent. Table 1 shows the different viewpoints of the agents.
Due to the disagreement on different issues a normal query would yield at least questionable results. For instance, if the disagreement statements are ignored, the domain is represented in frame logic syntax with a corresponding set of sentences. Thus a query asking for the class to which the object entitled "Music" belongs,

?- o_Music : ?X,
would yield two valid answers, namely, "evil noise" and "harmonious sounds." Likewise, if querying for the value of the "main purpose" attribute of the object o_Purpose_of_life, for example,

?- o_Purpose_of_life[main purpose → ?X],

the valid answers would be "glove," "love," and "drums." But these answers do not reflect the actual state of the MAS, since one answer is more meaningful to it than the others. Nowhere man thinks hard and comes up with a solution. The agents form a social network of trust, shown in Figure 1.
The figure reads as follows: Ringo trusts Paul and John, Paul trusts John, John trusts George, George trusts John, Max trusts Glove, and Glove does not trust anyone. Using the previously described PageRank algorithm, Nowhere man was able to order the agents by their respective rank (Table 2). Now Nowhere man uses these rankings to annotate the statements given by the agents. As we can see, the probability that object o_Music is an "evil noise" is equal to the sum of the rankings of the agents who agree with this statement (Glove and Max) minus the sum of the rankings of the agents who disagree (George). Note that if an agent had expressed the same statement twice with the same attribute name, his ranking would be counted only once. Also note that, if an agent had both agreed and disagreed with a statement, his contribution would be zero, since he would appear on the agreeing and the disagreeing side. From this probability calculation Nowhere man is able to conclude that the formula o_Music : evil noise holds with probability 0.065609. Likewise he calculates the probability of o_Music : harmonious sounds, ∧(harmonious sounds) = Rank(John) + ⋅⋅⋅ = 0.398942. He can now conclude that o_Music : harmonious sounds holds more likely than o_Music : evil noise with regard to the social network of agents. From these calculations Nowhere man concludes that the final solutions to the query ?- o_Music : ?X are

?X = evil noise ∧ 0.065609,
?X = harmonious sounds ∧ 0.398942.

The second parts of the equations were already calculated, and according to Theorem 14 the first parts of the equations become the probabilities of the corresponding annotated F-molecules.

Amalgamation
To provide a mechanism for agents to query multiple annotated social ontologies we decided to use the principles of amalgamation. The model of knowledge base amalgamation based on online querying of underlying sources is described in [9]. The intention of amalgamation is to show whether a given solution holds in any of the underlying sources.
Since the local annotations of the different ontologies that are subject to amalgamation do not necessarily hold for the global ontology, we need to introduce a mechanism to integrate the ontologies in a coherent way which will yield global annotations. Since the set of ontologies is a product of a set of respective social agent networks surrounding them, we decided to first integrate the social networks in order to provide the necessary foundation for global annotation.
Definition 18. The integration of k social networks represented with the valued digraphs G_V1 = (N_1, A_1, V_1), ..., G_Vk = (N_k, A_k, V_k) is the valued digraph G_V = (N_1 ∪ ⋅⋅⋅ ∪ N_k, A_1 ∪ ⋅⋅⋅ ∪ A_k, V). In particular, V will be a social network analysis metric, in our case a variant of eigenvector centrality, computed on the integrated digraph. Now we can define the integration of ontologies as follows.
Definition 19. Let O_1, ..., O_k be sets of statements as defined above, representing particular social ontologies. Their integration is given as O = O_1 ∪ ⋅⋅⋅ ∪ O_k. What remains is to provide the annotation, which is at the same time the amalgamation scheme.
Definition 20. Let G_V = (N_1 ∪ ⋅⋅⋅ ∪ N_k, A_1 ∪ ⋅⋅⋅ ∪ A_k, V) be the integration of k social networks of agents, let O_1 ∪ ⋅⋅⋅ ∪ O_k be the integration of their corresponding social ontologies, let T be a trust relation between agents, and let r : A → [0, 1] be a function that assigns ranks to agents based on T; then the amalgamated annotation scheme ∧ of the metadata statements is defined as in Definitions 10 and 11, with the ranks r computed on the integrated network.

Amalgamated Annotated Reasoning Example
To demonstrate the amalgamation approach proposed here let us again assume that our intelligent agent "Jeremy Hilary Boob Ph.D. (Nowhere man)" tries to reason about the "Pepperland" domain, but this time he wants to draw conclusions from the domain "Yellow submarine" as well.The "Yellow submarine" domain is being modeled by "Ringo, " "John, " "Paul, " "George, " and "Young Fred" which form the social network shown in Figure 2. Since the contents of this domain as well as the particular ranks of the agents in it will not be used further in the example, they have been left out.
Since Nowhere man wants to reason about both domains he needs to find a way to amalgamate these two domains.
Again he thinks hard and comes up with the following solution. All he needs to do is integrate the two social networks, recalculate the ranks of all agents of the newly established social network, and reannotate the metainformation in both domains.
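The integration step Nowhere man performs follows Definition 18: take the union of the node and arc sets and recompute the ranks on the merged digraph. A minimal sketch (the pair-of-sets encoding is an assumption; the rank recomputation itself is omitted here):

```python
def integrate(*networks):
    """Each network is a (nodes, arcs) pair, arcs being (truster, trustee)
    tuples. Returns the union network, on which ranks are then recomputed."""
    nodes, arcs = set(), set()
    for n, a in networks:
        nodes |= set(n)
        arcs |= set(a)
    return nodes, arcs
```

Agents appearing in both networks (such as John, Paul, Ringo, and George here) are merged into a single node, so their ranks reflect the trust they receive in both domains.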
Since the networks of "Pepperland" and "Yellow submarine" can be represented as sets of tuples, their integration follows Definition 18.

Related Work

Trust and reputation metrics related to the one used here are presented in [11, 12]. Both of these could have been used instead of PageRank in the approach outlined herein. A much more elaborate system for measuring reputation, and likewise trust, in MAS, called the Regret system, is presented in [13]. It is based on three different dimensions of reputation (individual, social, and ontological) and allows for measuring several types of reputation in parallel. The approach is partly incompatible with ours, but several adjustments would allow us to combine both approaches.
A different approach to a similar problem related to trust management in the Semantic Web is presented in [14]. It provides a profound model based on path algebra and inspired by Markov models. It provides a method of deriving the degree of belief in a statement that is explicitly asserted by one or more individuals in a network of trust, whilst a calculus for computing the belief in derived statements is left to future research. Herein a formalism for deriving the belief in any computable statement is presented for F-logic.

Conclusion
When agents have to solve a problem collectively, they have to reach consensus about the domain since their opinions can differ. Especially when agents are self-interested, their goals in a given situation can vary quite intensively. Herein an approach to reaching this consensus based on a network of trust between agents has been presented, which is a generalization of the work done in [15, 16], which dealt with semantic wiki systems and semantic social networks, respectively. By using network analysis, trust ranks of agents can be calculated, which can be interpreted as an approximation of the probability that a certain agent will say the truth. Using this interpretation, an annotation scheme for F-logic based Horn programs has been developed which allows agents to reason about the modeled domain and make decisions based on the probability that a certain statement (derived or explicit) is true. Based on this annotation scheme and the network of trust, an amalgamation scheme has been developed as well, which allows agents to reason about multiple domains.
Still, there are open questions: how does the approach scale in fully decentralized environments like LSMAS? What are the implications of self-interest, or could agents develop strategies to "lie" on purpose to attain their goals? These and similar questions are the subject of our future research.

Definition 4.
A Horn F-program consists of Horn rules, which are statements of the form h ← b.

Definition 5.
A graph G is the pair (N, A), whereby N represents the set of vertices or nodes and A ⊆ N × N the set of edges or arcs connecting pairs from N.

Definition 7.
A directed graph or digraph G is the pair (N, A), whereby N represents the set of nodes and A ⊆ N × N the set of ordered pairs of elements from N that represents the set of graph arcs.

Definition 8. A valued or weighted digraph G_V is the triple (N, A, V), whereby N represents the set of nodes or vertices, A ⊆ N × N the set of ordered pairs of elements from N that represents the set of graph arcs, and V : N → R a function that attaches values or weights to nodes.

Nowhere man continues his reasoning and calculates the probabilities for the other queries:

∧(main purpose) = Rank(John) + Rank(Paul) + Rank(Ringo) + Rank(George) − Rank(Max) + Rank(Glove) = 0.913043,

s_1 : o_Music : evil noise[has to do with → o_Purpose_of_life] ∧ o_Purpose_of_life[main purpose → glove],
s_2 : o_Music : evil noise[has to do with → o_Purpose_of_life] ∧ o_Purpose_of_life[main purpose → love],
s_3 : o_Music : evil noise[has to do with → o_Purpose_of_life] ∧ o_Purpose_of_life[main purpose → drums],
s_4 : o_Music : harmonious sounds[has to do with → o_Purpose_of_life] ∧ o_Purpose_of_life[main purpose → glove],
s_5 : o_Music : harmonious sounds[has to do with → o_Purpose_of_life] ∧ o_Purpose_of_life[main purpose → love],
s_6 : o_Music : harmonious sounds[has to do with → o_Purpose_of_life] ∧ o_Purpose_of_life[main purpose → drums].

Now according to Rule 1 the conjunctions become

p(s_1) = p(o_Music : evil noise[has to do with → o_Purpose_of_life]) ⋅ p(o_Purpose_of_life[main purpose → glove]),
p(s_2) = p(o_Music : evil noise[has to do with → o_Purpose_of_life]) ⋅ p(o_Purpose_of_life[main purpose → love]),
p(s_3) = p(o_Music : evil noise[has to do with → o_Purpose_of_life]) ⋅ p(o_Purpose_of_life[main purpose → drums]),
p(s_4) = p(o_Music : harmonious sounds[has to do with → o_Purpose_of_life]) ⋅ p(o_Purpose_of_life[main purpose → glove]),
p(s_5) = p(o_Music : harmonious sounds[has to do with → o_Purpose_of_life]) ⋅ p(o_Purpose_of_life[main purpose → love]),
p(s_6) = p(o_Music : harmonious sounds[has to do with → o_Purpose_of_life]) ⋅ p(o_Purpose_of_life[main purpose → drums]).
Figure 1: Social network of "Pepperland."

Table 2:
Trust ranking of the "Pepperland" agents.