1 Why we need a non-reductionist approach to Trust Cristiano Castelfranchi (work with Rino Falcone) Institute for Cognitive Sciences and Technologies - CNR

2 Human centered systems and how to support the user:
Machines should understand & incorporate human social interaction and organization, in order to support and mediate them; machines (agents) should emulate human social interaction and organization for good multi-agent (MA) or heterogeneous interactions and organizations, including: interpersonal aspects (trust, reputation, …), normative aspects, institutional aspects (roles, …). Ex.: the new AI ‘paradigm’: Artificial Social Intelligence.

3 Challenging Questions on Trust in Cyber-Societies
What is the role of trust today? Is trust necessary in the new human-machine scenarios? Is trust useful in the advancing global, pervasive technologies of communication and information? Let me start from three main classes of questions we have to consider for understanding the relationships between trust and cyber-societies: 1) What is the relevance of trust in the new technological relational scenarios? 2) May we consider the possibility of not coping with this complex concept? 3) Which is the right model of trust, and how do we build it? In other words, technology is going to pervade the most intimate social relationships. The first set of fundamental questions has to do with the role of trust in the new relational scenarios: scenarios that are ever more technologically advanced, in which developments in microelectronics, materials science, computer science, and communication technologies deliver increasingly innovative and pervasive materials, devices, and methodologies (which invade domains of a strictly social nature). Is it possible to substitute trust with other concepts/mechanisms in the cooperation framework (short-cuts, tricks, 'ad hoc' solutions, etc.)? Is trust deeply analyzed and understood? May we just import models and theories from other disciplines (like Sociology, Economics, Psychology, etc.) without considering the new original domain in which we have to embed them?

4 What Trust is: Some Definitions
There are hundreds of trust definitions (and many of them very interesting); I just show you a couple of them to underline some relevant concepts. Every day, at every moment, we put our competence for trust to the test: could we drive on a motorway without trusting the others? Could we cross at a pedestrian crossing? Could we eat in restaurants, or more generally consume food not directly produced by us? Every day we undertake the adventure of trusting thousands of others (seen or unseen) to act reliably. The individual who presents himself unarmed to his fellows grants them a certain trust… How can trust be obtained through man's asocial sociability? Consider the Unabomber case and the breaking of the social bond of trust at its base. 1) Expectation and Vulnerability (which expresses the concept of risk) 2) Subjectivity and Dependency 3) Different levels of Expectations. Trust is the bond of society, the vinculum societatis, and we need not doubt him (John Locke). The willingness of a party to be vulnerable to the actions of another party based on the expectation that the other party will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party (Mayer et al.). Trust is the subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends (Gambetta). B. Barber breaks trust down into three components: 1) an expectation of the fulfillment of the natural social order; 2) an expectation of competent role performance on the part of the trustee; and 3) an expectation that a trustee will fulfill any fiduciary obligations.

5 Different approaches to Trust
- The cognitive approach, aimed at characterizing trust basically as expectations, beliefs, desires, attitudes, feelings, or whatever, thus modeling mind or personality. - Other approaches basically aimed at operationalizing this notion for some use in economics or in applications; these approaches are merely quantitative. - Other approaches, reducing trust to some measurable index, used for modeling the learning of others' trustworthiness through repeated interactions, or the dynamics of trust on the basis of experience and sources, and so on. All these approaches are relevant and useful for progress. They compete with each other, but they are in fact also cooperating in a long-run enterprise about a very rich and complex phenomenon that must be modeled from different perspectives: - Game-theoretical approach - Security-oriented approach - Modal logics approach - Other operational approaches - Socio-cognitive approach

6 A multi-facet Trust I will illustrate:
Perceived security and safety as an aspect of trust. Security in this context is mainly used for qualifying the communication infrastructure, network, etc. In general these approaches are sufficient neither to define trust nor to produce or create it. Trust building is more than secure communication via electronic networks: for example, the reliability of information about the status of your trade partner has very little to do with secure communication. Security means identifying the sender of a message (for example by verifying the origin of a received message; by verifying that a received message has not been modified in transit; by preventing an agent who sent a message from later denying that it sent it). With sophisticated cryptographic techniques it is possible to give some solutions to these security problems. But consider the variety of cases in which it is necessary or useful to interact with agents whose identity, history or relationships are unknown, and/or about whom it is only possible to make uncertain predictions regarding their future behaviours. I will illustrate: how trust can be a disposition, but is also an 'evaluation', and also a 'prediction' or better an 'expectation'; how it is a 'decision' and an 'action', a 'counting on' (relying) and 'depending on' somebody; what the link is with uncertainty and risk taking (fear and hope); how it creates social relationships; how it is a dynamic phenomenon with loop effects; and how it derives from several sources.

7 A multi-facet Trust Trust is a complex cognitive (and affective) construct, and a social act, and a social relation. A complex and complete theory and model of trust is not necessary in all uses and applications. There are natural tendencies to reduce the theory and the implementation of trust to the specific practical aspects needed in each application, without a real perception of the complexity of the phenomenon. Also in humans, trust is (or becomes) routine-based and automatic, but… >> I will underline the real complexity of trust (not for mere theoretical purposes but for advanced applications), and I will criticize some simplistic solutions to complicated issues.

8 Different Kinds of Trust
Different kinds of trust should be modeled. The expansion of EC (electronic commerce) magnifies the problem of trust, because agents reach out far beyond their familiar trade environments and may often interact with others whom they have never met and who might have only acquired a temporary identity. TTPs (Trusted Third Parties) introduced into an agent community take care of trust building among the other agents in the network (TTP: authentication server, key distribution server, etc.): - trust in the environment, infrastructure, procedures - trust in personal agents and/or in mediating agents - trust in potential partners - trust in information sources - trust in the warrantors and authorities

9 McKnight's model is OK (it covers several important aspects and some of their interactions), but it is just a black-box model; a much more analytic, process-based model is needed, and a relational one (not only mental).

10 Socio-Cognitive model of Trust
Our model tries not only to identify the basic mental ingredients (answering the question: what are they?) but also to give an interpretation of the kind of interaction among these elements (answering the question: how do they interact?) and of how the final decision to trust or not is taken (on the basis of all the internal and external constraints). There is a set of components that have an influence on trust but are not directly linked with explicit and rational reasoning (affectivity, belongingness, care, etc.). A basic theory of trust: in terms of the necessary mental ingredients and the decision to delegate/rely on. The quantitative dimensions of trust are based on the quantitative dimensions of its cognitive constituents. Let's present the 'cold' model, putting aside the analysis of the emotional and affective components of trust (very important in cyber-society, in HMI, ..).

11 Ambiguity of Trust The word trust is ambiguous:
Simple evaluation, prediction towards another agent or something (a simple disposition): How much would you trust him? Reliance (an intention to delegate and trust): I have decided to trust him but I have not yet done it. Delegation (which makes the trustor vulnerable): I am trusting him. Each of these notions is based on a complex mental state of the trustor. The trust disposition is a determinant and precursor of the decision, which is a determinant and a precondition of the act; however, the disposition and the intention remain as mental constituents during the next steps. In the case of autonomous agents who have no external constraints conditioning their freedom of reliance, we can say that a sufficient value of core trust is a necessary but not sufficient condition for a positive decision to trust, and the decision to trust is a necessary but not sufficient condition for delegation; vice versa, delegation implies the decision to trust, and the decision to trust implies a sufficient value of core trust. The word trust is ambiguous: - simple evaluation of the potential trustee (trust disposition) - evaluation + decision (decision to trust) - the (inter)action of trusting (act of trusting)

12 Ambiguity of Trust
Trust Disposition (TD) → Decision to Trust (DtT) → Act of Trusting (AoT): the TD remains within the DtT, and both TD and DtT remain within the AoT.

13 Trust as a Mental Attitude

14 First, one trusts another only relatively to some goal
- Second, trust itself consists of beliefs. Trust is a mental state, a complex attitude of an agent x towards another agent y about the behavior/action a relevant for the result (goal) g: Trust(x y t). We mean: only an agent endowed with goals and beliefs can trust. Just informational or also motivational attitudes? One trusts for something one wants to achieve, that one desires: if I have no goals at all, I cannot really decide, nor care about anything. When I speak about goals I do not mean just goals about specific tasks but also about classes of tasks (consider trust about friendship, etc.). In other words, we think that trust in the trustee is always relative to some interest, need, concern, desire, expectation of the trustor. Consider that in our model we use the term goal not necessarily as a specific objective currently pursued. Goal is the general, basic teleonomic notion: any motivational representation in the agent; desires, motives, will, needs, objectives, duties, utopias are all kinds of goals.

15 Trust Arguments Trust(x y t)
x is the relying agent, who feels trust; it is a cognitive agent endowed with internal explicit goals and beliefs. y is the agent or entity which is trusted (y is not necessarily a cognitive agent): it might also be an object or a tool involved in an action of x, or a natural force or event; a) it is something able to cause some effect g through some action a; b) this effect is useful for x (a goal of x) and x is relying on y for it. x trusts y about g/a and for g/a; x also trusts 'that' g will be true. We will use the operator with three arguments, Trust(x y t), to denote a specific mental state composed of other more elementary mental attitudes (beliefs, goals, etc.).

16 Basic Beliefs of Trust In order to delegate a task to some agent y, x has to believe that: - y is able to do what x needs (competence) (EVALUATION) - y will actually do it (motivation/availability) (EVALUATION & PREDICTION) - there will be no harm from y (goodwill). These beliefs will be enriched and supported by other beliefs depending on the different kinds of delegation and the different kinds of agents; however, they are the real cognitive kernel of trust. As we will see later, even the goal can be varied (in negative expectations and in aversive forms of 'trust'), but not these beliefs.

17 Disposition Belief x should think that y not only is able and can do that action/task, but that y actually will do what x needs. When applied to a cognitive, intentional trustee, this belief is articulated in and supported by a couple of other beliefs: - Willingness/Motivation Belief: x believes that y has decided and intends to do a (in fact, for this kind of agent to do something, it must intend to do it); - Persistence Belief: x should also believe that y is stable enough in its intentions and has no serious conflicts about a (otherwise it might change its mind). Thanks to these two beliefs, cognitive agents are predictable. So trust requires modeling the mind of the other.

18 Basic Ingredients of Trust
Using the logics of Meyer, van Linder, et al., and introducing some new ad hoc predicates, we can write:
G0: Goalx(g)
B1: Bx Cany(a,g) (positive evaluation)
B2: Bx <WillDoy(a)>g (prediction/expectation)
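Purely as an illustration (not part of the original slides), here is a minimal Python sketch of these ingredients as a data structure; the field names, the numeric degrees attached to the beliefs, and the product used to combine them are my own assumptions, not the authors' formalism.

# Illustrative sketch only: the trustor's goal g plus the two core beliefs
# B1 (competence: "y can do a and achieve g") and B2 (willingness/prediction:
# "y will actually do a"), each with a subjective degree of certainty.
from dataclasses import dataclass

@dataclass
class CoreTrust:
    goal: str                  # g: the result x cares about
    competence_belief: float   # degree of B1: Bx Cany(a,g)
    willingness_belief: float  # degree of B2: Bx <WillDoy(a)>g
    def degree(self) -> float:
        # The slides only say the degree of trust is a function of the
        # certainty of these beliefs; the product is an arbitrary choice.
        return self.competence_belief * self.willingness_belief

# Example: x's trust that y will carry out action a achieving g
print(CoreTrust("g_achieved", competence_belief=0.9, willingness_belief=0.8).degree())  # 0.72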

19 Other Beliefs for Trust
Dependence Belief: X believes, in order to trust Y and delegate to it, that - either X needs it, X depends on it (strong dependence), - or at least that it is better for X to rely on it than not to rely on it (weak dependence). In other terms, when X trusts someone, X is in a strategic situation: X believes that there is interference and that its rewards, the results of its project, depend on the actions of another agent Y. In strong dependence X is not in a condition to achieve the task without the help of Y, while in weak dependence X would be able to achieve the task but it is more convenient to achieve it through Y.

20 Trust as Beliefs: Belief Credibility and Source Reliability
The credibility of a piece of knowledge (possibly a belief) is a function of its sources. The basic principles governing credibility: if the source is reliable, its information is credible and is believed; if it is not reliable, its information is not credible and not believed (in quantitative terms: the more reliable the source, the more credible the information provided). And the more numerous the converging (independent) sources, the more credible the information provided.
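A hedged sketch of these two principles in Python follows; the noisy-OR style combination is just one possible aggregation consistent with "the more reliable the source, and the more numerous the converging independent sources, the more credible the information", not a formula from the slides.

# Each independent source has a reliability in [0, 1]; the belief's credibility
# grows both with each source's reliability and with the number of converging
# sources (noisy-OR combination, an illustrative assumption).
def credibility(source_reliabilities):
    disbelief = 1.0
    for r in source_reliabilities:
        disbelief *= (1.0 - r)   # each source independently fails to convince
    return 1.0 - disbelief

print(credibility([0.6]))        # a single, fairly reliable source -> 0.6
print(credibility([0.6, 0.6]))   # two converging sources           -> 0.84
print(credibility([0.6, 0.9]))   # adding a very reliable source    -> 0.96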

21 Trust as Internal Attribution and Theory of the other’s Mind

22 Internal and External Trust
Internal trust: trust 'in' someone or something that has to act and produce a given result thanks to its internal characteristics (Bacharach & Gambetta's 'kripta'); it includes both motivational and competence-related powers/attitudes. External trust: trust in such factors as opportunities, absence of obstacles, positive interference, etc.; it includes both positive conditions (opportunities) and interferences (adversities). Global trust = Internal trust + External trust.
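To make the decomposition concrete, here is a tiny illustrative sketch; the slide only states that global trust combines the internal and the external component, so the product used below (and the numbers) are my assumptions.

# Global trust combines trust 'in' y (internal attribution) with trust in the
# environment (opportunities, absence of obstacles); the product is only an
# illustrative way to combine them.
def global_trust(internal: float, external: float) -> float:
    return internal * external

# Same global value, very different composition: this matters because the
# intervention strategies differ (work on y vs. work on the circumstances).
print(global_trust(0.9, 0.5))  # reliable y, adverse environment -> 0.45
print(global_trust(0.5, 0.9))  # favourable environment, weak y  -> 0.45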

23 Internal and External Attribution
Why is this decomposition (internal vs. external) so important? Let me say that we could reduce (synthesize) trust to a subjective probability index (it is on the basis of this subjective perception/evaluation of risk and opportunity that the trustor decides whether to rely or not). The fact is that this index hides a set of beliefs. To cognitively ground such a probability (which after all is subjective, i.e. mentally elaborated) is relevant because: …. reciprocal influences …. There could be logical and rational meta-considerations about a decision even in these apparently indistinguishable situations. Two possible examples of these meta-considerations are: - giving trust increases the experience of an agent (so comparing two different situations, one in which we attribute low trustworthiness to the agent and one in which we attribute high trustworthiness to him, with obviously the same resulting probability, we have a criterion for deciding); - the trustor can learn different things from the two possible situations, for example with respect to the agents, or with respect to the environments. a) Within the same global probability or risk context, the decision of the trustor might differ according to the trust composition (different trustor personalities); b) the trust composition allows completely different possible intervention strategies: manipulating the external variables (circumstances, infrastructures) is completely different from manipulating internal parameters. This cognitive embedding is fundamental for deciding whether and how to rely, influence, persuade, etc.

24 Internal Attributions of Trust
The weaker form of this component of trust is the belief of 'unharmfulness': there are no dangers, no hostility, no defeating attitudes, no antipathy, etc. Doubts and suspicions can separately affect these three facets of our internal trust 'in' y: he is not so expert, or so skilled, or so careful, ...or he is not smart or reactive enough to recognize and exploit opportunities or to cope with interference, ....; he is quite fickle, or not engaged enough and not putting in effort, or he has a conflict of preferences: he will and will not at the same time, ..; there is some unconscious hostility, some antipathy, he does not care much about me, he is quite selfish and egocentric. Summing up, we can distinguish three internal attributions: the ability of y (skills, know-how, carefulness and accuracy, self-confidence, ..); the non-social, generic motivational aspects (intention, persistence, commitment, effort, ..); and the social attitudes, basically consisting in the belief/feeling that there is a pro-attitude, a 'goodwill', and that there are no conflicting (anti-social) motives or, in any case, that there is an adoptive attitude; morality.

25 Trust as a Theory of Mind
As usual in arguments and models inspired by rational decision theory or game theory, with rationality also "selfishness" and "economic motives" are smuggled in. When X trusts Y in strong delegation (goal-adoption and social commitment by Y), X is not assuming that he acts irrationally, i.e. against his interests. Perhaps he acts "economically irrationally" (i.e. sacrificing his economic goals); perhaps he acts in an unselfish way, preferring to his selfish goals some altruistic or pro-social or normative motive; but he is not irrational, because he is just following his subjective preferences and motives, including friendship, or love, or norms, or honesty, etc. Thus when X trusts Y, X is just assuming that other motivations will prevail over his economic interests or other selfish goals. X not only believes that Y will intend and persist (and then will do), but X believes that Y will persist because of certain motives of his that are more important than other motives inducing him to defection and betrayal. And these motives are already there, in Y's mind and in their agreement: X does not have to find new incentives, to think of additional prizes or of possible punishments. If X is doing so (for example by promising or threatening), X does not really trust Y (yet). Trust cannot be reduced to the frequency of a given behavior; it requires 'causal attribution' and a model of the 'kripta' (hidden, mental) features (Bacharach & Gambetta) determining the certainty and the quality of the expected behavior. Why in fact should Y do as expected?! - Non-deontic base: Y is not committed - Deontic base: e.g., Y is committed, or there is a Norm. Not just 'regularities': the prediction is (implicitly or explicitly) based on some model of the mind of Y, on his values, motives, beliefs, preferences: for which "reasons" should he keep his promises? Or respect norms? Or obey orders? The prevailing motives of Y.

26 Trust as a Theory of Mind
In fact, trust is also a theory and an expectation about: - the kind of motivations the agent is endowed with, - which will be the prevailing motivations in case of conflict.

27 Internal and External components of the Decision to Trust y (to count on y)
Internal (attributed to y): Ability, Know-How, Self-Confidence (Competence); Intends & Persists, Motivation (Willingness); Morality, Fear of Authority (Unharmfulness). External: Opportunities, Obstacles, Dangers. [Diagram: these internal and external factors converge on the Decision to Trust y / to count on y.]

28 Ambiguity of Trust
Trust Disposition (TD) → Decision to Trust (DtT) → Act of Trusting (AoT).

29 Decision to Trust
Using the logics of Meyer, van Linder, et al., and introducing some new ad hoc predicates, we can write:
G0: Goalx(g)
B1: Bx Cany(a,g)  G1: Goalx (Cany(a,g)) (positive evaluation of competence)
B2: Bx <WillDoy(a)>g  G2: Goalx (<WillDoy(a)>g) (positive expectation of willingness/motivation)
B3: Bx DependenceXY(a,g)  G3: Goalx not(<WillDoX(a)>g)

30 Degrees of Trust Generally the quantification of trust is quite ad hoc (in common sense, in the social sciences, in AI), since no real definition and cognitive characterisation of trust is given. For us, the quantitative dimensions of trust are based on the quantitative dimensions of its cognitive constituents: 1) the actions of the agents depend on what they believe, i.e. they are relying on their beliefs; 2) in addition, agents act on the basis of the degree of reliability and certainty they attribute to their beliefs. In other words, trust in an action or plan (reasons to choose it and expectations of success) is grounded on and derives from trust in the related beliefs. For us trust is not an arbitrary index with a merely operational importance, without a real content; it is based on the subjective certainty of the relevant beliefs, which support each other and the decision to trust. The degree of trust is a function of the subjective certainty of the pertinent beliefs (Josang) and of the possible degree of the quality: "I'm pretty sure that Y is sufficiently skilled" vs. "I'm sufficiently sure that Y is very skilled".

31 Degrees of Trust
In order for Ag1 to trust Ag2 about t, and then delegate that task, it is not only necessary that DoT(Ag1,Ag2,t) exceeds a given (Ag1's) threshold, but also that delegation constitutes the best solution (compared with the other possible solutions). It is possible to determine a trust choice starting from each combination of credibility degrees of Ag1's main beliefs and from a set of Ag1's utilities. It is possible that, once the set of utilities and the kind and degree of control are fixed, different combinations of credibility degrees of the main beliefs produce the same choice. However, more generally, changing the credibility degree of some beliefs should change the final choice about the delegation (and the same holds for the utilities and for the control). As we saw, we are able to establish which decision branch is best on the basis of both the relative (success and failure) utilities of each branch and their (trust-based) probabilities. But in several situations and contexts it is important to consider the absolute value of some parameter independently of the values of the others; this suggests introducing some saturation-based mechanism to influence the decision, some threshold. U(X) is agent X's utility function, and specifically: U(X)p+, the utility of X's successful performance; U(X)p-, the utility of X's failed performance; U(X)d+, the utility of X's successful delegation; U(X)d-, the utility of X's failed delegation. For delegating we must have:
DoTXYt * U(X)d+ + (1 - DoTXYt) * U(X)d- > DoTXXt * U(X)p+ + (1 - DoTXXt) * U(X)p-
where DoTXXt is the self-trust of X about t. We do not necessarily delegate to the most trustworthy agent.
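The inequality above can be read directly as an expected-utility comparison. The following sketch implements it; variable names are mine, and the numbers are only meant to show that a highly trusted Y is not necessarily delegated to.

# X delegates task t to Y only if the trust-weighted expected utility of
# delegating exceeds that of performing the task personally (DoTXXt = self-trust).
def should_delegate(dot_xy, u_d_plus, u_d_minus, dot_xx, u_p_plus, u_p_minus):
    eu_delegate = dot_xy * u_d_plus + (1 - dot_xy) * u_d_minus
    eu_perform  = dot_xx * u_p_plus + (1 - dot_xx) * u_p_minus
    return eu_delegate > eu_perform

# High trust in Y (0.8), but failures while delegating are costly: do it yourself.
print(should_delegate(0.8, 10, -8, 0.75, 10, -2))   # False (6.4 vs 7.0)
# Higher trust in Y and milder delegation failure: delegate.
print(should_delegate(0.9, 10, -3, 0.5, 10, -2))    # True  (8.7 vs 4.0)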

32 Risk, Investment and Bet
Two risks are implied in x's trusting y. As we have said, whenever there is trust there is some risk. Moreover, there might be not only the frustration of g, the missed gain; there might be additional damages as an effect of failure, negative side effects: the risks in case of failure are not the simple counterpart of the gains in case of success. a) the risk of failure, the frustration of g (possibly forever, and possibly of the entire plan containing g); b) the risk of resource waste and possible harms: x risks not only missing g (missed gains) but also wasting its investments (loss) or incurring damages.

33 A variable threshold for risk acceptance/avoidance
For example, it is possible that the value of the damage per se (in case of failure) is too high to choose a given decision branch, and this independently of the probability of the failure (even if it is very low) and of the possible payoff (even if it is very high). In other words, that danger might seem an unbearable risk to the agent.

34 A variable threshold for risk acceptance/avoidance
Such a threshold can vary, not only from one agent to another (personality) but also according to several factors within the same agent: sH = f(U(X)d-, U(X)d+), where sH is the value computed by the function f. In particular, we claim that the acceptable hazard varies with the importance of the threat/damage and with the expected reward. More precisely: the greater the damage (U(X)d-), the more sH grows; the greater the utility (U(X)d+), the more sH is reduced. One might also have one single dimension and threshold for risk (by using the formula 'damage * hazard'). However, we claim that there could be different heuristics for coping with risk (this is certainly true for human agents). For us, a great damage with a small probability and a small damage with a high probability do not represent two equivalent risks: they can lead to different decisions, they can pass or fail the threshold differently. In the same way, we may also introduce an 'acceptable damage' threshold sd and say: f(U(X)d-, U(X)d+) = 1 if U(X)d- < sd; otherwise it is some unspecified computation.
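Below is a hedged sketch of such a variable threshold; the slide only states that sH grows with the possible damage, shrinks with the possible reward, and saturates (f = 1) beyond the 'acceptable damage' cutoff sd, so the concrete functional form is my assumption.

# sH = f(U(X)d-, U(X)d+): grows with the damage, shrinks with the reward,
# and saturates at 1 when the damage is beyond the acceptable-damage cutoff sd.
def risk_threshold(u_d_minus: float, u_d_plus: float, sd: float = -100.0) -> float:
    if u_d_minus < sd:                 # unbearable damage: threshold saturates
        return 1.0
    damage = max(0.0, -u_d_minus)
    reward = max(1e-9, u_d_plus)
    return min(1.0, damage / (damage + reward))  # illustrative form only

print(risk_threshold(-5, 20))     # small damage, big reward -> 0.2
print(risk_threshold(-40, 20))    # bigger damage            -> ~0.67
print(risk_threshold(-500, 20))   # beyond the sd cutoff     -> 1.0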

35 Ambiguity of Trust
Trust Disposition (TD) → Decision to Trust (DtT) → Act of Trusting (AoT).

36 Delegation and Trust
Trust is the mental counterpart of delegation: trust and reliance denote the mental state preparing and underlying delegation. There may be trust without delegation: either the level of trust is not sufficient to delegate, or the level of trust would be sufficient but there are other reasons preventing delegation (for example, prohibitions). So trust is normally necessary for delegation, but it is not sufficient: delegation requires a richer decision. There may be delegation without trust: these are exceptional cases in which either the delegating agent is not free (coercive delegation) or he has no information and no alternative to delegating, so that he must just make a trial (blind delegation). Suppose that you don't trust a drunk guy at all as a driver, but you are forced at gunpoint to let him drive your car. The decision to delegate has no degrees: either X delegates or X does not delegate. Trust, instead, has degrees: X trusts Y more or less relatively to a given action/task, and there is a threshold under which trust is not enough for delegating. [Figure: agent1's plan composed of actions a1, a2, a3; agent2 could produce one of them, and agent1 trusts/relies on agent2 for it within agent1's plan.] There may be trust without delegation, and there may be delegation without trust.

37 Trust and Dependence
[Figure: X's dependence relations before the act of trusting and after the act of trusting.]

38 Trust as Expectation

39 TRUST is (also) an Expectation, and
this explains its relationships with possible 'Disappointment', with 'Hope', with possible monitoring, and with the counterpart and possible complement of 'Anxiety' (risk).

40 What are ‘expectations’?
We characterize an 'expectation' not just as a 'prediction' or 'forecast' (that is, a more or less certain belief about the future), but as a composite mental representation in which the agent is not simply a predictor (like a computer) but is 'interested' in the truth of the assumption, and is actively searching or waiting for evidence confirming or invalidating it (so the belief is under scrutiny, or open to scrutiny). Usually this is because the agent is also 'concerned', that is, some goal of the agent is involved and either satisfied or threatened by the prediction.

41 Cognitive Anatomy of Expectations:
Expectations are compositional states (and in part activities). Their ingredients are: a belief that p about the future (a prediction) with its degree of certainty (%b) (how sure one is that…); a wish, desire, need, .. (in our vocabulary a generic "goal", not in the sense of an objective to be pursued), with its degree of value (%v), of importance (and perhaps also of urgency, which is a different thing); an epistemic goal (and activity): the goal to know, to check whether p happens to be true ("expecting", "waiting for"). Let's put aside the last component for the moment.

42 The quantitative aspects of mental states - EXPECTATIONS
Fear (worries), hope, surprise, disappointment, relief and frustration have an intensity; they can be more or less severe. The dynamics and the degree of the macro-states are strictly a function of the dynamics and strength of their micro-components. EXPECTATIONS have two (independent) dimensions: - the degree of the belief's credibility - the degree of the goal's value

43 the intensity of the reaction is function of its components
Hope & Fear: Positive Expectation = Bel(x p_fut) & Goal(x p_fut); Negative Expectation = Bel(x p_fut) & Goal(x Not p_fut). Bel% = beliefs have degrees of certainty: one can be pretty sure that, sure enough, 50%, etc. GoalV = goals have a value: they can be more or less important. Given this, and given the cognitive ingredients postulated in 'expectation-based' emotions, we assume that the intensity of the reaction is a function of its components.

44 The hybrid nature of expectations
Two (independent) dimensions in expectations:
- Positive expectation (Bel x P & Goal x P): if P happens, no surprise + achievement; if Not P happens, surprise + frustration = disappointment.
- Negative expectation (Bel x Not P & Goal x P): if P happens, surprise + non-frustration = relief; if Not P happens, no surprise + frustration.
"How much are you disappointed?" "I'm very disappointed: I was sure to succeed." "How much are you disappointed?" "I'm very disappointed: it was very important for me." "How much are you disappointed?" "Not at all: it was not important for me." "How much are you disappointed?" "Not at all: I have just tried; I was expecting a failure." The degree of disappointment seems to be a function of both dimensions and components. It seems to be felt as a unitary effect and a global state.
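As an illustration of this 2x2 analysis, the sketch below classifies the four outcomes and attaches an intensity to them; the labels come from the slide, while the product of belief degree and goal value used for the intensity is only one plausible choice of mine.

# An expectation = a belief about P (with a degree of certainty) + a goal about P
# (with a value). The outcome label depends on what actually happens.
def expectation_outcome(believed_p, wanted_p, p_happened, belief_degree, goal_value):
    surprise = (believed_p != p_happened)
    frustration = (wanted_p != p_happened)
    if surprise and frustration:
        return "disappointment", round(belief_degree * goal_value, 2)
    if surprise and not frustration:
        return "relief", round(belief_degree * goal_value, 2)
    if frustration:
        return "frustration (no surprise)", round(goal_value, 2)
    return "achievement (no surprise)", round(goal_value, 2)

# "I'm very disappointed: I was sure to succeed" (high certainty, high value)
print(expectation_outcome(True, True, False, belief_degree=0.95, goal_value=0.8))
# "Not at all: it was not important for me" (same certainty, low value)
print(expectation_outcome(True, True, False, belief_degree=0.95, goal_value=0.05))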

45 Trust Sources

46 The SOURCES of Trust
On which bases/evidences does X build a given expectation (trust) about Y's performance? > Previous direct experience of Y in task τ > Inference from Y's class, group or category: B1: Y is a member of C (e.g., a medical doctor); B2: Cs are trustworthy (competent, reliable, safe) as for τ; B3: Y (as a C) is trustworthy (competent, reliable, safe) as for τ >> and other inferential processes (Everybody here stops at the red light → Y will stop at the red light)

47 The SOURCES of Trust
> Analogy: from Y to Y'; from τ to τ' (analogy; case-based reasoning). Trust cannot just be based on 'norms' (statistical or deontic) and their respect. > (Pseudo-)Transitivity: since B1: Z trusts Y as for τ & B2: Z is trustworthy as a judge of τ performances and competences ⇒ B3: Y is trustworthy as for τ > Reputation: since they say that Y is OK ⇒ I trust Y > Norms & Control: since I trust the authority ⇒ I trust Y

48 The SOURCES of Trust > Generalized trust; trust atmosphere; trust by default: - personality or mood trait: unless I have reasons for not trusting, I trust; - environmental, contextual: since it seems that, without special control, everybody here trusts everybody, by default I will trust everybody. > Reciprocation: since Y trusts me, I trust him. > Empathic and affective trust.

49 Do Contracts & Norms replace Trust?

50 Trust as a three-party relationship (Institutional Trust)
Do we overstate the importance of trust? It might be argued that people put contracts in place precisely because they do not trust the agents they delegate tasks to: people want to be protected by the contract. What would be important is not trust but the ability of some authority to assess contract violations and to punish the violators. But this view is correct only if one adopts a quite limited view of trust, in terms of beliefs about the character or friendliness, etc., of the trustee (the delegated agent). In the case of contracts, organizations, etc., we just deal with a more complex and specific kind of trust. Trust is always crucial: we put a contract in place only because we believe that the agent will not violate the contract, and this is precisely trust, the belief that the trustee worries about law and punishment.

51 Trust as a three-party relationship (Institutional Trust)
This level of trust is a three-party relationship among: - a client X, - a contractor Y, - the authority A. So we are considering different tasks, but related to each other, and three trust sub-relations: X trusts A, its ability to control, to punish, etc., and relies on A for this; X trusts Y, believing that Y will do what was promised because of its honesty and/or because of its respect/fear towards A; Y trusts A (both when Y is the trustor and when it is the trustee), the same beliefs being the basis of its respect/fear towards A (which is a paradoxical form of trust: trusting a threatening agent!). Without these three trust relations the contract would be useless.

52 Trust as a three-party relationship
In contracts and organizations it is true that "personal" trust in y may not be enough; but what we put in place is a higher level of trust, which is our trust in the authority, and also our trust in y as for acknowledging, worrying about and respecting the authority.

53 Trust Dynamics 3 examples

54 Three interesting cases of Trust Dynamics
Since trust is not simply an external observer's prediction or expectation about a matter of fact, we have to consider that, in one and the same situation, trust is influenced by trust in several rather complex ways. Three interesting cases: i) the well-known phenomenon that trust evolves in time and has a history, that is, A's trust in B depends on A's previous experience and learning with B itself or with other (similar) entities; this is the traditional problem of trust reinforcement on the basis of successful experiences and, vice versa, of its decrease in case of failures; ii) the marketing problem: trust from task to task; iii) the fact that in the same situation trust is influenced by trust in several rather complex ways: the trust 'loop'.

55 Trust Dynamics: ‘Volkswagen Tomato sauce’?
Would you buy a 'Volkswagen tomato sauce'? And what about a FIAT tomato sauce? And a Volkswagen water pump? Would you trust a surgeon to prescribe you a drug? Or a medical doctor for a serious surgical intervention? If X trusts Y for a given τ, will she trust Y also for τ'? Goods/services taxonomy or analogy-affinity: the underlying theory of abilities and competences (trust as evaluation).

56 Trust Dynamics: 'Volkswagen Tomato sauce'?
Y is trustworthy for τ because of his qualities q', q'', q''' (not because of q''''). Are q', q'' or q''' useful/necessary for τ'? YES ==> trust transition from τ to τ'.

57 Trust Reinforcement/Decrement: The classical view
This is a trivial and not very explanatory view of trust dynamics. In general, … Of course, there is a set of problems to solve and concepts to clarify (when we can really speak about success or failure, how much and in which way trust is changed, and so on), but this assumption is generally accepted without any more interesting analysis. [Diagram: Trust → Reliance; Success → + (trust increases); Failure → - (trust decreases).]

58 A more correct view. Problem: trust is modelled as a simple index, a number, a dimension (a mere subjective probability); a cognitive attribution process is needed! Following 'causal attribution theory', any success or failure can be ascribed to different factors: - internal versus external; - occasional facts versus stable properties. Internal: of the trustee. External: depending on the environment, etc. Occasional: not foreseen nor predictable. Stable: characteristics of the trustee or of the environment in which he is acting. The main problem with this approach is that ………. success and failure are referred to the whole object of the reliance act: this approximation loses some very relevant characteristics we are going to analyse.

59 Causal Interpretation of the Failure
IS THE FAILURE DUE TO EXTERNAL OR INTERNAL FACTORS? ARE THEY OCCASIONAL OR STABLE? The cognitive, emotional, and practical consequences of a failure (or a success) strictly depend on this causal interpretation. For example, a failure will impact the esteem of a trustee only when attributed to internal and stable characteristics of the trustee itself. In the same way, a failure is not enough to produce a crisis of trust: it depends on the causal interpretation of the outcome.
- External + stable: reduce the trust in the external component of the global event.
- External + occasional: keep the trust in the external/internal component constant, but check for similar circumstances or attitudes.
- Internal + stable: reduce the trust in the internal component of the global event.
- Internal + occasional: keep the trust constant, but check for similar circumstances or attitudes.
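A minimal sketch of this attribution matrix follows; the 0.2 reduction step and the tuple-based bookkeeping are arbitrary illustrative choices of mine, while the internal/external and occasional/stable distinctions are the slide's.

# After a failure, which component of trust is reduced depends on whether the
# cause is judged internal or external and occasional or stable.
def update_after_failure(trust_internal, trust_external, cause, stability, step=0.2):
    if stability == "occasional":
        return trust_internal, trust_external, "keep constant, check similar circumstances"
    if cause == "internal":
        return max(0.0, trust_internal - step), trust_external, "reduce internal trust"
    return trust_internal, max(0.0, trust_external - step), "reduce external trust"

print(update_after_failure(0.8, 0.9, cause="internal", stability="stable"))
print(update_after_failure(0.8, 0.9, cause="external", stability="occasional"))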

60 Trust Dynamics in the same interaction
My decision to trust, and my trust itself, are affected by the effects of my decision to trust. [Diagram: loop among trust attribution, trust, and the trustworthiness of B about t in Ω ("We define as trustworthiness of B about t in Ω …").]

61 Forms of Trust ‘Transmission’
- (pseudo-)transitivity: from Z to X, about Y;
- from class to individual: from Y's class to Y as an individual;
- from individual to individual, by analogy: from Z to Y, since Z ≈ Y (X trusts Y as for τ ⇒ X trusts Z as for τ);
- from task to task (by class or by analogy): from task τ to task τ' (X trusts Y as for τ ⇒ X trusts Y as for τ');
- generalized trust: X trusts Y as for τ ⇒ X trusts Y as for any τ in a given class;
- generalized trust: X trusts Y as for τ ⇒ X trusts anybody in …;
- reciprocation: from 'X trusts Y' to 'Y trusts X' (X trusts Y as for τ ⇒ Y trusts X as for τ).
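The sketch below illustrates two of these patterns, pseudo-transitivity and class-to-individual inference; the relation store, the tasks, and the min-combination are my assumptions, not the authors' algorithm.

# X adopts Z's trust in Y only to the degree X trusts Z as a judge for that
# kind of task; class-to-individual inference projects class-level trust onto
# a member of the class. All values and tasks are made up for the example.
trust = {("X", "Z", "evaluating_surgeons"): 0.9,   # X trusts Z as an evaluator
         ("Z", "Y", "surgery"): 0.8}               # Z trusts Y for surgery
class_trust = {("X", "medical_doctor", "diagnosis"): 0.7}
membership = {"Y": "medical_doctor"}

def pseudo_transitive(x, z, y, task, judge_task):
    return min(trust[(x, z, judge_task)], trust[(z, y, task)])

def from_class(x, y, task):
    return class_trust[(x, membership[y], task)]

print(pseudo_transitive("X", "Z", "Y", "surgery", "evaluating_surgeons"))  # 0.8
print(from_class("X", "Y", "diagnosis"))                                   # 0.7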

62 Trust and Security

63 Trust and Security Security techniques can guarantee the identification of individuals and the privacy of transmissions, but they cannot guarantee that an interaction partner has the competence he claims or that he is honest about his intentions (security can be useful in cases of intrusiveness, identification, etc.). A conceptualization of trust, and of how trust can be used in artificial societies, is a different subject of study from the techniques applied for secure network protocols and cryptography. Trust must give us tools for acting in a world that continues to be, in principle, insecure, where we have to make the decision to rely on someone in risky situations.

64 Trust and Security
We cannot reduce trust to safety and security, since on the one side what matters is first of all the 'perceived' safety, and on the other side building a trust environment and atmosphere and trustworthy agents is one basis for safety, and vice versa. [Diagram: SECURITY & SAFETY ↔ TRUST.]

65 Trust and Reputation

66

67 Issues - why trust cannot be reduced to the frequency of a given behavior and requires 'causal attribution' and a model of the 'kripta' (hidden, mental) features determining the certainty and the quality of the expected behavior; - why trust cannot just be reduced to subjective probability, and why a simple 'number' is not enough for managing trust; - why trust cannot just be based on 'norms' and their respect; - why it is not true that where there are contracts and laws there is no longer trust; - why trust does not (only) have to do with cooperation (as economists assume);

68 Issues - why we need a non-simplistic theory of trust 'transmission', beyond its pseudo-transitivity; - why failure and disappointment do not necessarily decrease trust; - why we have to build trust on various sources, not only on direct experience and reputation; - why we cannot reduce trust to safety and security, since on the one side what matters is first of all the 'perceived' safety, and on the other side building a trust environment and atmosphere and trustworthy agents is one basis for safety, and vice versa!

69 Conclusions 1) Trust is all the more relevant the more technological systems invade ordinary social domains and spaces; if technological developments are to produce successful results, human and social factors must be taken into account (and trust is foremost among them). 2) Selling security as trust does not work: it is a different concept; the problem is risk management, not the idea that one can operate in its complete absence. 3) Interactions/integrations with other disciplines are fundamental, as is taking into account the peculiarities that technological development brings with it (the technological specifics). 4) It is widely held today that the future of human-computer and human-technology interaction lies in a humanization of technology from the point of view of the interaction and of how it is experienced by the user. Many aim to introduce affective factors into this interaction: the mutual perception and display of moods and emotions; the machine monitors, recognizes and reacts appropriately to my emotions; I attribute personality, mental attitudes, perhaps emotions to the machine or to the simulated agent (character), which can also appropriately 'express' emotions through acts, gestures, faces, phrases, voices. There is, however, an inadequate experimental basis regarding the real acceptance and real effectiveness (or hindrance) of this kind of interaction, nor is it known which attitude and which expressivity are most appropriate and persuasive in a given context (e.g., commerce; information desks; teaching and e-learning) for a given user (culture, age, gender, country). This is what in the USA is called 'Affective Computing' (Picard), and in Japan the Kansei approach, i.e., one based on 'feeling'. Today trust is a very relevant topic; it is at the same time a complex notion and an extraordinary tool for reducing the complexity of the world. Short-cuts do not work (security approaches are limited and unable to cope with the main problems of an open world). Interactions/integration with different disciplines (Sociology, Economics, Social Psychology, Organization Theory, etc.) are very important, and it is fundamental to consider the specific features of the technological domain (Affective Computing, …). In order for the complexity of our future, due to technology, to become bearable, we have to expect that TRUST will play a very relevant and increasing role (Niklas Luhmann).

70 Thank you for your attention

71

72

73

74

75 Trust in Information Sources
A very relevant problem on the WWW. What is the relationship with trust in a partner for an action? A twofold relation. 1) Trust in information sources is just a sub-case of general trust: X relies on Y as for the goal g (to know the truth) and for the action a (Y correctly informs X). Relative to this action and goal (to be informed), both our basic components and facets of trust are crucial: Competence and Willingness (Intention, Goodwill). Another main approach to trust concerns the information sources of trust, in particular the reliability or trustworthiness of the information sources. Sources may not be reliable; incorporating reliability information into belief revision mechanisms is essential for agents in real-world multi-agent systems.

76 Trust in Information Sources
Y can be Sincere (saying what he believes to be true) and/or Competent (what he believes is true). In order for X to believe Y's information, X usually must believe (TRUST) that Y is both Sincere and Competent:
If
1) Y informs X that p, and
2) BelX(SincereY p) <=> BelX(BelY p), and
3) BelX(CompetentY p) <=> BelX(BelY p => p),
then TrustX Y p <=> BelX p.
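To make the rule above operational, here is a minimal sketch in Python (an illustration under our own assumptions, not the authors' formal system; all names are hypothetical): X accepts p only if X believes Y is both sincere and competent about p.

from dataclasses import dataclass

@dataclass
class SourceModel:
    believes_sincere: bool    # Bel_X(Sincere_Y p): Y believes what it says
    believes_competent: bool  # Bel_X(Competent_Y p): what Y believes is true

def accept_information(source: SourceModel, informed_that_p: bool) -> bool:
    """Return True if X ends up believing p (Trust_X Y p <=> Bel_X p)."""
    if not informed_that_p:
        return False  # premise 1: Y must actually inform X that p
    # premises 2 and 3: sincerity yields Bel_X(Bel_Y p), competence yields
    # Bel_X(Bel_Y p => p); together they yield Bel_X p
    return source.believes_sincere and source.believes_competent

# A sincere but incompetent source is not enough for X to accept p.
print(accept_information(SourceModel(True, False), informed_that_p=True))  # False
print(accept_information(SourceModel(True, True), informed_that_p=True))   # True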

77 Trust in Information Sources
2) Trust is based on / consists of Beliefs, and derives from their credibility (i.e. the confidence in those beliefs). The strength of believing, the confidence, is based on the sources (their number and credibility). Then: trust in Y for action a is based on a meta-trust: trust in the Sources of those Beliefs.
What is the relationship with trust in a partner, for an action?
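The meta-trust idea can be sketched quantitatively. The following toy Python example (our own illustration, with hypothetical names and an arbitrary noisy-OR aggregation, not the authors' model) derives the confidence in each supporting belief from the number and credibility of its sources, and the degree of trust in Y for a from those confidences.

def belief_confidence(source_credibilities: list[float]) -> float:
    """Confidence in a belief given its sources' credibilities in [0, 1]:
    here, the probability that at least one source is right (noisy-OR)."""
    p_all_wrong = 1.0
    for c in source_credibilities:
        p_all_wrong *= (1.0 - c)
    return 1.0 - p_all_wrong

def degree_of_trust(belief_sources: dict[str, list[float]]) -> float:
    """Trust in Y for a: product of the confidences of the supporting
    beliefs (e.g. the competence belief and the willingness belief)."""
    dot = 1.0
    for sources in belief_sources.values():
        dot *= belief_confidence(sources)
    return dot

# Competence belief backed by two sources, willingness belief by one.
print(degree_of_trust({"competence": [0.8, 0.6], "willingness": [0.7]}))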

78 Trusting by Weak Delegation
In Weak Delegation there is no agreement, no request or even influence: agent1 is just exploiting a fully autonomous action of agent2 in agent1's plan. Using the logics of Meyer, van Linder, et al. and introducing some new ad hoc predicates we can write ...
To delegate an action implies delegating some result of that action; to delegate a goal state always implies the delegation of at least one action that produces such a goal state as a result. Then the object of delegation/adoption is t = (a, g), which we will call a task.
[Slide figure: a blocks-world example with big and small blocks A, B, C, the goal state, and the actions of the two agents (Blue and Yellow).]

79 Trusting by Strong Delegation
Strong delegation is based on agent2's adopting agent1's task in response to agent1's request/order. Using the logics of Meyer, van Linder, et al. and introducing some new ad hoc predicates we can write ...
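The elided formalization relies on the logics cited above; as a purely illustrative data-structure sketch (hypothetical names, not the authors' predicates), the task t = (a, g) and the weak/strong distinction of the last two slides can be represented as follows.

from dataclasses import dataclass
from enum import Enum, auto

@dataclass(frozen=True)
class Task:
    action: str  # a: the delegated action
    goal: str    # g: the goal state that the action should produce

class DelegationMode(Enum):
    WEAK = auto()    # no agreement: agent1 merely exploits agent2's autonomous action
    STRONG = auto()  # agent2 adopts agent1's task in response to a request/order

@dataclass
class Delegation:
    delegator: str       # agent1
    delegee: str         # agent2
    task: Task           # t = (a, g)
    mode: DelegationMode

t = Task(action="move_block", goal="block_on_target")
weak = Delegation("agent1", "agent2", t, DelegationMode.WEAK)
strong = Delegation("agent1", "agent2", t, DelegationMode.STRONG)
print(weak.mode.name, strong.mode.name)  # WEAK STRONG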

80 Causal Interpretation of the Success
Is the success due to external or internal factors? Are they occasional or stable?
- Occasional cause (external or internal): maintain constant the trust in the external/internal event, but check for similar circumstances or attitudes.
- Stable external cause: increase the trust in the external component of the global event.
- Stable internal cause: increase the trust in the internal component of the global event.
The same problem must be considered in the case of a success: ……………
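A hedged sketch of this attribution policy (our own illustration with hypothetical parameters, not the authors' quantitative model): trust in the internal component (the trustee) or in the external component (the environment) is increased only when the attributed cause of the success is stable.

def update_trust_after_success(trust_internal: float,
                               trust_external: float,
                               cause_internal: bool,
                               cause_stable: bool,
                               step: float = 0.1) -> tuple[float, float]:
    """Return the updated (trust_internal, trust_external), capped at 1.0."""
    if not cause_stable:
        # occasional cause: keep both components constant
        # (but flag the case, to check for similar circumstances/attitudes)
        return trust_internal, trust_external
    if cause_internal:
        return min(1.0, trust_internal + step), trust_external
    return trust_internal, min(1.0, trust_external + step)

# Success attributed to a stable internal cause (the trustee's skill).
print(update_trust_after_success(0.6, 0.5, cause_internal=True, cause_stable=True))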

81 Trust in the Information Sources
[Slide diagram relating the information sources, the Beliefs they support, and the Trust that is grounded in those Beliefs.]

82 Perceived risk
Beliefs can be well justified, warranted and based on reasons; in this case they represent the rational (reason-based) part of the trust in y. Irrationality in the trust decision can derive from unjustified beliefs, i.e. from mere faith. The decision to trust/delegate necessarily implies the acceptance of some perceived risk (a trusting agent is a risk-acceptant agent). Risk is represented in the quantification of the Degree of Trust (DoT) and in the criteria for decision. However, we believe that this is not enough: a specific risk policy seems necessary to trust and bet, and we should explicitly capture this aspect. As we saw, the decision to trust is based on some positive trust, i.e. on some evaluation and expectation (beliefs) about the capability and willingness of the trustee and the probability of success. Trust is never certainty: some uncertainty (ignorance) and some probability of failure always remain, and the agent must accept this and run such a risk. But those beliefs can also be not really warranted, not based on evidence, quite irrational, faithful; we call this part of the trust in y "faith". To be more precise, non-rational blind trust is close to faith, and faith is more than trust without evidence: it is trust without the need for, and the search for, evidence.
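As a minimal illustration of how DoT and the utilities enter the decision (a sketch under our own simplifying assumptions, not the authors' formal decision model), the degree of trust can play the role of the subjective probability of success in an expected-utility comparison; the next slide then adds the missing risk policy.

def expected_utility_of_delegating(dot: float,
                                   utility_success: float,
                                   utility_failure: float) -> float:
    """DoT is used as the subjective probability of success."""
    return dot * utility_success + (1.0 - dot) * utility_failure

def decide_to_delegate(dot: float,
                       utility_success: float,
                       utility_failure: float,
                       utility_not_delegating: float) -> bool:
    return (expected_utility_of_delegating(dot, utility_success, utility_failure)
            > utility_not_delegating)

# Moderate trust, large gain on success, moderate loss on failure.
print(decide_to_delegate(dot=0.7, utility_success=10.0,
                         utility_failure=-4.0, utility_not_delegating=2.0))  # True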

83 A variable threshold for risk acceptance/avoidance
As we saw, we are able to establish which decision branch is the best on the basis of both the relative (success and failure) utilities of each branch and their (trust-based) probabilities. But in several situations and contexts it can be important to consider the absolute value of some parameter, independently of the values of the others. This suggests introducing some saturation-based mechanism, some threshold, to influence the decision.
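Extending the previous sketch with such a threshold (again an illustration with hypothetical parameters, not the authors' formalization): even when the expected utility favours delegating, the agent refuses if the absolute potential damage exceeds what it is willing to risk, or if the degree of trust falls below a minimum.

def decide_with_thresholds(dot: float,
                           utility_success: float,
                           utility_failure: float,
                           utility_not_delegating: float,
                           max_acceptable_damage: float,
                           min_acceptable_dot: float) -> bool:
    # saturation checks on absolute values, independent of the comparison
    if -utility_failure > max_acceptable_damage:
        return False
    if dot < min_acceptable_dot:
        return False
    expected = dot * utility_success + (1.0 - dot) * utility_failure
    return expected > utility_not_delegating

# Expected utility favours delegation (0.9*10 + 0.1*(-40) = 5 > 2), but the
# possible loss of 40 exceeds the acceptable damage of 20, so do not delegate.
print(decide_with_thresholds(dot=0.9, utility_success=10.0, utility_failure=-40.0,
                             utility_not_delegating=2.0,
                             max_acceptable_damage=20.0, min_acceptable_dot=0.5))  # False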

84 Trusting the Trustee’s Motivation and Morality
It is crucial to specify the trustier's beliefs about the adoptive (helping) attitude of the trustee and about its motives. Motives inducing adoption are of several different kinds: from friendship to altruism, from morality to fear of sanctions, from exchange to a common goal (co-operation). This is why, for example, it is important for trustier and trustee to have a common culture, shared values and the same acknowledged authorities: the belief in shared values or in an acknowledged authority is evidence and a basis for believing that y is sensitive to certain motives and that they are important and prevailing. In particular, beliefs about y's morality are relevant for trusting y. In this sense trust rests on a theory of the other's mind. Why does the trustier think that the trustee intends to do t and will persist in this? Motivation Belief: x believes that y has some motives to help x (to adopt its goal), and that these motives will probably prevail, in case of conflict, over other motives.

85 The Importance of Trust
Let me now consider the trust phenomenon.
Trust, recognized as the main component of social capital, has even been credited with the economic success of a country (Fukuyama, 1995). Trust is studied by economists, psychologists and sociologists, in particular by scholars of organization.
Trust competence: the identification of an articulated and complex apparatus of reasoning and perception (but also of communicative and interactional protocols) that humans have developed and that allows them to decide whether and how to rely on other humans, on artefacts, on natural objects and on events, natural or not, in order to carry out tasks or to be supported in some way. On the Internet, in the global network, new modes of interaction and new subjects appear, and there is the need to adapt to them: the identification of the other changes, the perception of the other changes, the spatio-temporal context in which the interaction happens changes, the authorities and guarantees change, the nature of the interaction traces (tracks) changes. So we have to rebuild this articulated and complex apparatus of reasoning and perception that permits humans to rely on others or on things.
Trust is one of the most important social concepts that helps human agents to cope with their social environment and is present in all human interaction (Gambetta, 1990). How do human societies deal with trust problems? Humans have learned to cooperate in many ways and environments, on different tasks, and for achieving different goals. Diverse cooperative constructs: > purely interactional, > technical-legal, > organizational, > socio-cognitive, etc., have been introduced or have spontaneously emerged. What happens in the new scenarios: new channels and infrastructures (i.e. the Internet), new artificial entities, new environments?

86 Competence Belief
A sufficient evaluation of y's abilities is necessary; x should believe:
- how useful y is for x's goal;
- that y can produce/provide the expected result;
- that y can play such a role in x's plan/action.

87 Epistemic control & Intentions
Epistemic Control (EpC): checking whether the expected result has been realized or not. In our analysis intentions imply expectations (as we define them) and expectations entail EpC: INT => EXP => EpC. EpC has at least three functions in practical reasoning (a sketch follows below):
- testing the conditions for the execution of plans,
- testing for intermediary results (sub-goals),
- testing for achievement.
>>> Different from Delegation & Reliance
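A minimal sketch of these three functions in a plan-execution loop (our own hypothetical illustration, not the authors' formal account of EpC):

from typing import Callable, List, Tuple

Check = Callable[[], bool]    # an epistemic test on the world
Action = Callable[[], None]   # a plan step

def execute_with_epistemic_control(precondition: Check,
                                   steps: List[Tuple[Action, Check]],
                                   achievement: Check) -> bool:
    """Run a plan, interleaving its steps with epistemic checks."""
    if not precondition():                 # EpC 1: conditions for execution
        return False
    for action, subgoal_reached in steps:
        action()
        if not subgoal_reached():          # EpC 2: intermediary results (sub-goals)
            return False
    return achievement()                   # EpC 3: final achievement

# Trivial example: one no-op step whose checks all succeed.
ok = execute_with_epistemic_control(
    precondition=lambda: True,
    steps=[(lambda: None, lambda: True)],
    achievement=lambda: True,
)
print(ok)  # True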

