Removing Trust

About two years ago I made a conscious effort to stop using the term “trust,” and I encouraged those I engaged with in work and social interactions to avoid it as well. The problem is not the concept of trust, but the use of the term, or more accurately its overuse. Trust gets used quite often because it is a word with high value in our society. There are roughly seven definitions or contextual uses of the term, which is problematic when trying to design, develop, or evaluate ways forward from understanding gaps and potential problems.

Initially, I started a deep dive into reading everything I could on trust to get a better grasp of the term and its underlying foundations. I thought this might bring me back to using the term with more clarity. While this reading increased my understanding of how trust is used, it also confirmed the broad, fuzzy use of the term, even within attempts to clarify it.

Why the Use of the Term Trust is Problematic

When I was working with people to help improve their social software deployments or use of social sites, as well as engagements in the B2B and B2C arenas, the term trust was used a lot. I would ask people to define “trust” as they were using it, and they would describe what they meant, but within a sentence or two they had moved on to a different contextual definition. Sometimes I would point out the shift in usage and ask them to redefine what they meant. When I asked one group I was talking with to use other words as proxies for the term trust, things started moving forward with much more clarity and understanding. Also gone were the disagreements (often heated) between people that were based on different uses of the term.

Once I started regularly asking people not to use trust but proxies for the term, I started keeping rough track of the other words and concepts underlying it. The rough list includes: respected, comfort, dependable, valued, honest, reliable, treasured, loved, believable, consistent, etc. Many found the terms they used to replace trust were more on target for what they actually meant than the word trust itself. Some sets of terms nicely overlap (dependable, reliable, and consistent; valued and treasured), but one term that came up a lot and generated a lot of agreement in group discussions is comfort.

Social Comfort Emerges

Within a few months of stopping use of the term trust, comfort was the concept that came up most often and seemed to be a good descriptor for social software environments. It was a social comfort with three underlying elements that helped clarify things. Social comfort for interacting in social software environments was required for: 1) People; 2) Tools; and 3) Content (subject matter). I will explain these briefly, but I really need to come back to each one in more depth in later posts.

(A presentation to eXention last year turned what had publicly been one slide on the subject into a full 60-minute-plus presentation.)

Social Comfort with People

Social comfort with people is essential for interacting with others. Some of the key questions people raise with regard to social comfort with people are: knowing who someone is, how they will interact with you, what they will do with information shared, the reliability of the information they share, whether they are safe, whether reasonable interaction with them is possible, and why one would interact with them at all. One of the biggest issues is, “Who is this person and why would I connect or interact with them?” But most social software tools, particularly those for internal organizational use, do not provide in their profiles the contextual information or depth needed to answer that question. Even in organizations where most people have relatively “complete” profiles, the information in them rarely helps answer the “Who is this person and why should I listen or interact with them?” question.

Social Comfort with Tools

Social comfort with tools is often hindered not only by ease of use, but by ease of understanding what social features and functionalities do, as well as with whom the resulting information is shared. There is an incredible amount of ambiguity in the contextual meaning (direct or conveyed) of many interface elements; ratings, stars, flags, and the like fall deeply into this area. This leads to the social reticence of a click, where people do not star, flag, rate, or annotate because the meanings of these actions are unclear (to the system and to other people), as are who sees these actions and what the actions mean to those viewers. Nearly every organization has a handful, if not many, examples of these interactions being misunderstood in actual use. The problems are often compounded as sub-groups in organizations establish their own contextual understandings of these elements, which may carry the opposite meaning elsewhere (a star may mark items a person is storing to come back to later in one group, while in another it means a person likes the starred item and can be construed as light approval). Even in services where this is well defined and conveyed in the interface, this conflict in understandings occurs. (This is not to ward people off use, but to point out the lack of consistency in understanding that occurs, although the 5-star rating and its variations are nearly universally problematic and need a long explanation as to why.)

Social Comfort with Content

Social comfort with content or subject matter can hold people back from using social software. People may have constructive input, but their perceived lack of expertise may be (and often is) what inhibits them from sharing it. A means of gathering this constructive feedback is needed, along with the ability for others to ask questions and interact, which usually rules out anonymous contributions. (Anonymous contributions rarely mitigate this problem anyway, as anonymity does not really provide comfort, and inside most organizations it is quite easy to resolve who is behind an anonymous contribution, so it is false anonymity.) People often have contributions they believe are helpful, but that may not be fully fleshed out, or that need to be vetted for internal political reasons or put in context (terminology and constructs that are most easily understood and usable) through the vetting of others with whom there is social comfort.

Improving Outcomes with a Focal Shift

One outcome of this shift from the term trust to other terms, including social comfort, is that areas needing attention are more easily seen, discussed, and considered, and potential solutions identified. The end results are often improved community management, improved interfaces and interactions in the services, better tools through iteration, and, through all of these, improved adoption.



2 responses to “Removing Trust”

  1. larry.irons@customerclues.com

    Thomas,

    These are great distinctions. I’ve been thinking this was your perspective for some time. In a 2009 post I noted the following:

    “Shared experience, not just shared information, is fundamental to the social networks underlying collaboration and innovation. Many, if not most, employees don’t only need to get to know one another through reputation systems, like who people tag as possessing expertise. As Thomas Vander Wal continues to point out, comfort with one another is needed to develop a shared experience that encourages the open sharing of information.”

    http://skilfulminds.com/2009/07/07/social-business-design-and-multichannel-team-collaboration/

    I’m pleased that my reading of your work seems in synch with your own.

    Regards,

    Larry

  2. gordonr@thoughtfarmer.com

    Thomas,

    I’m going to challenge you on tossing out the use of the word trust. And I hope that our familiarity and chats about social comfort will help foster some… uh… trust… that I mean well in doing so.

    I think one of the most robust definitions of trust that I’ve come across in my reading on the subject (and agreed, there are many definitions and riffs on the term) is Charles Green and David Maister’s use of the term in the book The Trusted Advisor. In an exploration of the term in a professional services / consulting context (something that you are also familiar with), they present the trust equation. They claim the fundamental elements of trustworthiness are credibility, reliability, intimacy, and self-interest (or a distinct lack thereof).
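    (For anyone who doesn’t have the book at hand, the trust equation as I recall Green and Maister presenting it, with self-interest framed as “self-orientation” in the denominator, is roughly:

    T = (C + R + I) / S

    where T is trustworthiness, C is credibility, R is reliability, I is intimacy, and S is self-orientation. The lower your self-orientation, the higher your trustworthiness.)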

    In going through your social comfort description, I think the different aspects of trustworthiness come into play in different areas, some more strongly than others.

    The triad of people, tools, and content is great; I’d also challenge you to use the term “environment” or “system” instead of tools, as I think we need to fundamentally shift the language around the use of “tools,” which implies something far simpler than what’s happening when we’re using social software.

    So what does trust in people look like? I think you’ve focused on intimacy in your slides on familiarity. Is this a safe place for me to share my ideas? Do I *know* the other people at the other end of the screen? Would I have something to chat with them about if I did meet them in person at the proverbial water cooler?

    Content: credibility is the keyword here, the “truthiness” of the stuff I’m reading. How factual is it? How authoritative is it? How much of this can I rely upon to make a really important decision? After all, decision making is another key aspect of this social software puzzle.

    The tool or system or environment: reliability is probably one of the most important elements of trust you can have in a technology. If I’m making all of this effort to contribute to this system, it had better behave and be reliable. This means the obvious connotations of avoiding downtime and corrupt files, but also reliability in a “predictable” sense: technologies that don’t behave according to users’ mental models are seen as unpredictable and unreliable, and they erode trust / social comfort.

    The statement “I don’t trust the system” probably relates more to its reliability than to its credibility (at least for a system that captures messy narrative content rather than raw data; we often don’t trust systems of record with bad data).

    Finally, self-interest cuts across people, content, and the environment. If you perceive that the system is being used by people who are “in it for themselves,” that the content is self-serving, and you wonder just how your usage is being tracked, then it’s done.

    Reminds me of a story I was told at KM World (did you tell it to me?) about a microblogging system that took off in one organization and had everyone posting away madly, until someone realized that the boss (whom they obviously didn’t trust) was on there watching, and then the whole thing ended very quickly. As quickly as it had started.

    People suddenly realized that their posting could be used against them in the interest of someone in the organization who didn’t believe in the fundamentals of allowing staff to communicate and collaborate in that manner. And it was over. Full stop. No-one dared post anything else.

    So I know you’ve decided to eliminate use of the word trust in your work, but I still think there’s lots of utility there, especially in the dimensions outlined in the Green/Maister context. I’ve found them super useful, and they really succinctly cover a lot of the ground you’ve addressed in your concept of social comfort.
