About Facebook's conception of “Trustworthiness”

Since Facebook was created in 2004, one question has obsessed me, however anecdotal it may seem. Enter a first or last name and you get a list of ten, twenty, or a hundred profiles. Why does one profile appear before another? What decides that this one is more relevant than the next?

The silence of the lambs and the relevance of profiles.

Of course we know, or at any rate we can guess by observation, some of the probable or possible reasons for such an ordering: the first profiles are often those of our friends, of people who live in the same city or region, or of people with whom we have friends in common. We can also imagine that Facebook first displays the profiles with which we share a certain number of "interests". But these result lists also contain a whole batch of profiles that do not seem to match any of the previous reasons and yet appear high in this "ranking", in the ordered list.
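
Purely to fix ideas, here is a minimal sketch of what such an ordering could look like if it were a weighted combination of the signals just listed. Everything in it (the signal names, the weights, even the idea that the ranking is a simple linear score) is an assumption made for illustration; Facebook's actual criteria are not public.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    is_friend: bool        # all signal names and weights below are
    same_city: bool        # invented for illustration; they are not
    mutual_friends: int    # Facebook's actual (undisclosed) criteria
    shared_interests: int

def relevance(p: Profile) -> float:
    # A toy linear combination of social-proximity signals.
    return (10.0 * p.is_friend
            + 2.0 * p.same_city
            + 1.5 * p.mutual_friends
            + 0.5 * p.shared_interests)

profiles = [
    Profile("A. Martin", is_friend=False, same_city=True,
            mutual_friends=3, shared_interests=2),
    Profile("B. Martin", is_friend=True, same_city=False,
            mutual_friends=0, shared_interests=1),
    Profile("C. Martin", is_friend=False, same_city=False,
            mutual_friends=0, shared_interests=0),
]

# Profiles exposing no visible signal still receive a rank, just at
# the bottom of the list.
for p in sorted(profiles, key=relevance, reverse=True):
    print(p.name, relevance(p))
```

A model like this would explain the friends-first ordering; what it cannot explain, and what keeps the real, opaque ranking a puzzle, is why some profiles with no visible signal appear high in the list.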

Like the "relevance" of the web pages that appear in the ranking of search engine results lists, the question that obsesses me is quite simple: it is that of the relevance of human profiles. And you can't dodge it. On a search engine, on Google in particular, "relevance" is essentially a measure of popularity. But transposed in this pan-catalogue of profiles that is Facebook, the list returned following a request is not the "popularity" of such or such profile but a strange "relevance". The question of the relevance of human profiles. This question obsesses me as much as the simple fact of being in a position to ask it could, in the more or less short term, make us fall into an obsessive regime of copying the least of our behaviours which, combined with authoritarian forms of populism, could sink many of our democracies more quickly than we imagine today.

I had begun to formalize this question in a short text published on this blog in November 2007, entitled "Welcome to the World Life Web", and then, two years later, in a scientific article entitled "Man is a document like the others", before more recently sounding the alarm about the emergence of what seems to me to amount to a documentary neo-fascism, operating through a fetishism of the record that is unfortunately ever more standardized and accepted. Here is what I wrote in 2007:

"More and more social networking sites are "opening" the huge catalogue of human individualities that make them up to indexing by search engines. This necessarily raises the question of the relevance of human profiles. A question which is still in its infancy but the extent of the problems raised may rightly cause shudder."

It is August 2018, and the Washington Post tells us that Facebook has just set up a system for rating the "trustworthiness" of its users.

Why a trustworthiness index is a tale.

In addition to already being "ranked" and indexed according to our activities, our geolocations and our interests, we will now also be indexed by a credibility score or, more precisely, a "trustworthiness" score, calibrated between 0 and 1, to which only the platform will have access.

Facebook actually doesn't care whether we are "credible", that is, whether what we write or relay refers to some form of objectifiable truth; what Facebook cares about and now wants to measure is our "trustworthiness", and the nuance matters. "Trustworthiness" is a metric connected to the engine of the platform, the notion of "engagement". Thus a neo-Nazi user who relays information announcing that migrants eat their children and that he is going to throw them off the edge of the earth, which is flat, is not credible for a single second; he is, on the other hand, totally reliable in the constancy, distressing as it is, of his convictions. It is this trustworthiness that Facebook seeks to quantify in order, at least so it claims, to better identify fake accounts and those that most often relay false information.

Basically, we might be pleased that Facebook prefers to evaluate our "trustworthiness" rather than our "credibility"; the opposite would indicate that the platform adopts a moral position on what is true or credible and what is not, and at the scale of 2.5 billion users that would be exceedingly dangerous. At the same time, the example of the neo-Nazi user I chose clearly shows how little this "trustworthiness" will solve the problem of the logics of disinformation or "fake news", which, as I have often tried to explain, have nothing to do with information but are linked to toxic technical architectures that promote and maintain certain modes of circulation and dissemination because these guarantee an economic model.

Yet for decades we have learned to analyse information through its content and its context in order to understand its effects on the public: this is the field of "media studies". Thus, to "understand" Berlusconi's control of Italian television in the 1980s, one had to watch the content of the programmes and of the news broadcasts, examine the editorial line put forward, and cross the whole with the economic interests of the King of Bunga Bunga across his other sectors of activity. Likewise in France with Martin Bouygues' TF1, or Dassault's Le Figaro, and so on. But with this approach it is impossible today to understand anything about digital platforms and what is at stake there. It would make as much sense as relying on the geographical placement of television transmitters to understand how Arte's cultural project differs from TF1's. If we want to understand anything about the phenomena of virality and disinformation at the scale of platforms and large digital ecosystems, we need to set content aside and focus on technical architecture. In short, to reinvent a structuralist approach to these platforms.

But can we just blame Facebook?

Facebook-shaming is certainly in fashion, and I don't deprive myself of it here. The platform is currently deploying an impressive series of countermeasures aimed at "cleaning up" its ecosystem, on the business side (removal of racial and religious categories for advertising targeting), on the side of social interactions (priority given to friends' news), and at the scale of third-party applications. But it runs up against the fact that its regime of truth (engagement) is, by nature as much as by function, a machine for reinforcing each person's beliefs (because of its technical architecture, precisely).

For Facebook, as for others, the only way to return to healthy interactions at a collective scale would be to abandon its economic model, which will otherwise inexorably continue to generate various speculative forms of hate speech. Which it will never do. Since this economic model defines its relationship to truth through the sole vector of "engagement" (what generates the most engagement, and therefore interactions, appears "true"), and since Facebook cannot and does not want to abolish this model, the platform "logically" attacks the credibility of its own users instead, through this notion of "trustworthiness". That is, once again and in keeping with the diluted form of libertarianism that feeds the platform's ideology, it considers that the best response lies not at the collective level but at the individual one. Zuckerberg's letter to the Facebook nation of March 2017 had already demonstrated this: to solve the problems raised by depictions of violence or nudity, he referred each user to his or her own criteria of tolerance or acceptability.

Goodhart's law.

In macroeconomics, Goodhart's law states that "when a measure becomes a target, it ceases to be a good measure." Is it necessary to elaborate? At digital scale, the question of "quantification" and of the metrics that accompany it falls squarely under Goodhart's law. When Facebook sets itself the goal of improving the "trustworthiness" of the information that circulates by measuring each user's trustworthiness score, that score ceases to be a good measure.
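
A toy simulation, with entirely invented numbers, makes the point concrete: as long as reporting is spontaneous, the share of a user's reports confirmed by fact-checkers tracks the care that user actually takes; once users know the score is the target, they can confine themselves to flagging only the most obvious fakes, and the metric saturates without discriminating anything.

```python
import random

random.seed(42)

def organic_hit_rate(care: float, n: int = 200) -> float:
    # Spontaneous reporting: reports are confirmed roughly in
    # proportion to the care the user takes (invented model).
    return sum(random.random() < care for _ in range(n)) / n

def gamed_hit_rate(n: int = 200) -> float:
    # The score has become the target: every user, careful or not,
    # flags only sure-fire fakes and scores near-perfectly.
    return sum(random.random() < 0.98 for _ in range(n)) / n

for care in (0.3, 0.6, 0.9):
    print(f"care={care}: organic={organic_hit_rate(care):.2f}, "
          f"gamed={gamed_hit_rate():.2f}")

# Organic scores spread out with the care taken; gamed scores all
# cluster near 0.98. Once a target, the measure measures nothing.
```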

In every "metric" it's "me" you "Trick".

Many of the articles published on Facebook's rating system naturally draw the fictional analogy with the "Nosedive" episode of the Black Mirror series, and the real analogy with China's Social Credit system, while noting that the latter is much scarier than Facebook's "trustworthiness score". I don't believe that. I think the two raise equivalent problems, and I will try to explain why.

If Facebook's approach seems to me at least as alarming as the implementation of Social Credit in China, it is first of all, of course, because it can potentially apply to 2.5 billion individuals.

Secondly, because most of these 2.5 billion people live in what are still called democracies, and a democracy prepared to tolerate this kind of practice is no longer far from the dictatorship or authoritarian government that would establish it. "If you want peace, prepare for war"; and if you want the advent of an authoritarian government, accustom people to being constantly scrutinized and quantified.

It is also because nobody can say at what actual scale this scoring is implemented, nor, of course, what its complete and exact criteria are. Criteria which, if they were made public, would immediately be gamed so that everyone could better comply with them or better evade them. As one Anglo-Saxon lawyer points out:

"Not knowing how[Facebook is] judging us is what makes us uncomfortable. But the irony is that they can't tell us how they are judging us – because if they do, the algorithms that they built will be gamed."

The classic dilemma of a non-public algorithm accountable only to the economic model it feeds and which legitimizes it in return. Classic… but increasingly problematic and toxic for all our interactions, connected or not.

And it is, finally, because behind this "trustworthiness" score there is no longer even an attempt to hide the mad and worrying idea of an automated, or automatable, rationalization of an individual relationship to information, absolved of any collective relationship to any form of objectifiable truth(s). Technological solutionism is intrinsically linked to a form of individualistic moral relativism.

Even if, at first, we can reassure ourselves by noting, as Numérama reminds us, that the point is merely for the platform to "determine the degree of confidence that the site can reasonably have in the actions of each registrant", and even if this "scoring" is only one "behavioural index among thousands of others", it is also and above all a completely perverse and biased way of shaping individual and collective social representations while claiming to do no such thing, representations most often based only on the impulsive dimension of our relationship to information. As Numérama again indicates, drawing on the Washington Post's reporting:

"According to the community site, the need for this rating of Internet users became obvious when it was found that members of the service, as they had access to new options, were not making expected use of them. Some have thus marked news as false, which was not true in fact, but was a mark of disagreement with the articles."

Le Monde also reports on this aspect, quoting Tessa Lyons, the project manager in charge of the fight against misinformation on the social network, who was also interviewed by the Washington Post:

"One of the signals we use is the way people interact with articles. For example, if someone has reported an article as false, and an independent verifier (sic) agrees, we might better consider that person's future reports, as opposed to someone who spends his or her time denouncing the truthfulness of indiscriminate articles, when some are true."

This is the heart of the matter as much as of the danger. Facebook is a diffracted informational and social reality. Each user can freely and individually decide that this or that piece of information is "true" or "false" and mark it as such. Total primacy of the individual over the collective. First Amendment.

Economic and political libertarianism + technological solutionism + moral relativism = the holy trinity of the platforms' Bible.

Each individual is a priori the only one who knows why he makes this choice. The point of diffraction is not that Facebook can evaluate or judge this choice as sincere or insincere (it could already do so occasionally if it wished). The point of diffraction that poses a problem is that it is the technical architecture that now operationalizes and rationalizes this relationship to insincerity through a trustworthiness score. A Taylorism of insincerity in the service of a commerce of opinion.

The moral?

I would like to share two concluding thoughts here.

"The difference between individuals choosing the content they read and companies choosing that content instead of individuals affects all forms of media," warned Anil Dash, pointing to the paradigm shift from the perspective of the traditional information model (transmitter – signal – receiver).

This change also affects all forms of democracy and sociability.

And recently, writing about the Cambridge Analytica affair, Cory Doctorow noted:

"Facebook doesn't have a mind-control problem, it has a corruption problem. Cambridge Analytica didn't convince decent people to become racists; they convinced racists to become voters."

We are, and this is fortunate, still most often irreducible to the mere sum of our profiles and of the interactions and behaviours that constitute them on this or that social network. But it would be dangerous to deny the current hold of these social platforms in the industrialization of a consent factory.

A digital platform made of 0s and 1s decides to rate, from 0 to 1, the trustworthiness of its users in their relationship to information. A credibility score, a confidence rate, a trustworthiness index, a metric of sincerity, a calculus of insincerity.

"Sincerity is a calculation like any other" wrote Jean Anouilh in the play "Becket or the honor of God". He didn't know how right he was once again.
