This is a thought experiment for a system that would give every individual in a society a trust score that governments, institutions, and individuals could use for many purposes. One example that comes to mind is selecting juries from the electoral roll, but the possibilities are almost endless; it could even include probabilistic, believability-weighted scoring for witnesses in court. The idea, however, is not limited to the criminal justice system.
China is currently trying something similar with its social credit system, which punishes citizens with throttled internet speeds and even flight bans. 1Business Insider on China’s Social Credit System
However, one does not have to have much imagination to understand how this could go terribly wrong.
Firstly, the system is centralised and thus open to abuse by the government itself. Even if it’s not the current government, we have no idea who will be in charge twenty years from now, and history is littered with well-meaning policies that subsequent governments put to nefarious ends.
If the government suddenly decides that a particular behaviour is “wrong”, your score could plunge overnight in such a system. The trigger could be as simple as adhering to a particular religion, or scores could be capped based on race. And when the rules change, scores could be recalculated retroactively against all the behaviour tracked since the system was implemented.
So my pitch is much simpler and does not require constant state surveillance. I will not review the ethics of such a system or whether it is desirable, only its first principles.
I take inspiration from the original insight behind Google: that the number of links to a specific website could be used to rate Web pages “objectively and mechanically, effectively measuring the human interest and attention devoted to them.” 2The PageRank Citation Ranking: Bringing Order to the Web
That system is called PageRank, which itself took inspiration from the practice of citation in academic papers. Could we apply the concept of PageRank to human beings?
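For reference, the core of PageRank fits in a few lines of code. This is a hedged sketch of the classic power-iteration formulation; the tiny link graph and the 0.85 damping factor below are purely illustrative.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Compute PageRank by power iteration.

    links: dict mapping each node to the list of nodes it links to.
    Returns a dict of node -> rank, where all ranks sum to 1.
    """
    nodes = list(links)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Every node gets a small "teleport" share regardless of links.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outgoing in links.items():
            if not outgoing:
                # Dangling node: spread its rank evenly over all nodes.
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
            else:
                # A node passes its rank, split evenly, to the pages it links to.
                for target in outgoing:
                    new_rank[target] += damping * rank[node] / len(outgoing)
        rank = new_rank
    return rank

# Illustrative graph: "c" is linked to by three nodes, "d" by none.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(web)
```

Running this, the heavily-linked node `c` ends up with the highest rank and the unlinked node `d` with the lowest, which is exactly the "human interest and attention" signal the quote above describes.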
We could use the links and knowledge we all have of each other to build trust and responsibility ratings, developed bottom-up instead of centralised by the state. This could be of tremendous help wherever acquiring accurate knowledge is expensive, such as when hiring candidates for a job.
That expense means humans fall back on heuristics, which often results in biases against marginalised groups: a small, badly behaved contingent of a subgroup can affect the livelihoods of the entire subgroup. A tiny minority of individuals in predominantly black neighbourhoods in American cities commits crime, but that is enough to significantly raise the cost of doing business there for small grocery stores, resulting in higher prices for everyone who lives there. 3Thomas Sowell: The Economics of Ghetto Markets
Another example is identical CVs whose only difference is the name: those with names more often associated with African-Americans tend to get lower response rates. 4Job Applicants With ‘Black Names’ Still Less Likely to Get Interviews
This would also automatically reflect society’s evolving values instead of having a top-down imposed set of values.
This would also allow us to trust strangers with a high degree of accuracy, which is invaluable in situations such as choosing babysitters or school teachers.
So what would be the first principles of such a system?
- A method to identify an individual uniquely.
- A method to identify if someone is alive or dead.
- A method for determining and validating relationships between individuals.
- A method for individuals to provide feedback and rate each other.
- A method to build an individual’s trust index based on how others rate that individual and the validity of their ratings compared to other people’s ratings.
- The ability to view rankings based on personal priorities: do you need to know the most reliable person, the most trustworthy, or the least likely to be violent?
- A method to stop large-scale group attacks on individual ratings and various other forms of abuse.
- The system should be visible but anonymous, on the same logic as the secret ballot in elections: there should be no way to verify that a specific individual rated someone in a particular way.
- Each individual should have a “profile” showing all ratings given and received. Whether a profile can be tied to an individual without compromising anonymity is an open question, so perhaps this is not possible.
- Immutable and logged. All ratings, even if later changed, should remain visible.
- Rank decay. People change, so every rating should have a half-life after which it becomes less relevant and less heavily weighted.
I believe such a system would leverage the information we already have and use to navigate the world among people we know, and extend it to people we don’t. We already collaborate with millions of strangers on a daily basis, almost without noticing.
“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.” (Adam Smith)
The critical question, of course, is whether such a system should be opt-in, opt-out, or mandatory.
Next year I plan to study this further, and perhaps provide a mathematical model and explore its viability.