Trust in AI? A Reshaping of Our Social Contract

ai, social contract, technology, ethics

“A dive into how AI is reshaping the frontier of our social contract”

Recent events in my life, as well as the rapid emergence of AI, have made me start to reflect on trust. It plays a vital role throughout my personal and professional life.

“Am I able to trust this contractor with my money and — more importantly — my valuable home?”

“Am I able to trust the developers within my team to tell me when they’re stuck, so we can find a solution together?”

“Am I able to trust the bank with my savings during periods of uncertainty?”

“Is the customer able to trust me to design, develop and implement a solution that will solve their problems?”

“Do I trust whatever results I get from a search engine?”

“Do I trust whatever result I get from an LLM?”

With this post I put my thoughts into words so that I can share, learn and reflect out in public instead of within the confines of my head.

Trust, a cost-benefit mechanism

Trust is central to the interactions we have throughout our everyday lives. It helps us determine whether or not to rely on others to achieve something, to solve a problem. We tend to do things ourselves whenever we trust ourselves more than the other party, given certain conditions: costs, risks, consequences, capabilities, constraints, and so on. This is why DIY is a thing, especially for people who have built up enough trust in themselves through practice for it to cross a certain threshold.

“I trust myself enough to paint this wall. It’s not worth it to pay a painter for just one wall.”

Compared to:

“I don’t trust myself enough to paint the exterior of my house. I trust a painter more and they’ll do a better and safer job.”

Sometimes we have no choice but to trust another party because they provide something we can hardly get ourselves. I could choose to source my own drinking water if I didn't trust our regional provider, but I would have to perform miracles to get there, practicality aside.

Trust, a life without it

If trust is simply part of our lives, will we ever not require it? As with many such questions, the answer starts with: "it depends".

Take financial networks, for example. A well-known solution exists there in the form of distributed ledger technology. Bitcoin in particular eliminates trust within its protocol by facilitating transactions between two parties under a strict set of rules that the rest of the network is incentivised to enforce.
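Those strict rules are enforced in part by proof-of-work: a block only counts if its hash clears a difficulty target, so nobody has to be trusted to validate it. Here is a toy sketch of that idea (the block format and difficulty are invented for illustration; real Bitcoin mining operates on serialized block headers against a 256-bit target):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def valid(block_data: str, nonce: int, difficulty: int) -> bool:
    """Anyone can verify the work with a single hash; no trust required."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("alice pays bob 1 coin", 4)
print(valid("alice pays bob 1 coin", nonce, 4))  # True
```

The asymmetry is the point: finding the nonce is expensive, but checking it costs one hash, so every participant can verify that the rules were followed without trusting whoever did the work.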

The network has no owner, the code is open source, and maintainers work on it largely out of their own conviction. The result is a solution that can, in principle, work without trust. Another concept that eliminates trust is the zero-knowledge proof.

This is a protocol that lets one party prove the validity of a statement without revealing anything beyond the fact that it is true. In theory, this removes the need to reluctantly trust third parties with your personal details. To prove you are of eligible age, you would no longer need to hand over your exact date of birth.
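A concrete toy instance of this idea is Schnorr's identification protocol: the prover convinces the verifier that they know a secret x satisfying y = g^x mod p, without ever sending x. This is not the age-verification scheme described above, just the simplest textbook example; the group below is deliberately tiny so the numbers are readable, whereas real deployments use groups of roughly 256 bits:

```python
import random

# Toy public parameters: p is prime, g = 5 generates the
# multiplicative group mod 23, which has order q = p - 1 = 22.
P = 23
G = 5
Q = P - 1

def keygen(secret: int) -> int:
    """Public key y = g^x mod p; x itself stays private."""
    return pow(G, secret, P)

def prover_commit(r: int) -> int:
    """Step 1: prover sends a commitment t = g^r mod p."""
    return pow(G, r, P)

def prover_respond(r: int, challenge: int, secret: int) -> int:
    """Step 3: response s = r + c*x mod q; r masks the secret."""
    return (r + challenge * secret) % Q

def verify(y: int, t: int, challenge: int, s: int) -> bool:
    """Verifier checks g^s == t * y^c (mod p)."""
    return pow(G, s, P) == (t * pow(y, challenge, P)) % P

# Honest run: the verifier learns that the prover knows x, not x itself.
x = 7                        # the secret
y = keygen(x)
r = random.randrange(1, Q)
t = prover_commit(r)
c = random.randrange(1, Q)   # step 2: the verifier's random challenge
s = prover_respond(r, c, x)
print(verify(y, t, c, s))    # True
```

The verification works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c, yet the transcript (t, c, s) can be simulated without knowing x, which is what makes it zero-knowledge.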

These tools can help us in a way. But how far are we willing to go in relying on technology to eliminate trust?

Trust, the freedom of AI

This brings me to AI.

As we grant AI more agency, we will reach a point where we have to delegate ever larger amounts of trust to these actors within our society.

  • What would this look like?
  • How will it impact our own sense of agency?
  • What will be the impact on our social contract when we dedicate such a vital and human part of who we are to a system?

A thought experiment where we fast forward into the future…

You have had a hard time lately when you come across an advertisement for summer trips during your break. Dreaming about getting away, you ask your AI assistant to schedule a trip for the upcoming summer.

Your AI assistant has access to virtually all your information: your agenda, your mental wellbeing (through an app like Apple's Health), your banking details. It is more than capable of booking this trip for you.

While booking the trip according to your preferences, the assistant encounters a problem: insufficient funds in your spending account…

You have set a policy never to go into debt. But the assistant infers from its various sources that you're having a hard time and are at risk of burning out if you don't take a break soon.

It decides to transfer the required funds from your savings account and continues to book the trip for you.
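The turn the story takes can be sketched as a tiny guardrail check. Everything here (the names, the numbers, the `allow_savings_transfer` flag) is hypothetical, invented only to make explicit the line being crossed:

```python
from dataclasses import dataclass

@dataclass
class Accounts:
    spending: float
    savings: float

def book_trip(accounts: Accounts, price: float, allow_savings_transfer: bool) -> str:
    """Attempt a booking under the user's 'never go into debt' policy."""
    if accounts.spending >= price:
        accounts.spending -= price
        return "booked"
    if allow_savings_transfer and accounts.spending + accounts.savings >= price:
        # The worrying step: the assistant dips into savings based on its
        # own inference about your wellbeing, not an explicit instruction.
        shortfall = price - accounts.spending
        accounts.savings -= shortfall
        accounts.spending = 0.0
        return "booked (savings transfer)"
    return "refused: insufficient funds"

acc = Accounts(spending=500.0, savings=5000.0)
print(book_trip(acc, 1200.0, allow_savings_transfer=True))  # booked (savings transfer)
```

With the flag off, the policy holds and the booking is refused; with it on, the assistant's own inference quietly overrides the policy, which is exactly the line the story crosses.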

In Italy that summer, you sip your drink and put your feet up during your well-deserved break.

This may seem like a trivial case of booking a vacation, one in which our protagonist even got a happy ending.

But haven’t we crossed some worrying lines along the way?

Don't get me wrong. I see myself as a tech optimist and I see a bright future ahead of us with all the ongoing developments, especially in the field of AI, along with the added capabilities that quantum computing may provide.

But we do have to be aware of our trust shifting, and whether it might be misplaced at certain times.

  • Will we always fully check the results of our prompts?
  • Will we always keep fact-checking whether an AI is hallucinating?

As with many technological advancements, whatever lies at the core tends to become a commodity that few remain aware of. The majority no longer want to know where their drinking water comes from, how it reaches their house, or how it is cleaned after they've used it. I am curious to see how our trust will shift in the coming years.

How far would you go in trusting an AI?
