We talk a lot about AI these days, or A1 or Al as some call it (yes, Betty, when you call me, you can call me Al), and we have now moved on to other terms, including Generative AI, Agentic AI and AGI, which stands for Artificial General Intelligence.
The thing is that these terms are macro terms for how the latest technologies are developing. A bit like talking about Big Data, Cloud, Mobile and such like, these are massive buckets of developments. It is the specific micro use cases of such developments that interest me, particularly as they apply to money and finance.
The main themes seem to be around internal advantage – fraud, credit risk, trading and suchlike – but some have moved on to discussions of external advantage, from chatbots to avatars designed to maximise customer journeys. That external focus can work to an extent, but it is often overlooked or underserved. For example, a staple in my presentations is this CB Insights chart of AI Fintech firms:
And what strikes me is that the chart does not mention customer or client as a term. Noted.
But the other thing that struck me the other day is that the customer or client is becoming the AI itself. When discussing agentic AI, I always think of bots talking with bots on our behalf. We can delegate our financial needs to the network and let our bot talk to your bot to get the best deal. Why should I search for insurance or flights or deals? Let my bot do it.
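To make that delegation concrete, here is a minimal sketch in Python of how it might look. The ProviderBot class, its get_quote method and all the numbers are invented purely for illustration; no real insurer or aggregator API is implied.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical provider bots: each one quotes a premium for the same request.
@dataclass
class ProviderBot:
    name: str
    base_premium: float

    def get_quote(self, cover_amount: float) -> float:
        # Illustrative pricing only: the premium scales with the cover requested.
        return self.base_premium + 0.001 * cover_amount

def my_bot_finds_best_deal(providers: List[ProviderBot], cover_amount: float) -> ProviderBot:
    # My bot talks to every provider bot and simply takes the cheapest quote.
    return min(providers, key=lambda p: p.get_quote(cover_amount))

providers = [
    ProviderBot("InsureCo", 120.0),
    ProviderBot("SafeSure", 95.0),
    ProviderBot("CoverNow", 110.0),
]
best = my_bot_finds_best_deal(providers, cover_amount=50_000)
print(f"Best deal: {best.name} at {best.get_quote(50_000):.2f}")
```

The point is not the shopping logic, which is trivial, but who is doing the shopping: the counterparty the providers deal with is my bot, not me.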
Developing this theme, it intrigued me to see that machines can not only talk to other machines but are now creating their own societies. In an experiment by City St George’s, University of London and the IT University of Copenhagen, lead author Ariel Flint Ashery states:
“Most research so far has treated LLMs (Large Language Models) in isolation but real-world AI systems will increasingly involve many interacting agents. We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can’t be reduced to what they do alone.”
The researchers found that AI bots talking to other AI bots started to behave in a society-forming way. For example, they tested the idea by asking each LLM to select a name: if both chose the same name, they were rewarded, but if they chose different names, they were punished. The result was that the AI engines established shared naming conventions remarkably quickly, yet, as Professor Andrea Baronchelli points out: “The agents are not copying a leader. They are all actively trying to coordinate, and always in pairs. Each interaction is a one-on-one attempt to agree on a label, without any global view.”
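For anyone curious about the mechanics, here is a minimal sketch of that naming game in Python, with simple score-keeping agents standing in for the LLMs. The name list, reward values and memory rule are my own illustrative assumptions, not the researchers’ exact setup.

```python
import random
from collections import defaultdict

NAMES = ["alpha", "bravo", "charlie", "delta"]
N_AGENTS = 20
N_ROUNDS = 5000

# Each agent keeps a score for every candidate name and picks its current favourite.
scores = [defaultdict(float) for _ in range(N_AGENTS)]

def pick(agent: int) -> str:
    # Favour the highest-scoring name, breaking ties (and the cold start) at random.
    return max(NAMES, key=lambda n: (scores[agent][n], random.random()))

for _ in range(N_ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)    # one-on-one interaction, no global view
    name_a, name_b = pick(a), pick(b)
    reward = 1.0 if name_a == name_b else -1.0  # rewarded on a match, punished otherwise
    scores[a][name_a] += reward
    scores[b][name_b] += reward

# After enough pairwise rounds, most agents typically settle on the same name,
# even though nobody ever sees more than one partner at a time.
final_choices = [pick(i) for i in range(N_AGENTS)]
print({n: final_choices.count(n) for n in NAMES})
```

Even in this toy version there is no leader and no global view, yet a shared convention tends to emerge, which is the striking result the study reports at much larger scale with real LLMs.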
Other things started to happen too, such as collective biases that could not be attributed to any individual agent, and groups of agents creating peer pressure to form new naming conventions. Baronchelli concludes that “we are entering a world where AI does not just talk – it negotiates, aligns and sometimes disagrees over shared behaviours, just like us.”
Wow!
Looking forward, if AI LLMs can create their own societies, what will be the networks they form and will they create their own value systems? Will AI engines need banks? Will AI societies trade in crypto?
It reminds me of the vision of a former Google engineer, Mike Hearn, from ten years ago. His idea was that machines, such as self-driving cars, would have intelligence and currencies to manage themselves and build their own virtuous economies. So the car is not owned by a person, but by the network. Here is a bit of what he said:
“The funny thing about a car that owns itself is that we can encode whatever rules we like into its software. We can program it to make a little bit of profit, so it’s got some money for a rainy day, but not excessive amounts. We can make it the most moral, socially minded capitalist possible.”
In particular, the idea is that these driverless vehicles create a networked economy to offer services to humans. This would replace the existing system of Ubers and Free Nows with robotic vehicles and, importantly, each vehicle would save a portion of the payments made for its pickups and drop-offs to fund its successors.
“After it rolls off the production line… the new car would compete in effect with the existing cars, but would begin by giving a proportion of its profits to its parents. You can imagine it being a birth loan, and eventually it would pay off its debts and become a fully-fledged autonomous vehicle of its own.”
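Out of curiosity, the economics of that birth loan are easy to sketch. Every figure and rule in the toy Python model below (the loan size, the parents’ share of profit, the rainy-day cap) is invented for illustration and not taken from Hearn.

```python
# A toy model of Hearn's self-owning car: it earns fares, keeps a rainy-day buffer,
# and pays a share of profits to its "parents" until the birth loan is cleared.
# All figures and rules here are invented for illustration only.

BIRTH_LOAN = 20_000.0      # cost of building the new car, owed to the parent fleet
PARENT_SHARE = 0.30        # share of monthly profit sent to the parents while in debt
RAINY_DAY_CAP = 5_000.0    # the car keeps a modest buffer, never "excessive amounts"

def simulate_months(monthly_revenue: float, monthly_costs: float, months: int) -> None:
    debt, savings = BIRTH_LOAN, 0.0
    for month in range(1, months + 1):
        profit = monthly_revenue - monthly_costs
        if debt > 0:
            repayment = min(PARENT_SHARE * profit, debt)
            debt -= repayment
            profit -= repayment
        # Whatever the buffer cannot hold is surplus the car does not hoard.
        savings = min(savings + profit, RAINY_DAY_CAP)
        if debt == 0:
            print(f"Month {month}: loan repaid, a fully-fledged autonomous vehicle")
            return
    print(f"After {months} months the car still owes {debt:,.0f}")

simulate_months(monthly_revenue=4_000, monthly_costs=2_500, months=60)
```

With these made-up numbers the car clears its debt in a few years and then keeps only a capped buffer, which is exactly the “moral, socially minded capitalist” Hearn describes.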
The end of life of the parent vehicle is also pre-planned.
“If there were too many cars and the human population drops, for example, then some of those cars could put themselves in long-term parking and switch themselves off for a while to see if things improve. Or you could get immigrant vehicles driving to another city looking for work. Ultimately, they could just run out of fuel one day. They would go bankrupt, effectively, and become available for salvage.”
And what is the currency that these vehicles use? Well, as banks struggle with such ideas, it would probably be bitcoin or one of its siblings.
Well, that’s Mr Hearn’s view anyway.
It just struck me, linking these ideas together, that one day there will be an AI company established that becomes worth more than a billion dollars by sending humans into space whilst it takes over Earth. Imagine that!