Technology can assist in guiding us through our complex world, but this will require an evolution in thought.
I have recently joined an international group (mainly from the USA) that has formed a think tank: the Future of Thought Consortium. Based on its first meeting and the documentation provided, I have started to formulate my thoughts on this, with the aim of furthering the discussion.
As big data technologies become more prevalent and AI capabilities continue to evolve, there is great potential for these two areas to intersect and create significant value, especially in relation to the complex society and environment humanity has shaped around itself. This complexity has now moved beyond human intellectual capacity. Our education systems, experts and leaders are no longer able to keep up. We know what the problems are (democracy, environment, technology, equality), but we are unable to come up with the right solutions. The result is that we will also have to change our thought processes; we can no longer rely on ways of thinking developed during previous stages of humanity.
To address this complexity, we need to create a reliable and balanced knowledge system on which we can base future decision-making. Technology can assist in creating knowledge (data) based systems, using quantum computing for their processing and making actionable outcomes accessible with the assistance of AI. However, this intersection brings both pros and cons that must be carefully considered. Science and technology can assist, but it is we humans who will have to take the lead in designing the best possible system around this.
AI offers the unique capability of providing our society with the combined knowledge of humanity. Over the last century we have become increasingly specialised, with deep, detailed knowledge of often narrow areas. Large parts of this knowledge have been digitised over the past decades. AI offers the opportunity to unite that expertise.
One of the key features of big data technologies is their ability to decentralise power. With data becoming more freely available and accessible, and technology platforms democratising the application of data, individuals and communities will have greater access to relevant information. This shift in power, from environmental scientists, consultants, academics, and government agencies towards individuals and communities, will have both positive and negative consequences.
On the positive side, this shift in power could lead to enhanced engagement by society in decision-making processes. With more information available, individuals and communities can be better informed, leading to more meaningful consultation processes and greater involvement in decision-making. Ultimately, consensus decision-making might arise, whereby communities, rather than governments, decide on an issue for themselves. This could lead to new collaborative approaches to decision-making involving a wider range of stakeholders.
However, the potential for ‘fake data’ to infiltrate AI could undermine this power shift. Issues observed to date include fake and manipulated data in social media metadata and fake images. Deepfakes will only become a bigger problem as this technology is taken up by the broader public.
Power decentralisation is also likely to affect government regulators and, in turn, the political system. Regulators make the best decisions when they have access to real-time, reliable, and accurate data. Conversely, the empowerment of communities could make the regulator’s job more difficult, particularly when overlaid with political pressure from communities that have access to high-quality data.
At an industry level, the disruption AI brings could cause significant changes to supply chains and the businesses within them. It could also lead to rapid changes in the skills the industry requires. That change ranges from in-house reskilling at the lowest end, through external retraining and further education, right through to cuts in the workforce.
AI is fundamentally about intelligence, which the evolutionary biologist George John Romanes defined in 1882 as follows: “Intelligence, then, I take to be the faculty of adapting our mental representations of the environment to the requirements of the moment by means of flexible combinations of the simpler forms of behaviour which have been previously acquired.” In other words: intelligence is the ‘capacity to do the right thing at the right time’.
Hence, AI is fundamentally about ethics. As AI is based on the transfer of knowledge, a key ethical consideration is the bias contained in that knowledge. One of the issues with current AI is that it is built mostly on historic data, which inherently carries the biases of the time it was collected. AI relies on training machine learning algorithms, and algorithmic bias has produced some high-profile failures, such as Amazon’s résumé-screening tool, which was scrapped after it was found to penalise female applicants. Therefore, AI algorithms must be designed with full awareness of potential biases.
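To make this concrete, the sketch below (a toy example, not Amazon’s system; all data is invented) shows how even the simplest model trained on biased historical records reproduces that bias, and how a standard fairness check such as the ‘four-fifths’ disparate impact ratio can flag it:

```python
from collections import Counter

# Hypothetical historical hiring records: (group, was_hired).
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

# "Train" the simplest possible model: the historical hire rate per group.
hired = Counter(group for group, was_hired in history if was_hired)
totals = Counter(group for group, _ in history)
model = {group: hired[group] / totals[group] for group in totals}

print(model)  # {'A': 0.8, 'B': 0.4} -- the model reproduces the past bias

# A common fairness check: the 'four-fifths' (disparate impact) ratio.
ratio = model["B"] / model["A"]
print(f"disparate impact ratio: {ratio:.2f} (often flagged if below 0.80)")
```

Nothing in the training step is malicious; the bias arrives entirely through the historical data, which is precisely why awareness at the design stage matters.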
AI’s use for knowledge-based outcomes relies on its ability to be dynamic, accurate, and personal. Accuracy is critical to allowing AI to become an authoritative tool in decision-making. Precautions against fake data and deepfake images will need to be built into AI, and detection processes need to be well developed and reliable. If trust in the accuracy of the data is undermined, the potential benefits of AI could break down. At the same time, we must accept that some inaccuracy is inevitable, which requires a solid understanding of the technology and algorithms behind AI.
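As one small illustration of what a detection process can look like at the data level (a minimal sketch; the file name and manifest are hypothetical, and real provenance schemes such as C2PA content credentials go much further), a dataset can be checked against a fingerprint recorded when it was first published:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical manifest of trusted hashes, published alongside the data.
TRUSTED = {"observations.csv": "<hash recorded at publication>"}

def is_untampered(path: Path) -> bool:
    """True only if the file still matches its published fingerprint."""
    return sha256_of(path) == TRUSTED.get(path.name)
```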
AI has the potential to both help and harm people through the collection and representation of their data. Privacy concerns around digital data collection, storage, and use are already well understood. For AI to be successful, it must be able to rely not only on real data but also on synthetic data. Synthetic data, generated through computer algorithms rather than collected from real-world observations or experiments, has both positive and negative impacts on data ethics. On the positive side, synthetic data can be generated to be non-identifiable, which could help resolve the data ethics concerns currently persisting in the community.
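A minimal sketch of the idea, assuming we only need to preserve simple per-column statistics (the records are invented; a real generator would also have to preserve correlations between columns and guard against memorising outliers):

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical real records (age, income) -- never released directly.
real = [(34, 52_000), (45, 61_000), (29, 48_000), (52, 75_000), (41, 58_000)]
ages = [age for age, _ in real]
incomes = [income for _, income in real]

def synthetic_record() -> tuple[int, int]:
    """Sample from normal distributions fitted to each real column,
    so no synthetic record corresponds to a real individual."""
    age = round(random.gauss(statistics.mean(ages), statistics.stdev(ages)))
    income = round(random.gauss(statistics.mean(incomes),
                                statistics.stdev(incomes)))
    return age, income

synthetic = [synthetic_record() for _ in range(1_000)]
print(synthetic[:3])
print("real mean age:", statistics.mean(ages),
      "| synthetic mean age:", round(statistics.mean(a for a, _ in synthetic), 1))
```

The aggregate statistics survive while the individual identities do not, which is the ethical trade the paragraph above describes.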
In the end, what we would like to achieve is better, faster, and more reliable knowledge that we can use to manage our complex society. To guide such a process, we need experts across all elements of society to collaborate and to think through how to shape our future, with the aim of creating reliable and balanced systems. Only humans can judge issues such as values, ethics, and emotionality. Simply creating a ‘rational’ system is not going to work and would make the situation worse rather than better. In many ways this is a philosophical process rather than a scientific one.
Paul Budde