One day last week, I received several emails and links to stories that form the basis of this article. It started with an ABC News article I read: ‘Twitter is becoming a “ghost town” of bots as AI-generated spam content floods the internet’. I am part of an international network of ICT experts, and I forwarded the article to the group. This led to an avalanche of responses with further information on the massive social, political, economic, and technical changes that are cascading as a result of the commercialisation of Artificial Intelligence.
As mentioned before, the potentials of AI are enormous. In our increasingly complex world, we need all the intelligence we can gather to find solutions. AI can be extremely helpful in addressing these issues and finding better and faster solutions. AI will also be very disruptive, most likely significantly more disruptive than previous developments such as the arrival of the internet, the mobile phone, social media, etc.
The following video clip – AI Generated Videos Just Changed Forever – discusses Sora, the new AI video service from OpenAI, showing both the advantages and potential disadvantages of this new development. The impact on the film and video industry will be massive. Why use studios if you can make your own video for free? The fallout can already be seen in this article: ‘Tyler Perry, fearful of AI advances, halts $800M Atlanta film studio expansion’. And why do you need the news media if you can generate your own AI news? That seems to be the conclusion of Facebook (Meta) as it abandons its Australian news content deals, a loss of potentially hundreds of millions of dollars to the industry.
An older presentation (2019), titled ‘Hacking Ten Million Useful Idiots: Online Propaganda as a Socio-Technical Security Project’, discusses sociotechnical systems (STS). It emphasises the need to address hacking, misinformation, and conspiracy theories in a multifaceted way. From an organisational point of view, this is an approach to complex organisational work design that recognises the interaction between people and technology in workplaces. The term also refers to coherent systems of human relations, technical objects, and cybernetic processes that underpin large, complex infrastructures. Society, and its constituent substructures, qualify as complex sociotechnical systems.
The negative developments of AI were forecast, and in the presentation, security experts discuss what is needed to protect our society from the misuse of emerging AI-based technologies. As they show, this is not an easy task, as creating misinformation and fake news is far easier than unravelling it. By the time the latter happens, the misinformation has already been replicated by bots and gullible people (the ‘ten million useful idiots’). Even without AI, we have seen how easy it is to spread conspiracy theories and how easy it is for people to profit from them, whether out of ideology, political bias, or pure criminality. Add AI to this, and it is clear where the development is going. Targeting gullible people has also become far easier thanks to big data combined with data science. Once targeted, these people spread the misinformation further, infiltrating other groups until suddenly it is taken up throughout society.
This was another response from one of my American colleagues: ‘I was recently asked to serve on the board of our local hospital and now get a front-row seat to watch health care issues unfold. One of the biggest financial issues for healthcare providers in the US is the tendency for health insurance carriers to reject claims almost automatically, which triggers a paperwork avalanche between patient, provider, and insurance company. Most of the major insurance carriers are now using AI to reject claims. AI is far faster and better than humans at finding small issues in a claim that can cause it to be rejected. We are looking at new hospital management software that will use AI to help pre-qualify the claims before they are sent in, but it is going to be 9–12 months before we can get it fully implemented. In the meantime, the paperwork avalanche continues to grow.’ However, AI is also used to make fraudulent claims, so the above message was followed by this reaction: ‘It’s a nightmare across the board. Financial healthcare institutions are trying to protect themselves from AI-generated fraud.’
AI has also infiltrated Google Search. A well-documented example is the rise of fake airline customer services. By replicating legitimate airline service channels, scammers try to get you to rebook flights that are, of course, fake. LinkedIn has been infiltrated by bot messages that push certain fake services to the top of a search. All of these developments are very worrying. It is important that fewer people fall into the gullible category. The reality, however, is that most people will become victims in one way or another, as most will increasingly be unable to distinguish what is true from what is fake. As truth is fundamental to the way our society works, undermining it will lead to social chaos.
Paul Budde