© 2023 by Informational Structures Research Lab - office[at]istrela.org

AI – the Next Level of the Digital Divide

In the early days of the internet, the term "digital divide" referred to the gap between those who had access to computers and the internet and those who did not. This divide led to significant differences in opportunity, education, social interaction, and economic advancement. It then widened along the lines of digital competencies, and now seems to be heading for a new peak with the hype around AI.

As artificial intelligence (AI) becomes more integrated into our daily lives, there is a growing concern that a new level of the digital divide will emerge – an "AI divide." This could manifest in several ways: unequal access to AI technology, gaps in AI literacy, job displacement, bias built into AI systems, and AI-powered surveillance.

To address these issues, it's important to advocate for policies that ensure equitable access to AI technology, promote AI literacy and transparency, protect against job displacement, enforce strict standards to prevent bias in AI, and regulate the use of AI in surveillance. Public and private sector collaboration, as well as international cooperation, will be crucial in ensuring that the benefits of AI are shared broadly, rather than contributing to greater inequality.


AI and Reliability

The reliability of AI is a complex issue that depends on a number of factors, including the quality of the data used to train the AI system, the complexity of the task that the AI system is being asked to perform, and the environment in which the AI system is being deployed.

In general, AI systems are more reliable when they are trained on large, high-quality datasets that are representative of the tasks that they will be asked to perform. AI systems are also more reliable when they are designed to be robust to noise and outliers in the data.
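One simple, concrete form of this robustness is screening training data for obvious outliers before a model is fit. The sketch below uses Tukey's IQR fence, a standard statistical rule; the function name, threshold, and sample data are illustrative assumptions, not taken from the source.

```python
import statistics

def filter_outliers(values, k=1.5):
    """Drop points lying more than k * IQR outside the middle 50% (Tukey's fence)."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles of the sample
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

# Hypothetical sensor readings; 95 is a glitch, not a real measurement.
data = [10, 12, 11, 13, 12, 11, 95]
clean = filter_outliers(data)  # the glitch value 95 is removed
```

A real training pipeline would use more careful, domain-aware screening, but the principle is the same: a model trained on `clean` is less likely to be skewed by a single corrupted reading.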

However, even the most reliable AI systems can make mistakes. This is because AI systems are trained on data that is collected from the real world, and the real world is a complex and unpredictable place. As a result, AI systems can sometimes make mistakes when they are asked to make predictions or decisions in new or unexpected situations.
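One common mitigation for this failure mode is to have a system abstain rather than guess when an input falls outside the range of its training data. The following is a minimal sketch of that idea; `predict_with_guard`, its `margin` parameter, and the toy model are hypothetical names chosen for illustration.

```python
def fit_range(train_xs):
    """Record the span of inputs seen during training."""
    return min(train_xs), max(train_xs)

def predict_with_guard(model, x, lo, hi, margin=0.1):
    """Abstain (return None) instead of extrapolating when x is far outside
    the training range; otherwise defer to the model."""
    span = hi - lo
    if not (lo - margin * span <= x <= hi + margin * span):
        return None  # signal "don't know" rather than guess
    return model(x)

lo, hi = fit_range([1.0, 2.0, 3.0, 4.0])
double = lambda x: 2 * x           # stand-in for a trained model
predict_with_guard(double, 2.5, lo, hi)   # in range: returns 5.0
predict_with_guard(double, 50.0, lo, hi)  # out of range: returns None
```

Production systems use far more sophisticated out-of-distribution detection, but the design choice is the same: an explicit "don't know" is often safer than a confident wrong answer in an unexpected situation.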

It is important to remember that AI systems are tools, and like any tool, they can be used for good or for ill. The reliability of AI systems depends on the people who design, develop, and use them. It is important to design, develop, and use AI systems in a responsible and ethical manner, and to be aware of their potential limitations. Improving the reliability of AI systems means taking the following tasks into account: