© 2023 by Informational Structures Research Lab - office[at]istrela.org

Is AI getting dumber the longer it is used?
Are there plausible arguments for this?

Was there a redesign of the model OpenAI is using, or might there be other reasons why the chatbot's performance seems to drop? These are questions that are hard to answer without knowing some of OpenAI's secrets.
see -> "Is ChatGPT getting dumber?" or "Stanford scientists find that yes, ChatGPT is getting stupider"

Possible answers arise from complexity and systems theory in conjunction with deep learning, which might lead to self-referential effects in the data used. Since AI is described as imitating human intelligence and the models are trained on human input and output, there is inevitably interaction between AI systems and humans. Once AI-generated output re-enters the pool of training data, the system partly learns from itself, so human self-reference can, and perhaps must, be reflected by an AI.
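This feedback loop can be illustrated with a deliberately simplified toy simulation. The sketch below is an assumption-laden stand-in, not a model of any real system: a Gaussian distribution plays the role of a generative model, and each "generation" refits the model to samples drawn from the previous generation's model, mimicking AI output feeding back into training data.

```python
import random
import statistics

def collapse_demo(generations=20, sample_size=50, seed=42):
    """Toy sketch of self-referential training (hypothetical parameters).

    A Gaussian stands in for a generative model; each generation refits
    mean and standard deviation to samples drawn from its own output.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the "real" data distribution
    history = []
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.mean(samples)    # refit on the model's own output
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

sigmas = collapse_demo()
# Over many generations the fitted spread tends to drift and narrow:
# information about the tails of the original distribution is lost.
```

In this toy setting, the estimation error of each refit compounds across generations, which is the statistical intuition behind the self-referential degradation discussed above.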

This remains to be demonstrated!

AI - the next-level Digital Divide

The Digital Divide has been an issue since the spread of computers and the associated ability to communicate and obtain information freely. In the 1980s, access to the nascent medium was reserved for a few insiders who were able to type long command sequences to launch applications on the devices of the time. Establishing a network connection via a modem required expertise and considerable effort. Graphical interfaces with menu navigation were still a long way off. This changed significantly with the introduction of the World Wide Web (Dijk, 2005, 97).

With the simplification of the previously very complicated entry into the various network services, technical laypersons could also gain access. What had previously been a mostly technical hurdle thus became a more or less socially conditioned one. Only then did this kind of exclusion become an issue, because the use of network services had previously been reserved for very few specialists.

With the opening up of the network world by means of browsers and links - the World Wide Web is based on the work of the British scientist Tim Berners-Lee at CERN in 1989 - the WWW began to become a mass phenomenon. Use of the WWW grew very slowly at first and almost linearly in the early years, but from around 1996 it followed an exponential growth curve, both in the corporate and the private sector.

Access to the Internet - the opportunity to obtain information, possibly also to provide it, to communicate and to participate in a discourse - thus became an important asset in the sense of Max Weber’s interpretation, who had stated: “Not only material goods belong to ’assets’, but rather all opportunities ...” (Weber, 2017). Thus, all those who were denied this asset, who were too poor or too old to participate in the new medium, became victims of the Digital Divide.

This interpretation of the Digital Divide is still a topic in the socio-political context - no longer to the extent of twenty years ago, but still noticeable, for example, in the discussion about broadband expansion for the fastest possible connection to the network and in the emphasis on the importance of notebooks and tablets in schools.

Observing the development of AI strongly suggests that AI represents a larger "digital divide" than the inequalities in the use of network services and social media in recent decades - and this kind of divide is growing much faster than previous social inequalities and differences. It is perhaps one of the most important issues to be discussed for a future, socially aware society.