After a brief period in which the early “machine learning” dream of eventually developing a human-like, or even super-human, Artificial General Intelligence (AGI) was more or less put to rest, subsumed by the development of more targeted and less ambitious computational and robotic technologies, the global public release of Generative Pre-trained Transformer-based software such as ChatGPT and DALL-E meant that “in 2023, everything changed” in both the professional and the public discourses surrounding Artificial Intelligence in general, and AGI in particular, as Auli Viidalepp reveals so cogently in her recent doctoral dissertation, The Expected AI as a Sociocultural Construct and its Impact on the Discourse on Technology (2023).
As Viidalepp shows, such discourse “is saturated with reified metaphors [of anthropomorphism and technological determinism] which drive connotations and delimit understandings of technology in society” (2023: 13). Moreover, along with reviving ancient fears, hopes and debates on the role of “technology” in the lives of human beings, the current discourse surrounding these technologies once again makes salient the foundational biosemiotic question: How are we to understand the nature of an “intelligent” system per se? Is it the result of algorithmic computation and stochastic probability functioning? The “imprinting of meaning” via the creation of embodied and enacted “functional circles” in an otherwise unlabeled world? Or the navigation of coordinates within a cultural (or even natural) matrix of possibility, actuality and lawfulness, as described by Peircean triadic sign logic?
Such questions have always been, and remain, at the heart of the biosemiotic research agenda, and in this talk I will take the opportunity to reflect on the implications that a serious consideration of the biosemiotic perspective on organismic intelligence might bring to the current discourse surrounding the possibilities and perils of AI and AGI, as currently conceived.