In recent discussions about generative AI, critical research has largely focused on the technology's notorious tendency to dream, or more accurately *hallucinate*, its output rather than produce content based on actual knowledge and understanding. AI operates mainly on correlations rather than causalities, leading machine learning algorithms to generate text, images, sounds and videos according to what seems most probable. As a result, sexist, ableist, racist, classist and other biases that contaminate datasets can be amplified, appearing in social media, news and elsewhere across the World Wide Web. In addition, deep fakes and their rapid spread across multiple platforms have increased exponentially, fuelling international conflicts with misinformation, from wars to protests, and becoming a serious threat to open and democratic societies. This is accompanied by a general information overload and the daily inability to truly verify the origin of any given content.
On the flip side, image-generating AI tools seem to have considerable difficulty dreaming up small symbols, letters and text. AI fakes can therefore often be exposed through the text elements in images, which is why the most persistent deep fakes tend to lack these revealing elements. These limitations of AI image generation can be put to productive use by engaging with the concept proposed here, called *Interface Hallucinations*.
A smartphone is a device capable of transmitting, receiving, and sensing. It uses Bluetooth and Wi-Fi to connect to networked technologies in its immediate proximity, GPS to act as a geospatial datapoint, and cameras and microphones to record its surroundings. Installed applications and user settings personalize the displayed information, from the user's language and real-time weather data to time-sensitive notifications and personalized backgrounds. All these aspects are displayed as textual and symbolic information on the smartphone's graphical user interface (GUI), according to the particular device, its operating system and software version. Interface Hallucinations explores this complex assemblage of operational images on smartphone screens as a reference point for non-AI-generated content.
Based on text-to-image experiments, the presentation shows how AI fails to produce images that represent the smartphone GUI, and subsequently introduces *screenshots and screen recordings taken while photographing with a smartphone* as a low-tech method of hacking the distribution of misinformation. A screenshot would not only show the object captured through the camera lens but would also include the graphical elements of the smartphone's interface and camera app, showing, for example, the clock, language, shutter button, zoom bar, settings and photo album. Even more elaborately, a screen recording can intentionally move between applications to show the camera image as well as the exact GPS location, connected devices, relevant time-sensitive notifications, and so on. Screenshots and screen recordings thus become a strategy for producing trustworthy images, with the user interface serving as a catalyst for identifying images that are not generated by AI and that are actually related to the exact context in which they circulate. The images produced will be poor in quality but strong in meaning. In this way, *Interface Hallucinations* intersects the disciplines of AI criticism and interface studies with critical and participatory journalism and activism, raising the question of what it means to produce true images and broadening the discourse beyond the field of artificial intelligence.