This audiovisual piece was generated by Natural Intelligence (NI). The most important quality of any intelligence in this field is how it integrates all of its aspects. We used every tool available to build a human learning algorithm capable of predicting the best possible output, even in the absence of significant performance gains. The key component of this system is a basic understanding of the real-life data we used to train our natural neural networks. It builds on real-time data accumulated over time, allowing it to predict, for example, what happens when the world becomes very different. To find out what the algorithm actually thinks about, we take advantage of the previously collected data and put it to the test.

The sound is based on electrical signals from the analog synthesizers LYRA-8 and PULSAR-23, produced by SOMA LABORATORY in 2020. By changing signal parameters and routing, we generated a source for training our natural neural networks. This also makes the system sensitive to what it sees. Meanwhile, the visual content focuses closely on the details of what we listen to during the training phase. For the generation, we prepared a specific pipeline that transforms signals from the trained networks into the audiovisual piece presented above. The generation process is highly optimized so that all data is produced in real time.
Hailing from St. Petersburg, the artist duo of Kristina Karpysheva (media artist) and Sasha Letsius (audiovisual designer) creates otherworldly audiovisual projects that question the notion of reality.