Odyssey
First fashion story shot on Mars after humans successfully terraformed the red planet. Soundscape generated by AI. 3232x2160 // 18 sec // 30fps // mp4.
Thor Elias Engelstad (1975) is a Norwegian-born, Australia-based visual artist exploring the space between fashion, art and technology. He has exhibited in New York, LA, Paris, London, Sydney, Tokyo and Dubai, and his work has been featured in international publications such as A Cultural History of Fashion in the 20th Century, Harper’s Bazaar, W Magazine, Cosmopolitan, Oyster, NOI.SE Mag, Vulture Magazine, Carbon Copy, Black Magazine, Laud Magazine, Fucking Young!, Art World and Australian Art Review. His clients include Warner Bros, Village Roadshow, Samsung, Volvo, Pandora, HP, Polaroid, Valley Eyewear and The Anti-Order.
Raw photograph; all effects by Mother Nature. Maldives, 2017, 8K resolution, 20 MB. Part of Art Innovation’s Miami exhibition ‘Floating Pixels’ during Art Basel, displayed on 60 ft digital billboards mounted on boats. The artwork is verified by Verisart and includes a Certificate of Authenticity on the blockchain, transferable to the collector.
Exploring the notion of machine evolution towards a sentient state, in which synthetic entities experience feelings and sensations like human beings. Harnessing an ensemble of generative tools, the visuals and music portray sentient machines performing a conscious dance of mindful movement, authentic self-expression and self-discovery: a celebration of singularity day. Keeping the generated frames raw, I felt, places an emphasis on the beauty of imperfection and acts as a timestamp of a fleeting moment in the evolution of generative neural networks, which are advancing exponentially. The music was created with two generative models: an image-to-text model analysed the visuals and converted them into descriptive words, and a text-to-music model then generated sound from that text prompt, a process well suited to the title of this exhibition, The Sound of Pixels.