The recording of metaverse experiences supports a variety of use cases, including collaboration and VR training. Such Metaverse Recordings can be created during the 3D rendering process as multimedia and time series data alongside the audio–video stream delivered to the user. To search a collection of recordings, Multimedia Information Retrieval methods can be applied, and Metaverse Recordings can also be queried and accessed based on the recorded time series data. However, presenting time-series-based Metaverse Recordings in a human-perceivable form remains a challenge. This paper demonstrates an approach to generating human-perceivable media from time-series-based Metaverse Recordings with the help of generative artificial intelligence. Our findings show the general feasibility of the approach and outline its current limitations and remaining challenges.