TechArt Forum – MetaCity Oulu & Experiences, 30 minutes, in English
1. Windwalker Echo searches for Risto, finds the Virtual Helsinki ruins and then Risto, 2 min
Following the beginning of the Keuda 2022 version, to be recorded in English
WWE1: Hi, you all – Risto was supposed to give this talk, but he got immersed in some gaming world.
This is quite a sandy desert!
Let's see if we can find him reminiscing about his old pioneering days in the ruins of Virtual Helsinki.
I am Windwalker Echo.
Usually I hunt giants. Now I search for Risto.
No, not here, let us go to the new frontiers.
Yes, here he is at the edge of the Metaverse, but he seems stuck. Let's go and poke him. I hope he boots up.
1WWERisto: Thank you, Echo, this bleeding edge freezes me sometimes. Kind of you to come and wake me up. And hello to you all. I will talk about the Metaverse from both the artistic and the civil engineering viewpoints. Let us move to our first example town.
2. Game artists – RDR2 town and explanation 1 min – an active city, we will return to it later
Explanation of game art / scenery / virtual production
2RDR2Risto: This is a screen capture from my gameplay. The game is Red Dead Redemption 2, a major production. There are several small towns, wilderness with all the typical American ecosystems, and rural surroundings. Lots of beautiful settings. This is just a very small corner of the game. As you see, lots of people move in the streets and shops. They may be other players or AI-driven NPCs that belong to the game.
Let us leave this game environment – I only wanted to show a finished experience before I show you the quick and dirty ones I have prepared myself, to demonstrate what can be done quickly, and by a person who is not an artist but appreciates those who are. Let's return to the company of Windwalker Echo.
2B Scenery artists, Isle w. Echo, 30 sec
2Scenery: I bought this island from the Unreal Engine Marketplace for 50 euros. Unreal Engine is free to use for all video production and even game production unless a product's gross revenue exceeds one million dollars. I transferred Windwalker Echo here and directed her to roam the island. If I were an artist, I could design my own scenes, or scan them from real scenes and objects, and sell them in any similar marketplace. This is again a direct capture of the real-time screen. The next one is rendered.
3. DALL-E Gallery, change in EU legislation + DALL-E Oulu, Helsinki, Mountains 2 min
3Dalle: This place requires more explaining. The tool here is iClone 8, mainly intended for animating human-like characters. Character Creator is part of the package, and my own character was made with it. The paintings on the wall I made with DALL-E 2; each has its own story and purpose. DALL-E is an AI that produces pictures you ask for. After doing the animation and the camera work, this is rendered into a video. iClone animations can be transferred to other environments for better results, but this is all inside iClone.
There is a crucial change in copyright law coming into force after this year's end, required by an EU directive. There are many alterations, but the crucial one here is that AI is allowed to read and use publicly available materials, and that parody, caricature and pastiche can be freely produced. Thus we no longer need to concern ourselves with the copyright issues of AI-assisted works. If anyone has copyright on these, it is me, regardless of which works my AI assistant used as inspiration. This holds as long as the result is not pure imitation.
Let's look more closely at three DALL-E 2 pictures. This first one – the central part of the picture – I initially painted with NVIDIA Canvas. In Canvas you do not select colors for your brush but bushes, stones, mountains, rivers and so on, and an AI then tries to assemble a credible picture from that. I then transferred the result to DALL-E 2, erased two areas and asked DALL-E to fill in a few details (inpainting). I also asked DALL-E to extend the photo beyond its original edges (outpainting).
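Those two edits are what image APIs call inpainting (repaint a masked area) and outpainting (extend the canvas). A minimal sketch of the same workflow with OpenAI's 2022-era image edit endpoint; the file names and prompt are placeholders, not the ones I actually used:

```python
import openai  # 2022-era OpenAI Python library (0.x API)

openai.api_key = "sk-..."  # your API key

# Inpainting: transparent pixels in mask.png mark the erased areas;
# DALL-E repaints only those regions to match the prompt.
result = openai.Image.create_edit(
    image=open("canvas_painting.png", "rb"),  # placeholder file names
    mask=open("mask.png", "rb"),
    prompt="a mountain landscape with a river and a few pine trees",
    n=1,
    size="1024x1024",
)
print(result["data"][0]["url"])

# Outpainting uses the same call: pad the source image with a
# transparent border and let the model fill in the new area.
```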
This is another one, where I asked DALL-E to create a picture illustrating a ship arriving in Helsinki. As you can see, the city resembles Helsinki, but DALL-E has clearly taken artistic liberties.
And here is my and DALL-E's attempt at illustrating a scene of a futuristic Tech Art Metaverse Oulu. The starting point was the famous police statue, which I varied. Besides DALL-E, there are Midjourney and Stable Diffusion, which produce comparable results and have already helped win art competitions.
Let's look at two quick experiments I made in telling stories. The first is a 3D animation in a small town I assembled. The second is illustrated by DALL-E, written by GPT-3 and performed by Synthesia. Thus it is all AI-produced, under my direction.
4. My own toy town example / Sad Town / stolen children's fable 2 min w. GPT-3 expl.
9 min 40 s up to this point
5. Real City – Paris, all GIS 1 min
5Paris: Let us return closer to this reality. I got this part of real Paris from the ESRI database, from the world-leading geographic information system models. I downloaded the model, found it was missing the streets, and added a shining metal sheet as the street layer. Then I added a go-kart and started driving. This is again a screen capture. But there is also a model of Oulu available on the net; let us see.
6. Real City – Oulu – Dinosaur 2 min
Real-time and Omniverse-rendered versions, with explanation; screen capture
6DOulu: The first one was animated in iClone; the voices were from Replica, a speech synthesizer whose voices are based on real voice actors. The building and animation were transferred to Unreal Engine, and the screen capture was taken from real-time rendering. In this picture you see a third piece of software, Omniverse.
Omniverse rendering: Here the scene is transferred to Omniverse for a path-tracing render. Given more time, this would approach movie quality, but here I do not even use camera features; everything is in focus. You just have to believe me about the movie quality. It only takes more processing time and more attention to detail, including the materials, lights, sounds and animations.
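In Omniverse, switching from the real-time renderer to the path tracer is essentially a render-setting change. A hedged sketch using the Kit Python settings API from inside an Omniverse app; the exact setting paths may differ between versions, so treat them as illustrative:

```python
import carb.settings  # available inside an Omniverse Kit-based app

settings = carb.settings.get_settings()

# Switch the RTX renderer from real-time mode to full path tracing.
# (Setting paths as I understand current builds; verify in your version.)
settings.set("/rtx/rendermode", "PathTracing")

# More samples per pixel lets the image converge toward movie quality
# at the cost of render time, as described above.
settings.set("/rtx/pathtracing/spp", 64)
settings.set("/rtx/pathtracing/totalSpp", 256)
```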
7. Matrix City, AI moves everything, traffic study etc., VR & AR, Pokémon, but also practical things: city planning, maintenance, remote guidance, 2 min
7MCity: What you see now is Matrix City. AI moves the people and cars, and this is again a screen capture. This illustrates what kind of complexity a good gaming computer can produce in real time. This could be a digital twin of a real city, providing information for planning, logistics, maintenance, tourists, whatever. We could use it to set up information displays for users of AR glasses, and allow people to enter virtually and meet those physically present in the city. The possibilities seem endless; many of them were initially conceptualized in the late '90s in Virtual Helsinki, or Helsinki Arena 2000, and they spread around the world. But at that time computers were not really up to the task and standards were not yet in place.
Let's pick up a lighter tune; here the voice is mine and the facial expressions are copied from my face!
8. Dance in the closet, bought the movements 1 min
9. Marmorilinna (Marble Castle) – walking 1 min
9Mlinna: That was rendered in Omniverse. I used to be capable of the Russian dance myself, but those movements I bought. Here I am giving a lecture. The rendering is directly from iClone; tools from Omniverse – Audio2Face and Audio2Gesture – move my face and body according to my voice. I can direct the style of the body movements and also the emotions for the facial expressions. And this walking: I draw a path on the floor and ask my character to follow it. There is a lot of very rapid development in automating character movements, making them more intelligent and natural. We will return to this later, but let's have a peek at another production-quality game and talk about VR glasses.
10. Assassin's Creed – mirror neurons – no VR glasses 1 min
10ACreed: Open-world virtual reality is amazing. This is from my gameplay in Assassin's Creed Odyssey. It requires at least a moderately powerful graphics processor to be enjoyable. Games like this could be used with VR glasses, but they often do not support them, for three reasons that are important to understand. The first is simple: a screen resolution that is enough for a narrow screen is not nearly enough when you stretch it to cover your whole view (the quick calculation below makes this concrete). Second, standalone VR glasses are simply unable to do the calculations that even a moderately powerful graphics processor handles. The beautiful richness cannot be achieved without more resolution and without connecting the glasses to a powerful PC. But the third point is the most difficult. A third-person view, as you see here, does the most astonishing thing to our minds. Mirror neurons make us feel what the character feels; empathy and identification make us feel like we are climbing, getting hit or becoming exhausted. You do not get these feelings from a first-person view without actually climbing or getting hit. And a third-person view is unnatural with VR glasses.
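The resolution argument is easy to make concrete with a pixels-per-degree estimate. The monitor and headset figures below are illustrative assumptions, not measurements of any particular device:

```python
def pixels_per_degree(horizontal_pixels: int, fov_degrees: float) -> float:
    """Rough angular resolution: pixels available per degree of view."""
    return horizontal_pixels / fov_degrees

# A QHD monitor viewed from a normal desk distance spans roughly 45
# degrees of your view; a standalone headset stretches ~1900 horizontal
# pixels per eye over ~100 degrees. (Both numbers are illustrative.)
monitor_ppd = pixels_per_degree(2560, 45.0)   # ~57 px/deg
headset_ppd = pixels_per_degree(1900, 100.0)  # ~19 px/deg

print(f"monitor ~{monitor_ppd:.0f} px/deg, headset ~{headset_ppd:.0f} px/deg")
# Roughly 3x fewer pixels per degree: a game that looks sharp on a
# monitor looks coarse when stretched across your whole field of view.
```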
11. Pro environments – BMW factory digital twin 1 min
11BMW: The industrial Metaverse is developing rapidly. Here you can see a complete digital twin of a factory manufacturing cars. BMW now has ten of its factories modelled as exact replicas, and all phases of assembly are included. All machines, robots and human workers collaborate virtually in real time. This is the best example of Metaverse capabilities so far. Various software packages, like CATIA and other CAD/CAM applications, provide their models to Omniverse, which maintains the whole scene. A human acting as the assembly worker in VR can tell a CAD modeler that the table is too low, the modeler makes it higher, and they are both happy. This can also be connected to sensors to show the real situation in the factory.
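That kind of collaboration rests on USD's layer composition: every tool publishes its own layer, and the shared stage composes them into one scene. A minimal sketch with the pxr Python bindings; the file names and prim path are made up for illustration and assumed to exist:

```python
from pxr import Usd, UsdGeom, Sdf

# The shared factory scene that a platform like Omniverse keeps live.
stage = Usd.Stage.CreateNew("factory.usda")

# Each tool contributes its own sublayer; the stage composes them all.
# (Illustrative file names standing in for CAD and robotics exports.)
root_layer = stage.GetRootLayer()
root_layer.subLayerPaths.append("cad_models.usda")
root_layer.subLayerPaths.append("robot_programs.usda")

# One participant's edit: raise the assembly table by ten centimeters.
# Everyone composing the same stage sees the change.
table = UsdGeom.Xform.Define(stage, Sdf.Path("/Factory/AssemblyTable"))
table.AddTranslateOp().Set((0.0, 0.0, 0.1))

root_layer.Save()
```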
12. Physical simulation, not plain animation, digital twins of Marbles 30 sec
12Marbles: I have mentioned NVIDIA's Omniverse. As you saw, it can be used for integrating other tools or for rendering. But it has other uses. All objects can have physical qualities, and the system can be turned into a simulator. Here the track and the marbles are physically simulated. So this is not an animated game but a physical simulation of the physical game, captured in real time from my screen. Thus if you add new items, they are taken into account just as they would be in real life. And yes, I did make more mistakes; I cut most of them out when editing this video. But you can clearly see how this enables a totally new kind of game.
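The essential difference is that a simulator computes each frame from physical state instead of reading it from keyframes, so anything you add participates automatically. A toy sketch of that principle; this is not Omniverse's PhysX solver, just the idea in miniature:

```python
GRAVITY = -9.81   # m/s^2
DT = 1.0 / 60.0   # one frame at 60 fps

def step(height: float, velocity: float,
         floor: float = 0.0, bounce: float = 0.6):
    """Advance a marble by one frame of simple physics."""
    velocity += GRAVITY * DT          # gravity changes velocity
    height += velocity * DT           # velocity changes position
    if height < floor:                # hitting the track: bounce,
        height = floor                # losing some energy each time
        velocity = -velocity * bounce
    return height, velocity

height, velocity = 1.0, 0.0           # drop a marble from one meter
for _ in range(180):                  # three seconds of simulated time
    height, velocity = step(height, velocity)
print(f"height after 3 s: {height:.3f} m")
```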
13. Mocap – face, body 1 min, Motion Diffusion MDM
13Mocap: I mentioned copying my facial expressions. There are several applications, nearly all using the iPhone and Apple's ARKit system. They measure facial expressions and transfer them to animation software in real time; a sketch of what one frame of that data looks like follows below. Body movements can be recognized from video or from a mocap suit. These suits have sensors attached at suitable places to measure the movement of the different joints and the body as a whole. Both approaches have their benefits. You can record motions and sell them; I bought the dance motions used in the rap example. There are also very promising studies on creating motion the same way DALL-E or Stable Diffusion creates pictures: the Motion Diffusion Model (MDM) creates human body motion from textual descriptions. We can expect that to be available soon. I have a Rokoko suit with gloves, so my digital twin can move its digits. But in this next demo, the user can also feel the fingers and whatever they touch in virtual reality.
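On the face side, ARKit reports each video frame as about 52 named blendshape coefficients between 0 and 1, and the animation software maps them onto the character's morph targets. A sketch of the receiving end; the coefficient names are real ARKit identifiers, but apply_to_rig is a hypothetical stand-in for your animation tool's API:

```python
# One frame of ARKit face tracking: real blendshape names, made-up values.
frame = {
    "jawOpen": 0.35,
    "eyeBlinkLeft": 0.90,
    "eyeBlinkRight": 0.88,
    "mouthSmileLeft": 0.20,
    "mouthSmileRight": 0.22,
}

def apply_to_rig(coefficients: dict) -> None:
    """Hypothetical: drive matching morph targets on a character rig."""
    for name, weight in coefficients.items():
        print(f"morph target {name}: {weight:.2f}")

apply_to_rig(frame)  # in real use: called once per tracked frame
```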
14. SenseGlove 1 min
14SenseG: SenseGloves are intended for training use. Many expensive or complex assembly and maintenance tasks cannot be practiced with real equipment, and plain VR is not enough to provide a real feeling. But haptic gloves give you the feedback and help create the body memory of what the work requires.
Let us now return to artificial voice actors. In the next demo you can hear Replica AI reading one of Kipling's most beloved poems, using my face.
15. Poem – AI voice better than mine 1 min
16. Marketplaces for animation, clothing, scenery 1 min, Text to 3D
16Market: There are several marketplaces for 3D assets that you can use in animation environments – with a lot of free material too. I have mainly used iClone maker Reallusion's Marketplace and Unreal Engine maker Epic's Marketplace. Both offer host-provided and user-provided content for sale. Typical items include scenes, clothes, characters, buildings and nature. Many include generative tools that let you give parameters and create whole forests or cities from their typical structures. Clothes often allow modifications too, and they adapt to the character's body. During the past two years I have paid far more for my virtual clothes than for my physical ones.
17. Metaverse Standards Forum (USD, glTF, MaterialX, coordinate system, IFC etc.; public procurement can enforce standards, and produced digital twins can be used on all Metaverse-compliant platforms, appearing in their proper places if they have absolute physical coordinates.) 2 min
17MetaSTD: Now we need to take up the question: why talk about the Metaverse at all? Can we not just continue talking about virtual reality and 3D game-like environments? There are two crucial requirements for the Metaverse. It is supposed to be persistent: when someone goes there and leaves a jigsaw puzzle half done, it should still be half done when the next person arrives. But it should also join other persistent sites in a compatible way, meaning I should be able to move myself from one virtual world to another. The Metaverse is not one platform or site; it is a compatible network of virtual reality sites.
The Metaverse Standards Forum discusses these issues. It now has nearly 2,000 organizations as members; Nokia just recently joined. Apple is the only major player that is not a member. As a starting point, the founders have declared Pixar's USD, Universal Scene Description, which has been used in several of these demos; MaterialX as a materials description standard; and glTF as a layer optimizing what is sent to peripheral devices, accounting for their processing power and bandwidth. All of these are mature in themselves but require some knitting together.
They also need extensions. All the major internet, geospatial and VR standardization organisations are members of the Metaverse Standards Forum. There is now major development work on adding standard geospatial coordinates to the Metaverse. Thus everything that has its proper coordinates set in a geospatial coordinate system would fall into its correct place on any platform when loaded. And if I wished to see something using AR glasses, I would see it when I am physically at the right coordinates.
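The core of that idea is a fixed mapping from geodetic coordinates to a shared Cartesian frame, so every platform computes the same position for the same latitude and longitude. The WGS84-to-ECEF conversion below is the standard one; using it as a Metaverse anchor is my illustration, and the Oulu coordinates are approximate:

```python
import math

# WGS84 ellipsoid constants (standard values).
A = 6378137.0                # semi-major axis in meters
F = 1.0 / 298.257223563      # flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Latitude/longitude/altitude to Earth-centered XYZ in meters."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

# Anchor a virtual object near Oulu's market square (approximate).
print(geodetic_to_ecef(65.013, 25.465, 10.0))
```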
But let us return to more demos. Here you see high-quality real-time animation – or actually, it is far better on my screen than video can convey. The equipment is a high-end PC with an RTX 3090 GPU.
18. Processor requirements, Meerkat demo 2 min
19. Streaming VR (demo) 1 min
19STRVid: We cannot assume everyone has a powerful computer just to experience the Metaverse. Luckily this is not mandatory. Just as we have streaming video, we also have streaming VR. This example I did not set up, but I tested it myself. It required nothing but a simple web browser and put no more stress on the computer than a plain video. Yet I could fly a helicopter in Matrix City, and all the cars and the people walking there, as well as my helicopter, were animated in the cloud. As long as the server is close enough – let's say not thousands of kilometers away – the delay will not be a nuisance. You can run this on a tablet or with VR glasses.
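The "close enough" remark comes down to propagation delay. A back-of-the-envelope calculation; the fiber speed is a standard rule of thumb and the 20 ms budget is the commonly cited comfort target for VR:

```python
FIBER_KM_PER_MS = 200.0  # light in optical fiber: ~200,000 km/s

def network_rtt_ms(distance_km: float) -> float:
    """Lower-bound round trip from fiber propagation alone."""
    return 2.0 * distance_km / FIBER_KM_PER_MS

for km in (100, 1000, 5000):
    print(f"{km:5d} km -> at least {network_rtt_ms(km):.0f} ms round trip")
# 100 km adds ~1 ms and 1000 km ~10 ms, which still fits within the
# ~20 ms motion-to-photon budget usually quoted for comfortable VR;
# thousands of kilometers start to become a nuisance.
```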
20. Virtual environment for robots to learn how to cope (robots dance) 1 min
20RDance: One more thing we have not yet discussed is teaching computers. Virtual reality is ideal for physical simulations, and if you put a digital twin of a robot into a simulation, it cannot distinguish it from physical reality. It can start practicing. If it falls, it learns but does not break anything. Only when your robot can do its tasks well in virtual reality do you allow it to try its skills in physical reality. If your digital twin and simulation are close enough to their physical counterparts, this is enough; a minimal sketch of such a training loop follows below. To conclude, you must suffer one more of my artistic attempts. I had to sing myself, as I could not find any other copyright-free singing. I use a Metahuman as my face here. The pine forest in the background is 64 square kilometers, but that did not seem to slow down my PC.
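In practice, the training loop described above is reinforcement learning against the simulator. A minimal sketch using the Gymnasium API; "RobotArm-v0" is a hypothetical environment id, and random actions stand in for the policy being learned:

```python
import gymnasium as gym

# Hypothetical id: substitute whatever simulated robot task you register.
env = gym.make("RobotArm-v0")

for episode in range(1000):            # many attempts cost nothing here
    observation, info = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # A real learner would query its policy; random actions are a
        # placeholder. Falling over only costs a reset, not a robot.
        action = env.action_space.sample()
        observation, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        done = terminated or truncated
    print(f"episode {episode}: total reward {total_reward:.1f}")
```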
21. Red River Valley 1 min, if time allows – everyone can try