Self-Aware Lara Croft Plays Tomb Raider - Level 2 - City of Vilcabamba

The process is explained here: LEVEL 2 - CITY OF VILCABAMBA

Added features:

Water: It can swim, and it knows it is underwater when the tint is blue. The sound of Lara diving or reaching the surface tells it when it can speak and when it can't. While underwater, it must use body movements to search, because in Tomb Raider Lara can't use her head to search underwater. When the oxygen bar drops to 60%, the danger state is triggered (see the first sketch after this list).

Keys: It can't know which key to use yet, but when the action button is pressed, Lara selects the correct key. When facing a box, she presses the action button.

Plates: These are now considered dangerous. It can only walk on them for a limited time before triggering an action like jumping, rolling, or crossing (see the second sketch below).

Blocks: Facing a block triggers pushing, but it only pushes when there is no other possibility. This prevents Lara from pushing blocks forever. For the moment, it can only push, not pull (see the third sketch below).

Life: It can now use medipacks when HP is below 100% (also covered by the third sketch below).

Accent: A prompt has been added to force the AI to generate British comments. This can't be set up with the cloned voice, which means it sometimes pronounces words like "can't" with an American accent. The system tries to write these words phonetically, and it works for "lever," which is now "leever" (see the fourth sketch below).

Orientation: It comments when it can't identify something or when it doesn't know where to go.

Asking for a hint: After being lost for a certain period, the system stops the run through the menu. This gives me a chance to suggest a specific area to search, which prevents wasting time while the bot searches for hours hoping for luck. It also creates a save state (see the last sketch below).
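A minimal sketch of how the underwater and oxygen checks could work, assuming the current frame is grabbed as an RGB array; the blue-tint ratios and the oxygen-bar coordinates are illustrative values, not the project's actual numbers.

```python
# Hypothetical underwater/oxygen check; thresholds and HUD coordinates are guesses.
import numpy as np

OXYGEN_DANGER = 0.60              # the danger state triggers at 60% oxygen
OXYGEN_BAR = (10, 210, 20, 30)    # hypothetical (x0, x1, y0, y1) of the HUD bar

def is_underwater(frame: np.ndarray) -> bool:
    """Guess 'underwater' from the global blue tint of the frame."""
    r, g, b = frame[..., 0].mean(), frame[..., 1].mean(), frame[..., 2].mean()
    return b > 1.3 * r and b > 1.2 * g   # blue clearly dominates

def oxygen_ratio(frame: np.ndarray) -> float:
    """Fraction of the oxygen bar still filled (bright pixels in the HUD box)."""
    x0, x1, y0, y1 = OXYGEN_BAR
    bar = frame[y0:y1, x0:x1]
    return float((bar.mean(axis=-1) > 128).mean())

def oxygen_danger(frame: np.ndarray) -> bool:
    return is_underwater(frame) and oxygen_ratio(frame) <= OXYGEN_DANGER
```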
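A sketch of the plate-timing rule, assuming the game loop can tell whether Lara is currently standing on a plate; the time limit and the way an escape action is chosen are illustrative, not the project's actual logic.

```python
# Hypothetical timer that forces an escape action after lingering on a plate.
import time
import random

PLATE_TIME_LIMIT = 1.5                     # seconds Lara may stand on a plate
ESCAPE_ACTIONS = ("jump", "roll", "cross")

class PlateWatcher:
    def __init__(self) -> None:
        self.entered_at: float | None = None

    def update(self, on_plate: bool) -> str | None:
        """Return an escape action once Lara has lingered too long on a plate."""
        if not on_plate:
            self.entered_at = None          # stepping off resets the timer
            return None
        if self.entered_at is None:
            self.entered_at = time.monotonic()
        if time.monotonic() - self.entered_at > PLATE_TIME_LIMIT:
            self.entered_at = None
            return random.choice(ESCAPE_ACTIONS)
        return None
```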
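A sketch combining the Blocks and Life rules above: heal whenever HP is not full, and push a block only when no other move is possible. The function and action names are hypothetical, not the project's actual API.

```python
# Hypothetical decision rule: medipack first, block pushing strictly last.
def next_action(hp: int, possible_moves: list[str], facing_block: bool) -> str:
    """Pick the next move; pushing a block is a last resort."""
    if hp < 100:
        return "use_medipack"               # Life rule: heal below 100% HP
    other_moves = [m for m in possible_moves if m != "push_block"]
    if other_moves:
        return other_moves[0]               # anything else beats pushing
    # Blocks rule: push only when nothing else is possible, so Lara
    # doesn't push blocks forever (pulling isn't implemented yet).
    return "push_block" if facing_block else "search"
```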
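A sketch of the phonetic workaround for the cloned voice, assuming each comment is rewritten before it is sent to the TTS engine. Only the "lever" to "leever" pair comes from the project; the rest of the table and the function name are illustrative.

```python
# Hypothetical respelling pass run on each comment before text-to-speech.
import re

PHONETIC_FIXES = {
    "lever": "leever",     # confirmed example from the project
    "levers": "leevers",
}

def britishize(comment: str) -> str:
    """Respell words the cloned voice would pronounce with an American accent."""
    def fix(match: re.Match) -> str:
        word = match.group(0)
        repl = PHONETIC_FIXES[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(PHONETIC_FIXES) + r")\b"
    return re.sub(pattern, fix, comment, flags=re.IGNORECASE)

print(britishize("Pull the lever."))   # -> Pull the leever.
```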
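A sketch of the hint safeguard: after too long without progress, the run is stopped through the menu, a save state is created, and the player can point to an area to search. The stub helpers and the timeout are hypothetical stand-ins for the bot's real menu, save-state, and input code.

```python
# Hypothetical lost-timer that pauses the run and waits for a human hint.
import time

LOST_TIMEOUT = 15 * 60    # illustrative: give up after 15 minutes without progress

def pause_via_menu() -> None: ...        # open the in-game menu to stop the run
def create_save_state() -> None: ...     # keep a save state before the hint
def wait_for_human_hint() -> None: ...   # the player points to an area to search

class HintRequester:
    def __init__(self) -> None:
        self.lost_since: float | None = None

    def update(self, made_progress: bool) -> None:
        now = time.monotonic()
        if made_progress:
            self.lost_since = None       # any progress resets the lost timer
        elif self.lost_since is None:
            self.lost_since = now
        elif now - self.lost_since > LOST_TIMEOUT:
            pause_via_menu()
            create_save_state()
            wait_for_human_hint()
            self.lost_since = None
```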
Problem to solve: When hit by something, it sometimes ignores the fact that it got hurt and doesn't use a medipack. HP should go into the long-term memory, but that creates more problems.

Problem to solve: Water exploration is still problematic. It can spot changes in contrast, but texture identification is broken because the walls constantly move.

Problem to solve: It can remember that it has keys, but not where to put them, because it doesn't remember the path to the box where a key must go. I could put these boxes in the short-term memory, but then the bot would always check them. What is needed is something that moves a box from long-term to short-term memory only when it picks up a key, but that breaks down when there are multiple keys.

----------

Information: This isn't a live result; I explain it in the link above. The game is paused every time a comment is being generated, and the result you see here is all the parts merged together, and there are a lot of them. That means you couldn't have a normal conversation with Lara Croft, because each comment takes many minutes to generate. Lara is still an AI trying to make us believe she is aware; this AI is just good at it because I asked it to take Lara's personality into consideration, and it knows her personality because I told it everything I knew about Lara Croft. It sounds realistic because of the voice and the "live effect," but if you focus only on the comments you'll notice it's still a robot: everything it says or does can be explained. This video shows what happens when you use multiple free AIs to make something; it's an example meant to make people realize the potential of these technologies combined. Of course, I didn't make these technologies; they are available and you can try them.

Our human brain is still more efficient, and having a normal conversation in real life is far more impressive than this demo. We should take time to appreciate that, because the fact that we are all self-aware is more fascinating than anything. Enjoy this crazy capacity, and use AI for good.