Behold the future… Imagine a machine that can laugh and cry, learn and dream, and express its inner responses to how it perceives you to feel. It can express itself in a natural manner, and it also lets you visualise the mental imagery emerging in its mind.
The Laboratory for Animate Technologies is creating ‘live’ computational models of the face and brain by combining Bioengineering, Computational and Theoretical Neuroscience, Artificial Intelligence and Interactive Computer Graphics Research.
We are developing multidisciplinary technologies to create interactive, autonomously animated systems that will define the next generation of human-computer interaction and facial animation.
“If I had my time again I’d want to spend it in this lab” – Alvy Ray Smith, Co-founder of Pixar (on his visit to the Laboratory for Animate Technologies).
We believe the best way to simulate biological behaviour is through biological models. We model the brain processes which give rise to behaviour and social learning and use these to animate lifelike models of the face that can interact with you.
BabyX is an interactive animated virtual infant prototype: a computer-generated psychobiological simulation under development in the Laboratory for Animate Technologies. It is an experimental vehicle incorporating computational models of the basic neural systems involved in interactive behaviour and learning.
These models are embodied through advanced 3D computer graphics models of the face and upper body of an infant. The system can analyse video and audio inputs in real time to react to the caregiver’s or peer’s behaviour using behavioural models.
BabyX embodies many of the technologies we work on in the Laboratory and is under continuous development, in its neural models, sensing systems and also the realism of its real time computer graphics.
We create interactive models of neural systems and neuroanatomy, enabling visualisation of the internal processes of a computational simulation as they give rise to behaviour.
The Auckland Face Simulator is being developed to cost-effectively create extremely realistic, precisely controllable models of the human face and its expressive dynamics for psychology research.
We are developing the technology to simulate faces both inside and out. We simulate how faces move and how they look, and even their underlying anatomic structure.
We are developing a visual modelling methodology for the construction, visualisation and animation of neural systems called Brain Language [BL], a novel simulation environment for neural models.
This allows users to create animations and real-time visualisations from biologically based neural network models, allowing simulation effects to be viewed in an interactive context. Such a visual environment is not only suitable for visualising a simulation; it is also ideal for model development.
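Brain Language itself is not publicly documented, but the kind of biologically based model such an environment simulates and visualises can be illustrated with a minimal leaky integrate-and-fire neuron. This is a generic textbook sketch, not code from BL, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def simulate_lif(input_drive, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065):
    """Leaky integrate-and-fire neuron (illustrative parameters).

    input_drive: per-timestep input, in the same units as membrane
    potential (volts), so a drive of 0.020 pushes the steady state
    20 mV above rest.
    Returns the membrane-potential trace and the spike-time indices.
    """
    v = v_rest
    trace, spikes = [], []
    for t, drive in enumerate(input_drive):
        # Leak toward rest plus input, integrated with forward Euler.
        v += (-(v - v_rest) + drive) / tau * dt
        if v >= v_thresh:          # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset            # reset membrane potential
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold drive for one simulated second
# produces regular spiking.
drive = np.full(1000, 0.020)
trace, spikes = simulate_lif(drive)
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

A visual environment like BL would render a model of this sort live, so that the membrane-potential trace and spike events can be inspected while the simulation runs and interacts.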
We are developing computer-vision-based systems to track and analyse facial expression, together with state-of-the-art algorithms that solve for individual facial muscle activations.
Applications range from real-time expression recognition to microdynamic interaction analysis for psychology research.
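As a rough illustration of what "solving for muscle activation" can mean, the sketch below fits activation weights in a hypothetical linear muscle model from tracked landmark displacements. The lab's actual algorithms are not public; the names (`muscle_basis`, `solve_activations`) and the plain least-squares-with-clipping approach are assumptions made for this example only.

```python
import numpy as np

def solve_activations(muscle_basis, displacements):
    """Fit activation weights w so that muscle_basis @ w best matches the
    observed landmark displacements, then clip to the plausible range
    [0, 1]. A crude stand-in for a proper constrained solver."""
    weights, *_ = np.linalg.lstsq(muscle_basis, displacements, rcond=None)
    return np.clip(weights, 0.0, 1.0)

# Toy setup: 6 tracked landmark coordinates driven by 3 "muscles".
rng = np.random.default_rng(0)
muscle_basis = rng.normal(size=(6, 3))       # column = displacement per muscle
true_activations = np.array([0.8, 0.0, 0.3])
observed = muscle_basis @ true_activations   # noiseless synthetic measurement

estimated = solve_activations(muscle_basis, observed)
print(np.round(estimated, 3))                # recovers [0.8, 0.0, 0.3]
```

In practice a system like this would replace the synthetic `observed` vector with real landmark displacements from a face tracker, and would use a solver that enforces non-negativity directly rather than clipping after the fact.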