AI Detects Emotional Content For AV Safety

If you think autonomous vehicles will simply drive themselves once you tap your credit card on the meter, then you really need to get yourself to Sensors Expo 2018. Specifically, you should be there Wednesday, June 27 at 10 AM to attend Modar Alaoui’s session, “Vision AI for Human Behavior Understanding Inside Autonomous and Highly Automated Vehicles”.

Modar Alaoui is the founder & CEO of Eyeris, a company considered a leader in human visual understanding inside autonomous and highly automated vehicles. Modar is a technologist and expert in vision artificial intelligence (AI) for human behavior understanding, with a decade of experience bridging human-machine interaction and human behavior measurement through facial and emotion analytics, body pose, action, and activity recognition. A frequent speaker and keynote presenter on human-centered ambient intelligence as the next frontier in AI, Modar has received numerous technology and innovation awards and has been featured in media outlets such as the Wall Street Journal, the Huffington Post, CNBC, and Bloomberg.

SENSORS EXPO & CONFERENCE

Sensors 2018 Hits Silicon Valley June 26-28!

Join thousands of engineers this June in San Jose at the sensor industry’s biggest event! With 65+ Technical Sessions, 100+ Leading Speakers, and 300+ Interactive Exhibits, there’s more opportunity than ever to connect with this booming industry and the technologies driving it. See thousands of the newest technologies in action, learn about the latest applications, including AI, Autonomous Vehicles, IoT, and Medical, and develop invaluable partnerships at the only event dedicated to sensors, connectivity, and systems.

Mr. Alaoui’s session, “Vision AI for Human Behavior Understanding Inside Autonomous and Highly Automated Vehicles”, will cover the latest computer vision AI technologies that enable visual behavior understanding for both the driver and passengers of autonomous and highly automated vehicles (HAVs). Vision AI uses standard cameras to deliver emotion recognition from facial micro-expressions along with more than 30 other facial analytics, plus body pose tracking and action and activity recognition, all of which trigger support systems that react to variations in the driving experience.
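
To make that camera-to-action pipeline concrete, here is a minimal, hypothetical sketch in Python. It is not Eyeris’ software: it assumes OpenCV and a cabin-facing camera, uses OpenCV’s bundled face detector as a stand-in for trained facial-analytics models, and stubs out the state classifier and the support-system hook (the names classify_driver_state and trigger_support_system are illustrative, not a real API).

```python
# Hypothetical in-cabin vision pipeline sketch: detect the driver's face in
# each frame, classify a state, and trigger a support-system hook when the
# state warrants it. The classifier is a placeholder; a production system
# would run trained deep-learning models for emotion, pose, and activity.
import cv2

# Stand-in face detector bundled with OpenCV (not a proprietary model).
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def classify_driver_state(face_roi) -> str:
    """Placeholder for emotion / micro-expression analysis.

    A real system would run facial-analytics and body-pose networks here;
    this stub simply reports 'attentive' so the pipeline stays runnable.
    """
    return "attentive"


def trigger_support_system(state: str) -> None:
    """Hypothetical hook into driver-assistance logic."""
    if state in {"drowsy", "distracted", "stressed"}:
        print(f"ALERT: driver appears {state}; engaging support systems")


def run_pipeline(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)  # cabin-facing camera (assumed)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in FACE_DETECTOR.detectMultiScale(gray, 1.1, 5):
                state = classify_driver_state(gray[y:y + h, x:x + w])
                trigger_support_system(state)
    finally:
        cap.release()


if __name__ == "__main__":
    run_pipeline()
```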

If you want to learn more, there’s only one way to do it: get to Sensors Expo & Conference 2018 West and be sure to attend Modar’s session on Wednesday, June 27 at 10 AM.