AIthena

Connected, Cooperative and Automated Mobility (CCAM) solutions have emerged thanks to novel Artificial Intelligence (AI), which can be trained on huge amounts of data to produce driving functions with better-than-human performance under certain conditions.

The race on AI keeps building HW/SW frameworks to manage and process ever-larger real and synthetic datasets to train increasingly accurate AI models. However, AI remains largely unexplored with respect to explainability (interpretability of model functioning), privacy preservation (exposure of sensitive data), ethics (bias and wanted/unwanted behaviour), and accountability (responsibility for AI outputs). These features will establish the basis of trustworthy AI: a novel paradigm for fully understanding and trusting AI in operation, while using it at its full capabilities for the benefit of society.

AITHENA will contribute to building Explainable AI (XAI) into CCAM development and testing frameworks, researching three main AI pillars: data (real/synthetic data management), models (data fusion, hybrid AI approaches), and testing (physical/virtual X-in-the-Loop (XiL) set-ups with scalable MLOps). A human-centric methodology will be created to derive trustworthy AI dimensions from the needs identified by user groups in CCAM applications.

AITHENA will innovate by proposing a set of Key Performance Indicators (KPIs) on XAI, along with an analysis exploring the trade-offs between these dimensions. Demonstrators will show the AITHENA methodology in four critical use cases: perception (what the AI perceives, and why), situational awareness (what the AI understands about the current driving environment, including the driver's state), decision (why a certain decision is taken), and traffic management (how transport-level applications interoperate with AI-enabled systems operating at vehicle level). Created data and tools will be made available via European data-sharing initiatives (OpenData and OpenTools) to foster research on trustworthy AI for CCAM.

Project Information