Apart from building electric vehicles, Tesla has gained a reputation for its integrated computer platform, comprising a feature-rich infotainment system, remote services via Tesla's cloud and mobile app, and, most notably, an automated driving assistant. Enabled by a dedicated arm64-based system called Autopilot, Tesla offers different levels of "self-driving". The highest tier, "Full Self-Driving" (FSD), is sold to customers via in-car purchases and has been the subject of public debate.
Despite Autopilot's multiple cameras and machine learning (ML) models, accidents persist and shape public reporting on FSD. While the platform security of Autopilot's hardware protects the code and ML models from competitors, it also hinders third parties from accessing critical user data, e.g., onboard camera recordings and other sensor data, that could facilitate crash investigations.
This presentation shows how we rooted Tesla Autopilot using voltage glitching. The attack enables us to extract arbitrary code and user data from the system. Among other cryptographic keys, we extract a hardware-unique key used to authenticate Autopilot to Tesla's "mothership". Overall, our talk sheds light on Autopilot's security architecture and its gaps.
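
For readers unfamiliar with fault injection, the sketch below illustrates the general shape of a voltage-glitching campaign: sweeping the pulse delay and width until the target misbehaves in an exploitable way. It is a minimal, self-contained Python illustration with hypothetical Glitcher and Target stand-ins, not the hardware, parameters, or tooling used against Autopilot.

    import itertools
    import random

    class Glitcher:
        """Hypothetical stand-in for the hardware that briefly shorts the
        target's core voltage rail at a programmed offset after a trigger."""
        def arm(self, delay_ns, width_ns):
            self.delay_ns, self.width_ns = delay_ns, width_ns

    class Target:
        """Hypothetical stand-in for the device under test: reset it, let it
        boot, and classify what appeared on the debug console."""
        def reset_and_boot(self, glitcher):
            # Simulated behaviour: only a narrow delay/width window ever
            # corrupts the check under attack, and even then only sometimes.
            in_window = 480 <= glitcher.delay_ns <= 520 and 20 <= glitcher.width_ns <= 40
            if in_window and random.random() < 0.1:
                return "shell"          # glitch landed, code execution gained
            return "crash" if in_window else "normal"

    def sweep(glitcher, target, attempts_per_point=5):
        # Brute-force the two key fault parameters: when to pulse and for how long.
        for delay_ns, width_ns in itertools.product(range(0, 1000, 10), range(10, 100, 10)):
            glitcher.arm(delay_ns, width_ns)
            for _ in range(attempts_per_point):
                if target.reset_and_boot(glitcher) == "shell":
                    print(f"candidate glitch: delay={delay_ns} ns, width={width_ns} ns")
                    return delay_ns, width_ns
        return None

    if __name__ == "__main__":
        sweep(Glitcher(), Target())

In a real campaign, the search loop records every outcome per parameter point and repeats promising points many times, since successful faults are probabilistic.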
Before delving into Autopilot, we jailbroke Tesla's AMD-based infotainment platform and presented the attack at Black Hat USA 2023. That jailbreak enabled custom modifications to the root file system and temporarily allowed the activation of paid car features.