Tencent Reverse Engineers Tesla Autopilot

David Silver
Apr 2, 2019

This strikes me as so surprising that I feel like I have to preface it by stating that I’m pretty sure it’s not an April Fool’s joke.

Tencent, the Chinese Internet giant, has a division called the Keen Security Lab, which focuses on "cutting-edge security research." Their most recent project has been to hack Tesla vehicles, which they demonstrate in a video accompanying the write-up.

So far, so good. Tencent Keen Labs even published a 40-page write-up.

The hacks have made some press for demonstrating the potential for adversarial attacks: basically, tricking a neural network with carefully crafted inputs. Tencent researchers ultimately were able to place a few stickers in an intersection and trick the car into switching lanes into (potentially) oncoming traffic.
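The core idea of an adversarial attack is simple: nudge the input in the direction that most changes the network's output. Here is a minimal sketch of that idea against a made-up linear classifier (nothing here is Tesla's model; the weights and the gradient-sign step are purely illustrative):

```python
import numpy as np

def predict(w, b, x):
    """Return the classifier's score; positive means class 1."""
    return float(w @ x + b)

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.1])       # correctly classified: score = 0.3 > 0

# For a linear model, the gradient of the score w.r.t. the input is just w,
# so stepping against sign(w) is the cheapest way to push the score down.
eps = 0.4
x_adv = x - eps * np.sign(w)

print(predict(w, b, x))        # 0.3  -> class 1
print(predict(w, b, x_adv))    # -0.9 -> class flips
```

The stickers on the road play the role of `x_adv` here: a small, targeted change to the input that flips the network's decision, even though a human would barely register it.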

I am skeptical of adversarial attacks, at least as a practical threat to self-driving cars. But dwelling on the attack itself would mean ignoring the most interesting part of this.

In order to get this far, the researchers had to hack Tesla Autopilot, and in so doing, they appear to have discovered and published a surprising amount about how Autopilot works.

Want to know the architecture of Tesla's computer vision neural network? It's published on page 29 of the paper:

The paper states that, “for many major tasks, Tesla uses a single large neural network with many outputs, and lane detection is one of those tasks.” It seems like if you spent a little while investigating what was going on in that network, you might be able to figure out a lot about how Autopilot works.
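A "single large neural network with many outputs" usually means one shared trunk fanning out into per-task heads. Here is a toy sketch of that structure; the layer sizes, task names, and random weights are all invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

W_trunk = rng.standard_normal((16, 8))     # shared feature extractor
W_lanes = rng.standard_normal((8, 4))      # lane-detection head
W_objects = rng.standard_normal((8, 10))   # object-detection head
W_rain = rng.standard_normal((8, 1))       # rain-intensity head

x = rng.standard_normal(16)                # stand-in for camera features
features = relu(x @ W_trunk)               # one forward pass through the trunk...

outputs = {                                # ...fans out to every task head
    "lanes": features @ W_lanes,
    "objects": features @ W_objects,
    "rain": features @ W_rain,
}
print({k: v.shape for k, v in outputs.items()})
```

The design trade-off is that every task shares one expensive forward pass through the trunk, which is why probing that one network could reveal so much about so many Autopilot behaviors at once.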

The paper is forty pages long, and the English is good but not perfect, so it takes a little while to read. I confess I’ll need to spend more time with it to really understand the ins and outs.

But there are some more good nuggets:

“Both APE and APE-B are Tegra chips, same as Nvidia’s PX2. LB (lizard brain), is an Infineon Aurix chip. Besides, there is a Parker GPU (GP106) from Nvidia connected to APE. Software image running on APE and APE-B are basically the same, while LB has its own firmware.”

“(By the way, we noticed a camera called “selfie” here, but this camera does not exist on the Tesla Model S.)” [DS: Driver monitoring system? On what model? Supposedly they are using a Model S 75 for all of this research.]

“Those post processors are responsible for several jobs including tracking cars, objects and lanes, making maps of surrounding environments, and determining rainfall amount. To our surprise, most of those jobs are finished within only one perception neural network.”

“Tesla uses a large class for managing those functions(about “large”: the struct itself is nearly 900MB in v17.26.76, and over 400MB in v2018.6.1, not including chunks it allocates on the heap). Parsing each member out is not an easy job, especially for a stripped binary, filled with large class and Boost types. Therefore in this article, we won’t introduce a detailed member list of each class, and we also do not promise that our reverse engineering result here is representing the original design of Tesla.”

“Finally, we figured out an effective solution: dynamically inject malicious code into cantx service and hook the “DasSteeringControlMessageEmitter::finalize_message()” function of the cantx service to reuse the DSCM’s timestamp and counter to manipulate the DSCM with any value of steering angle.”
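The trick in that hook is to let the genuine code stamp the message, then swap in the attacker's steering angle while keeping the timestamp and rolling counter valid. Here is a hypothetical sketch of that idea in Python; the frame layout, field sizes, and function names are invented, since the real DSCM format is Tesla-internal:

```python
import struct

def finalize_message(timestamp, counter, steering_angle_deg):
    """Pack a made-up 'steering control' frame: u32 timestamp, u8 counter, f32 angle."""
    return struct.pack("<IBf", timestamp, counter & 0xFF, steering_angle_deg)

def hooked_finalize(original_frame, new_angle_deg):
    """Keep the genuine timestamp/counter, swap in an attacker-chosen angle."""
    timestamp, counter, _ = struct.unpack("<IBf", original_frame)
    return finalize_message(timestamp, counter, new_angle_deg)

genuine = finalize_message(1000, 7, 0.0)
forged = hooked_finalize(genuine, 45.0)
print(struct.unpack("<IBf", forged))   # timestamp/counter preserved, angle replaced
```

Because the receiver checks freshness via the timestamp and counter rather than the payload, reusing those fields is what lets the forged message pass as legitimate.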

“rather than using a simple, single sensor to detect rain or moisture, Tesla decided to use its second-generation Autopilot suite of cameras and artificial intelligence network to determine whether & when the wipers should be turned on.”

“We found that in order to optimize the efficiency of the neural network, Tesla converts the 32-bit floating point operations to the 8-bit integer calculations, and a part of the layers are private implementation [DS: emphasis mine], which were all compiled in the “.cubin” file. Therefore the entire neural network is regarded as a black box to us.”
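That float32-to-int8 conversion is a standard inference optimization. A minimal sketch of one common scheme, symmetric scaling, is below; Tesla's actual quantization inside those `.cubin` kernels is not public, so this is only an assumption about the general technique:

```python
import numpy as np

def quantize(x, scale):
    """Map float32 values to int8 by rounding x / scale."""
    q = np.clip(np.round(x / scale), -128, 127)
    return q.astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.array([0.51, -1.20, 0.03, 2.00], dtype=np.float32)
scale = np.max(np.abs(weights)) / 127.0   # fit the largest value into int8

q = quantize(weights, scale)
approx = dequantize(q, scale)
print(q)       # int8 values
print(approx)  # close to the originals, with small rounding error
```

The payoff is 4x smaller weights and much faster integer math on the GPU, at the cost of small rounding error per value. It also explains why the network became "a black box" to the researchers: quantized, privately implemented kernels are far harder to read than standard float32 layers.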

“The controller itself is kind of complex. It will receive tracking info, locate the car’s position in its own HD-Map, and provide control instructions according to surrounding situations. Most of the code in controller is not related to computer vision and only strategy-based choices.”

If this is all true, then the team reverse-engineered Tesla’s entire software stack on the way to implementing an adversarial neural network attack. The reverse engineering strikes me as the amazing part.
