We recently decided to give the new iPhone Xs and Xs Max a run for their money. What was of particular interest to us was the new A12 Bionic chip and how it handles ML tasks compared to older models.
Apple claimed that it can run up to 9x faster than the previous generation chip, so of course we had to see whether that's true :)
One of the ML tasks we have implemented in our apps is object detection, and we wanted to see how the new hardware handles this relatively light task. Honestly, we were amazed by the results, even though they weren't quite the 9x improvement Apple had claimed.
The last chart is the most informative, as it shows the average number of completed detections per second. The iPhone 8 and the iPhone X perform almost identically, which is no surprise, as they share the same A11 chip. The A12 in the iPhone Xs Max is a whopping 3 times more productive (detections per second), which translates into a much smoother and more seamless experience for the user.
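We haven't shared the exact benchmark code here, but a minimal sketch of how detections per second can be measured with Vision and Core ML might look like the following. The model, the test image, and the five-second timing window are all assumptions for illustration, not our actual harness:

```swift
import Vision
import CoreML
import QuartzCore

// Hypothetical benchmark: "model" stands in for whatever Core ML object
// detection model the app ships; "image" is a fixed test frame.
func detectionsPerSecond(model: MLModel,
                         image: CGImage,
                         window: Double = 5.0) throws -> Double {
    let vnModel = try VNCoreMLModel(for: model)
    let request = VNCoreMLRequest(model: vnModel)

    var completed = 0
    let start = CACurrentMediaTime()
    // Run detections back to back for a fixed wall-clock window.
    while CACurrentMediaTime() - start < window {
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request]) // synchronous on this thread
        completed += 1
    }
    return Double(completed) / (CACurrentMediaTime() - start)
}
```

Averaging over a multi-second window rather than timing a single detection smooths out warm-up costs (model compilation, first-run GPU/Neural Engine setup), which would otherwise skew the per-device comparison.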
We also decided to put the older-generation phones, which can't run the model on the GPU, into the mix just for fun, and the difference is again staggering: a single successful detection takes about two seconds, roughly 20 times slower than the A11 and 60 times slower than the A12.
So, from a user's POV, your phone is A LOT more capable if you're rocking one of the newly released models (Xs, Xs Max, Xr). And from a developer's POV, a lot more can be done when you're utilising the potential of the A12, so it's probably a good time to get one and start exploring!