Key idea: it's possible to design a "patch" (a small portion of an image) that is so salient to a neural network that it fools the network into misclassifying the overall larger image. The patch can even be printed out as a real-life sticker. See Figure 1 on page 2.
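Roughly, the patch pixels are optimized by gradient ascent on the model's score for the attacker's target class while the rest of the image stays fixed. A toy NumPy sketch of that idea, using a made-up random linear classifier rather than the paper's actual network (everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: 2-class logistic regression
# over a flattened 8x8 grayscale "image". A real attack would target
# a trained deep network instead of random weights.
W = rng.normal(size=(2, 64))

def scores(img):
    return W @ img.ravel()

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

img = rng.uniform(0.0, 1.0, size=(8, 8))  # benign image
target = 1                                 # class the patch should force

# Optimize only a 3x3 corner region ("the patch") by gradient ascent
# on log p(target); all other pixels are left untouched.
patch_region = (slice(0, 3), slice(0, 3))
patch = img[patch_region].copy()
for _ in range(200):
    adv = img.copy()
    adv[patch_region] = patch
    p = softmax(scores(adv))
    # d log p(target) / d pixels = (onehot(target) - p) @ W
    grad = ((np.eye(2)[target] - p) @ W).reshape(8, 8)[patch_region]
    patch = np.clip(patch + 0.5 * grad, 0.0, 1.0)  # keep pixels valid

adv = img.copy()
adv[patch_region] = patch
print(softmax(scores(img))[target], "->", softmax(scores(adv))[target])
```

Even with only 9 of 64 pixels under the attacker's control, the target-class probability climbs; the real paper additionally optimizes over random placements, rotations, and scales so the printed sticker works from many viewpoints.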
A similar real-world example is a truck with a video screen on the back playing footage of cars: https://i.pinimg.com/originals/4b/b7/78/4bb778ec36038e6f88ae...
This is going to be an increasingly serious problem, and it isn't limited to fooling image recognition at inference time: ML training data sets can also be poisoned, corrupting the algorithms trained on them.