In this work we present a method for optical-flow-based background subtraction from a single moving camera, with application to autonomous driving. Without any additional sensor or processing, the method detects the vast majority of moving entities in the environment. The goal is not to replace higher-level processing such as semantic classification, but rather to robustify GPS-denied visual-inertial localization and object detection at the earliest possible stage, with low computational cost and without constraining assumptions. The data were collected onboard an electric bus operating in the city of Reno.
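To make the general idea concrete, the sketch below illustrates one common way such a pipeline can be assembled: estimate dense flow between consecutive frames, fit a dominant camera-induced motion model, and flag pixels whose observed flow deviates from that model. This is only an illustrative assumption-laden example using standard OpenCV primitives (Farnebäck dense flow, Lucas-Kanade tracks, RANSAC affine estimation); the function names, parameters, and threshold are placeholders and do not reproduce the authors' actual method.

```python
# Illustrative sketch of optical-flow-based background subtraction from a
# moving camera. Thresholds and OpenCV building blocks are assumptions for
# demonstration, not the paper's implementation.
import cv2
import numpy as np

def moving_object_mask(prev_gray, curr_gray, thresh=2.0):
    # Dense optical flow between two consecutive grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Estimate the dominant (camera-induced) motion as an affine warp from
    # sparse feature correspondences; RANSAC inliers approximate the
    # static background.
    pts = cv2.goodFeaturesToTrack(prev_gray, 500, 0.01, 10)
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev, good_next = pts[st == 1], nxt[st == 1]
    A, _ = cv2.estimateAffine2D(good_prev, good_next, method=cv2.RANSAC)

    # Predicted background flow at every pixel under the estimated warp.
    h, w = prev_gray.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    pred_x = A[0, 0] * xs + A[0, 1] * ys + A[0, 2] - xs
    pred_y = A[1, 0] * xs + A[1, 1] * ys + A[1, 2] - ys

    # Pixels whose observed flow deviates from the background model are
    # flagged as independently moving foreground.
    residual = np.hypot(flow[..., 0] - pred_x, flow[..., 1] - pred_y)
    return residual > thresh
```

The design choice illustrated here is that ego-motion is absorbed by a global parametric model, so no additional sensors are required and the residual flow isolates independently moving objects at low computational cost.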