Motion detection in my hummingbird camera…
My goal in experimenting with the Raspberry Pi camera was to make an efficient and effective camera that can detect motion. Previous incarnations of the camera script simply took the differences in pixel values between adjacent frames, thresholded them at some value, and counted the number of pixels that exceeded it; if that count in turn exceeded a second threshold, motion was declared. What I discovered was that it was pretty hard to tune those two thresholds in a way that would not pick up changes due to wind motion in the grassy background.
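Here's roughly what that earlier approach looked like — a simplified sketch, not my actual script, and the two threshold values are purely illustrative:

```python
import numpy as np

# Illustrative values: tuning these two thresholds was the hard part.
PIXEL_THRESHOLD = 25    # how different a pixel must be to count as "changed"
COUNT_THRESHOLD = 200   # how many changed pixels constitute "motion"

def motion_detected(prev_frame, cur_frame):
    """Compare two grayscale frames (2-D uint8 arrays) pixel by pixel."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > PIXEL_THRESHOLD)
    return changed > COUNT_THRESHOLD
```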
But it turns out that the Raspberry Pi camera and its associated picamera software have some other tricks up their sleeves. In addition to recording the H.264-encoded video, you can record an alternative stream containing “motion data”: essentially some of the raw data that the H.264 encoder uses to do motion estimation. This stream provides four bytes for each 16×16 image block: two signed 8-bit displacements (in x and y) representing the estimated image velocity, and a 16-bit value giving the sum of absolute differences (SAD) of all the pixels in the block against the previous frame. Both would be rather expensive to compute (certainly in Python) but are quick and easy to extract when computed by the camera itself.
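Capturing this stream is straightforward; the following sketch follows the pattern in the picamera documentation, with filenames of my own choosing:

```python
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    # motion_output receives 4 bytes per 16x16 macroblock per frame.
    camera.start_recording('video.h264', motion_output='motion.data')
    camera.wait_recording(30)
    camera.stop_recording()
```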
To test my understanding, I modified my camera script to acquire this data, transferred it along with the normal video, and hacked together some scripts using Python and gnuplot to superimpose the data atop the background video (which I’ve faded a bit to make the data more legible). The black contours represent the difference (SAD) data, spaced at intervals of 100; the red vectors represent the motion vectors plotted atop the image.
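For the curious, the conversion boils down to unpacking that four-byte-per-block record with numpy. A sketch, assuming a 640×480 recording (note that picamera pads each row of macroblocks with one extra column):

```python
import numpy as np

COLS = (640 // 16) + 1   # picamera adds one extra column of data
ROWS = 480 // 16

# Per-macroblock record: signed x/y displacements plus the 16-bit SAD.
motion_dtype = np.dtype([('x', 'i1'), ('y', 'i1'), ('sad', 'u2')])

data = np.fromfile('motion.data', dtype=motion_dtype)
frames = data.reshape(-1, ROWS, COLS)

# Dump one frame as "col row dx dy sad" records, which gnuplot can
# draw with 'plot ... with vectors' and contour on the sad column.
with open('frame0.dat', 'w') as f:
    for r in range(ROWS):
        for c in range(COLS):
            mb = frames[0, r, c]
            f.write(f"{c} {r} {mb['x']} {mb['y']} {mb['sad']}\n")
```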
One thing leaps out at me immediately: the motion data is very good at finding the hummingbirds, even when the birds are relatively stationary. While this clip was not taken in particularly high wind, the vectors stay small for plant motion. Hence, it seems clear I could make a better motion detector by taking advantage of the precomputed motion vectors.
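picamera even makes it easy to do this analysis on the fly: the picamera.array module provides a PiMotionAnalysis class whose analyse() method is handed the motion data for each frame as a structured numpy array. Something along these lines should work (the two thresholds here are placeholders I’d still need to tune):

```python
import numpy as np
import picamera
import picamera.array

class VectorDetector(picamera.array.PiMotionAnalysis):
    def analyse(self, a):
        # Magnitude of each macroblock's motion vector.
        mag = np.sqrt(
            np.square(a['x'].astype(np.float32)) +
            np.square(a['y'].astype(np.float32)))
        # Declare motion when enough blocks are moving fast enough.
        if (mag > 10).sum() > 10:
            print('Motion detected!')

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    with VectorDetector(camera) as detector:
        camera.start_recording(
            '/dev/null', format='h264', motion_output=detector)
        camera.wait_recording(30)
        camera.stop_recording()
```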
A couple of issues remain, though. First, there are obvious dropouts where the contour data disappears entirely. I’m not sure what that is about: it could be a bug in my conversion script, or something more insidious. I’ll go back to the data and find out. Secondly, I’m not sure how capturing this motion data interacts with another picamera feature I use: its ability to record into circular memory buffers. When I figure out these two issues, I’ll post (and likely github) another version of my watcher script.
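For reference, the circular-buffer recording I’m talking about looks roughly like this (motion_detected() is a stand-in for whatever detector finally emerges); the open question is whether a motion_output stream can ride along with it:

```python
import picamera

def motion_detected():
    return False    # placeholder for the real vector-based test

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    # Keep roughly the last 20 seconds of video in memory.
    stream = picamera.PiCameraCircularIO(camera, seconds=20)
    camera.start_recording(stream, format='h264')
    try:
        while True:
            camera.wait_recording(1)
            if motion_detected():
                # Write out the buffered footage leading up to the event.
                stream.copy_to('event.h264')
    finally:
        camera.stop_recording()
```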
Hope this is of interest to someone out there.
Addendum: While doing more reading on the picamera github site, I found a link to this awesome script, which points out a lot of clever things that can be done. I’ll be swiping ideas from it soon!