Commit d87bd14

lucas kanade
1 parent fb9c11d commit d87bd14

File tree

4 files changed: +5 additions, -6 deletions

4 files changed

+5
-6
lines changed
Loading
Binary file not shown.

source/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.rst

Lines changed: 3 additions & 3 deletions
@@ -18,7 +18,7 @@ Optical Flow

 Optical flow is the pattern of apparent motion of image objects between two consecutive frames caused by the movemement of object or camera. It is 2D vector field where each vector is a displacement vector showing the movement of points from first frame to second. Consider the image below (Image Courtesy: `Wikipedia article on Optical Flow <http://en.wikipedia.org/wiki/Optical_flow>`_).


-.. image:: images/optical_flow_basic1.png
+.. image:: images/optical_flow_basic1.jpg
     :alt: Optical Flow
     :align: center

@@ -144,7 +144,7 @@ OpenCV provides all these in a single function, **cv2.calcOpticalFlowPyrLK()**.
     cap.release()


-(This code doesn't check the how correct are the next keypoints. So even if any feature point disappears in image, there is a chance that optical flow finds the next point which may look close to it. So actually for a robust tracking, corner points should be detected in particular intervals. OpenCV samples comes up with such a sample which finds the feature points at every 5 frames. It also run a backward-check of the optical flow points got to select only good one. Check ``samples/python2/lk_track.py``).
+(This code doesn't check how correct are the next keypoints. So even if any feature point disappears in image, there is a chance that optical flow finds the next point which may look close to it. So actually for a robust tracking, corner points should be detected in particular intervals. OpenCV samples comes up with such a sample which finds the feature points at every 5 frames. It also run a backward-check of the optical flow points got to select only good ones. Check ``samples/python2/lk_track.py``).

 See the results we got:

@@ -158,7 +158,7 @@ Dense Optical Flow in OpenCV

 Lucas-Kanade method computes optical flow for a sparse feature set (in our example, corners detected using Shi-Tomasi algorithm). OpenCV provides another algorithm to find the dense optical flow. It computes the optical flow for all the points in the frame. It is based on Gunner Farneback's algorithm which is explained in "Two-Frame Motion Estimation Based on Polynomial Expansion" by Gunner Farneback in 2003.

-Below sample shows how to find the dense optical flow using above algorithm. We get a 2-channel array with optical flow vectors, :math:`(u,v)`. We find their magnitude and direction. We color code the result to for better visualization. Direction corresponds to Hue value of the image. Magnitude corresponds to Value plane. See the code below:
+Below sample shows how to find the dense optical flow using above algorithm. We get a 2-channel array with optical flow vectors, :math:`(u,v)`. We find their magnitude and direction. We color code the result for better visualization. Direction corresponds to Hue value of the image. Magnitude corresponds to Value plane. See the code below:
 ::

     import cv2

source/py_tutorials/py_video/py_meanshift/py_meanshift.rst

Lines changed: 2 additions & 3 deletions
@@ -18,11 +18,10 @@ Meanshift

 The intuition behind the meanshift is simple. Consider you have a set of points. (It can be a pixel distribution like histogram backprojection). You are given a small window ( may be a circle) and you have to move that window to the area of maximum pixel density (or maximum number of points). It is illustrated in the simple image given below:

-.. image:: images/meanshift_basics.png
+.. image:: images/meanshift_basics.jpg
     :alt: Intuition behind meanshift
     :align: center
-    :height: 300pt
-    :width: 400pt
+


 The initial window is shown in blue circle with the name "C1". Its original center is marked in blue rectangle, named "C1_o". But if you find the centroid of the points inside that window, you will get the point "C1_r" (marked in small blue circle) which is the real centroid of window. Surely they don't match. So move your window such that circle of the new window matches with previous centroid. Again find the new centroid. Most probably, it won't match. So move it again, and continue the iterations such that center of window and its centroid falls on the same location (or with a small desired error). So finally what you obtain is a window with maximum pixel distribution. It is marked with green circle, named "C2". As you can see in image, it has maximum number of points. The whole process is demonstrated on a static image below:

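The window-to-centroid iteration described in the hunk above can be sketched in pure NumPy. This is an illustrative toy over a 2-D point set, not OpenCV's ``cv2.meanShift`` (which operates on a backprojection image); the function name, window shape, and defaults are our own assumptions.

```python
import numpy as np

def mean_shift_window(points, center, radius=1.0, max_iter=50, eps=1e-3):
    """Move a circular window to a local density maximum of `points`:
    repeatedly replace the window center with the centroid of the
    points currently inside the window, until it stops moving."""
    center = np.asarray(center, dtype=float)
    for _ in range(max_iter):
        # Points inside the current circular window
        inside = points[np.linalg.norm(points - center, axis=1) <= radius]
        if len(inside) == 0:
            break                            # empty window: nowhere to go
        new_center = inside.mean(axis=0)     # centroid of the window contents
        if np.linalg.norm(new_center - center) < eps:
            break                            # converged: center == centroid
        center = new_center
    return center
```

Each step is exactly the C1_o to C1_r move from the text: the window slides toward its own centroid until, at a density peak like C2, the two coincide.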