
A method of detecting and recognising hand gestures using OpenCV (2)



A previous article described a method of detecting and recognising hand gestures using OpenCV, or more precisely Emgu CV, the C# wrapper of OpenCV. Details of that method can be found here: a method of detecting and recognising hand gestures using OpenCV. However, Tongo’s project, wrapped in Emgu CV, may not be easy to configure, so this article gives a new example transferring that project to a plain C++ OpenCV environment.

The example code of the project, as well as the configurations, is described below.

Structure of the previous hand gesture detection and recognition project

To give a clear tutorial of how the previous project performed hand gesture detection and recognition, the functions used in the project are listed below; I hope they paint a clear picture of the project’s algorithm flow.

1) reading video files. The project did not read images from a live webcam; instead it used a video file as the source of video images. Frames were queried one by one with the function grabber.QueryFrame().

grabber = new Emgu.CV.Capture(@".\..\..\..\M2U00253.MPG");

2) extracting contours and hulls of hand gestures. In this part, the method used colour ranges in the YCrCb colour space to extract hand colours; the range can be adjusted dynamically according to the background environment. A set of functions was applied to improve the accuracy of the colour detection, such as cvErode and cvDilate. Here is a clip of the project’s source code.

public override Image<Gray, Byte> DetectSkin(Image<Bgr, Byte> Img, IColor min, IColor max)
{
    Image<Ycc, Byte> currentYCrCbFrame = Img.Convert<Ycc, Byte>();
    Image<Gray, Byte> skin = new Image<Gray, Byte>(Img.Width, Img.Height);
    int y, cr, cb, x1, y1, value;
    int rows = Img.Rows;
    int cols = Img.Cols;
    Byte[, ,] YCrCbData = currentYCrCbFrame.Data;
    Byte[, ,] skinData = skin.Data;
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
        {
            y = YCrCbData[i, j, 0];
            cr = YCrCbData[i, j, 1];
            cb = YCrCbData[i, j, 2];

            cb -= 109;
            cr -= 152;
            x1 = (819 * cr - 614 * cb) / 32 + 51;
            y1 = (819 * cr + 614 * cb) / 32 + 77;
            x1 = x1 * 41 / 1024;
            y1 = y1 * 73 / 1024;
            value = x1 * x1 + y1 * y1;
            // elliptical skin test on the shifted Cr/Cb values,
            // with a looser threshold for darker pixels
            if (y < 100)
                skinData[i, j, 0] = (value < 700) ? (byte)255 : (byte)0;
            else
                skinData[i, j, 0] = (value < 850) ? (byte)255 : (byte)0;
        }
    StructuringElementEx rect_6 = new StructuringElementEx(6, 6, 3, 3, Emgu.CV.CvEnum.CV_ELEMENT_SHAPE.CV_SHAPE_RECT);
    CvInvoke.cvErode(skin, skin, rect_6, 1);
    CvInvoke.cvDilate(skin, skin, rect_6, 2);
    return skin;
}

3) selecting the right contours. The skin detection generated many contours, including face regions with colours similar to the hands’, so the right contour needs to be picked out from all of them. In this project, the author used a simple method of comparing contour sizes: usually the hand had the biggest contour in the image.

Double Result1 = 0;
Double Result2 = 0;
while (contours != null)
{
    Result1 = contours.Area;
    if (Result1 > Result2)
    {
        Result2 = Result1;
        biggestContour = contours;
    }
    contours = contours.HNext;
}

4) if the biggest contour exists, counting the number of fingers on the detected contour. During this process, basic markings such as drawing a rectangle or ellipse are helpful to identify the right object. Counting fingers was realised via convexity defect detection: there are obvious defects between finger contours, alternating between up and down shapes.

#region defects drawing
for (int i = 0; i < defects.Total; i++)
{
    PointF startPoint = new PointF((float)defectArray[i].StartPoint.X,
                                   (float)defectArray[i].StartPoint.Y);
    PointF depthPoint = new PointF((float)defectArray[i].DepthPoint.X,
                                   (float)defectArray[i].DepthPoint.Y);
    PointF endPoint = new PointF((float)defectArray[i].EndPoint.X,
                                 (float)defectArray[i].EndPoint.Y);

    LineSegment2D startDepthLine = new LineSegment2D(defectArray[i].StartPoint, defectArray[i].DepthPoint);
    LineSegment2D depthEndLine = new LineSegment2D(defectArray[i].DepthPoint, defectArray[i].EndPoint);

    CircleF startCircle = new CircleF(startPoint, 5f);
    CircleF depthCircle = new CircleF(depthPoint, 5f);
    CircleF endCircle = new CircleF(endPoint, 5f);

    // Custom heuristic based on some experiments; double check it before use
    if ((startCircle.Center.Y < box.center.Y || depthCircle.Center.Y < box.center.Y)
        && (startCircle.Center.Y < depthCircle.Center.Y)
        && (Math.Sqrt(Math.Pow(startCircle.Center.X - depthCircle.Center.X, 2)
                      + Math.Pow(startCircle.Center.Y - depthCircle.Center.Y, 2)) > box.size.Height / 6.5))
    {
        fingerNum++;
        currentFrame.Draw(startDepthLine, new Bgr(Color.Green), 2);
        //currentFrame.Draw(depthEndLine, new Bgr(Color.Magenta), 2);
    }

    currentFrame.Draw(startCircle, new Bgr(Color.Red), 2);
    currentFrame.Draw(depthCircle, new Bgr(Color.Yellow), 5);
    //currentFrame.Draw(endCircle, new Bgr(Color.DarkBlue), 4);
}
#endregion

Transferring the previous project to C++

Based on the understanding of the hand gesture detection and recognition algorithm flow described above, it is possible to map the flow onto a C++ project. The C++ project uses similar OpenCV functions to realise the detection and recognition, and it follows the same detection flow. Below is a screenshot of the C++ project detecting and recognising simple finger gestures as numbers. The source code of this project is included at the bottom of this post, as well as on the download page.

Code example

See the example code on the download page.

