While playing around with OpenCV I became interested in hand gestures and wanted to build a tool that would recognize and translate American Sign Language gestures. I started with a simple script that counted extended fingers: it isolated the hand blob with a brightness threshold, then counted the negative spaces between the fingers.
This was a little limiting, and I soon found a project by Shubham Gupta that recognizes multiple hand shapes by training a machine-learning model on a dataset of hundreds of images. It worked pretty well under exactly the right lighting conditions, but its blob tracker segmented by pixel color (a skin-tone range) and didn't isolate the hand as reliably. So after updating parts of the script to work with the latest version of OpenCV, I brought in the blob tracker from my digit counter so it would track by brightness threshold instead.
Now that I have this up and running, I'm looking to retrain the model on other gestures, perhaps as controls for something else.