Proud Hands: An Android-Based Hand Gesture Recognition and Conversion System Using Image Processing, Image Segmentation and Feature Extraction
Keywords:
Android Technology, Human Computer Interaction, HCI, Assistive Technology, Image Processing, Hand Gesture Recognition

Abstract
The process of human communication has evolved, with many path-breaking inventions and discoveries heralding revolutions or a lift from one level to another. The main purpose of the project is to provide persons with speech, mobility, and physical impairments another means of communication using Android-based technology, image processing techniques (such as image acquisition, edge detection, and token detection), and human-computer interaction principles. The result is an Android application that recognizes hand gestures and converts them into corresponding speech output.
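The abstract does not publish the system's source, but the edge detection step it names can be illustrated with a minimal sketch in Java, the project's stated language. The Sobel operator below is a common choice for this step and is an assumption, not necessarily the authors' method; image acquisition and token detection are assumed to happen before and after it.

```java
// Minimal Sobel edge detection sketch (illustrative only, not the
// authors' implementation). Operates on a grayscale image given as a
// 2-D array of intensity values in [0, 255].
public class SobelEdge {
    // Returns the Sobel gradient magnitude for each interior pixel;
    // border pixels are left at zero for simplicity.
    public static int[][] gradientMagnitude(int[][] gray) {
        int h = gray.length, w = gray[0].length;
        int[][] mag = new int[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                // Horizontal gradient (responds to vertical edges).
                int gx = -gray[y - 1][x - 1] + gray[y - 1][x + 1]
                       - 2 * gray[y][x - 1] + 2 * gray[y][x + 1]
                       - gray[y + 1][x - 1] + gray[y + 1][x + 1];
                // Vertical gradient (responds to horizontal edges).
                int gy = -gray[y - 1][x - 1] - 2 * gray[y - 1][x] - gray[y - 1][x + 1]
                       + gray[y + 1][x - 1] + 2 * gray[y + 1][x] + gray[y + 1][x + 1];
                mag[y][x] = (int) Math.round(Math.sqrt(gx * gx + gy * gy));
            }
        }
        return mag;
    }
}
```

In a full pipeline, the resulting edge map would feed the hand-contour segmentation and feature extraction stages, after which the recognized gesture label would be passed to Android's text-to-speech facility for the speech output the abstract describes.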
Descriptive developmental design was used as the design model in developing the application. This design allows respondents to be described more accurately and assessments to be expressed in numeric scores. Since the application was developed for the Android platform and has highly volatile requirements, the Agile development model was used as the software methodology. The proponent used Android Studio as the main IDE, Java as the programming language, and SQLite as the database.
The application was assessed by IT professionals and by end-users: persons with speech, mobility, and physical impairments, together with their parents and caregivers. Purposive and availability sampling techniques were used to choose the respondents, and a validated survey questionnaire was used to evaluate the application.
Based on the survey results, respondents strongly agreed that the application performed all the tasks required of it. The proponent also concluded that the respondents assessed the application positively and that it will improve how people with speech, mobility, and physical impairments communicate with others.

This work is licensed under a Creative Commons Attribution 4.0 International License.