A CNN-Based System for Sign Language Recognition

Document Type: Original Article

Authors

1 The Higher Institute of Computers and Information Technology, Computer Science Department, El Shorouk Academy, Cairo, Egypt

2 The Higher Institute of Computers and Information Technology, Computer Science Department, El Shorouk Academy, Cairo, Egypt

Abstract

Sign language plays a vital role in enabling communication for individuals with hearing impairments, and learning it is essential for meaningful engagement with this community. This paper aims to design and develop an intuitive sign language learning system that leverages deep learning. To attain this objective, a usable and intuitive application is developed that adopts a Convolutional Neural Network (CNN) for accurate recognition of gestures from an American Sign Language (ASL) dataset. The proposed CNN model is trained on a comprehensive dataset of sign language gestures to achieve precise gesture classification. The proposed classification model achieved an accuracy of 94%, highlighting its effectiveness in reliably identifying and categorizing sign language gestures. An intuitive, user-friendly application has been developed that incorporates the proposed CNN model to process real-time input from a webcam and translate gestures into text output. This research therefore holds significant potential to enhance sign language recognition and communication through the application of deep learning. Its main purpose is to remove the communication barrier between deaf and mute individuals and the rest of society.
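The abstract describes a CNN pipeline that maps a hand-gesture image to a class label. As a minimal, framework-free sketch of the building blocks such a model stacks (convolution, ReLU, max pooling, and a dense softmax classifier), the following NumPy example traces the shape flow for one image. The input size (28x28 grayscale), the 29-class label set (A-Z plus space, delete, and nothing, as in common public ASL alphabet datasets), and all weights are illustrative assumptions; the paper does not specify the authors' architecture, and a trained model would learn these parameters rather than draw them at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image x with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling (edges that don't fit are cropped)."""
    h, w = x.shape[0] // s * s, x.shape[1] // s * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical input: a 28x28 grayscale hand image; 29 assumed ASL classes.
image = rng.random((28, 28))
kernel = rng.standard_normal((3, 3))          # one learned 3x3 filter
feat = max_pool(relu(conv2d(image, kernel)))  # 26x26 map pooled to 13x13
W = rng.standard_normal((feat.size, 29)) * 0.01  # dense classifier weights
probs = softmax(feat.ravel() @ W)             # class probabilities
pred = int(np.argmax(probs))                  # predicted gesture index
```

In a real system the convolution/pooling stages would be repeated and the weights trained by backpropagation; a deep learning framework would also batch the webcam frames rather than process one image at a time.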
