A working web application that translates spoken Arabic into Moroccan Sign Language videos, enabling easier communication for the hearing-impaired.
This project is part of my end-of-second-year project at ENSIAS, Mohammed V University. While I was exploring potential project ideas, Professor M. Naoum suggested building a mobile app to facilitate communication between the Moroccan deaf community and hearing individuals. A brief review of platforms like GitHub and Google Scholar showed that existing projects focus on converting sign language videos into written or spoken English using CNNs; far fewer tackle the reverse direction of translating spoken language into sign language, especially Arabic speech into Moroccan Sign Language. This project was supervised by Professor M. Lazaar.
Image caption (July 21, 2015): Few resources exist for deaf students in Morocco, making assistive devices important for classrooms.
Image source: https://www.nsf.gov/news/mmg/mmg_disp.jsp?med_id=78950&from=
Motivated by this gap, I started the project myself with a basic web application covering a limited set of Moroccan Sign Language words. It is meant as a starting point for open-source contributors in Morocco, a primary audience for this project.
Looking ahead, I plan to evolve this into a mobile application, expanding the range of Moroccan signs to cover more everyday spoken vocabulary.
Additionally, I aim to develop an automatic speech recognition model specifically for the Moroccan dialect. This broader approach could benefit around 40 million potential users, catering to those who speak the dialect but not formal Arabic. The goal remains to create a comprehensive solution meeting the communication needs of the Moroccan market.
Every part of this project is sample code which shows how to do the following:
- Create a speech-to-video translator for Arabic Speech (AS) to Moroccan Sign Language (MSL) using Python.
- Develop a Messenger-like web application using Flask, HTML, CSS, and JavaScript (a minimal sketch follows this list).
- Convert Arabic speech to Arabic text via Automatic Arabic Speech Recognition, using wav2vec2-large-xlsr-53-arabic through the Hugging Face Inference API.
- Leverage NLP techniques to preprocess Arabic text using the Natural Language Toolkit (NLTK) and Regular Expressions (regex).
- Create an MSL video retriever and concatenator using OpenCV.
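To make the web-application item above concrete, here is a minimal, hypothetical Flask sketch; the /translate route and the speech_to_msl_video helper are illustrative names, not the repository's actual code.

```python
from flask import Flask, jsonify, render_template, request

app = Flask(__name__)

def speech_to_msl_video(audio_file) -> str:
    # Placeholder for the full pipeline (Steps 1-3 below): ASR, Arabic
    # text preprocessing, then MSL video retrieval and concatenation.
    return "static/output.mp4"

@app.route("/")
def index():
    # Serve the Messenger-like chat page; the HTML/CSS/JavaScript live
    # in the templates/ and static/ folders of a standard Flask layout.
    return render_template("index.html")

@app.route("/translate", methods=["POST"])
def translate():
    # Receive recorded Arabic speech from the browser and return the
    # path of the generated MSL video for the page to play back.
    video_path = speech_to_msl_video(request.files["audio"])
    return jsonify({"video": video_path})

if __name__ == "__main__":
    app.run(debug=True)
```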
- Install Git
  ```bash
  sudo apt update
  sudo apt install git
  ```
- Navigate to the directory where you want the project to live
  ```bash
  cd path/to/desired/location
  ```
- Clone this repository
  ```bash
  git clone https://github.com/Heyyassinesedjari/QuestionAnsweringWebApp.git
  ```
- Install Conda
  ```bash
  wget https://repo.anaconda.com/miniconda/Miniconda3-4.12.0-Linux-x86_64.sh
  bash Miniconda3-4.12.0-Linux-x86_64.sh
  source ~/.bashrc
  ```
- Create a Conda environment
  ```bash
  conda create --name myenv python=3.9.12
  ```
- Activate the Conda environment and install all requirements
  ```bash
  conda activate myenv
  conda install --file path_to_requirements.txt
  ```
- Open the api_var.json file located in application/static/ and replace 'Your_Hugging_Face_API_key' in the authorization field with your actual Hugging Face API key (a sketch of how the app might read this file follows these steps).
- Run the app
  ```bash
  python app.py
  ```
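For reference, here is a minimal sketch of how the app could read that key at startup; the field name inside api_var.json is an assumption for illustration, so check the actual file in the repository.

```python
import json

# Load the Hugging Face API key stored in application/static/api_var.json.
# NOTE: the field name "authorization" below is an assumption; check the
# actual file for the real key names.
with open("application/static/api_var.json") as f:
    api_vars = json.load(f)

# The Hugging Face Inference API expects a header of the form:
#   Authorization: Bearer <your_api_key>
HEADERS = {"Authorization": api_vars["authorization"]}
```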
AS to MSL System Architecture Diagram
Step 1: Automatic Speech Recognition with Wav2Vec2
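A minimal sketch of this step, assuming the publicly hosted jonatasgrosman/wav2vec2-large-xlsr-53-arabic checkpoint behind the Hugging Face Inference API; the app may reference a different repo id, and the function name is illustrative.

```python
import requests

# Illustrative sketch: the repo id below is the public
# jonatasgrosman/wav2vec2-large-xlsr-53-arabic checkpoint; the app may
# point at a different one. Replace the placeholder key with yours.
API_URL = ("https://api-inference.huggingface.co/models/"
           "jonatasgrosman/wav2vec2-large-xlsr-53-arabic")
HEADERS = {"Authorization": "Bearer Your_Hugging_Face_API_key"}

def arabic_speech_to_text(audio_path: str) -> str:
    # The Inference API accepts raw audio bytes for ASR models and
    # returns JSON of the form {"text": "<transcript>"}.
    with open(audio_path, "rb") as f:
        response = requests.post(API_URL, headers=HEADERS, data=f.read())
    response.raise_for_status()
    return response.json()["text"]
```

Calling arabic_speech_to_text("recording.wav") returns the Arabic transcript that Step 2 then cleans.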
Step 2: Arabic Text Preprocessing
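The exact cleaning rules live in the repository; the following is a representative sketch of NLTK-plus-regex Arabic preprocessing (diacritic stripping, non-letter removal, tokenization, and stopword filtering).

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# One-time downloads for the tokenizer and the Arabic stopword list.
nltk.download("punkt")
nltk.download("stopwords")

# Arabic diacritics (tashkeel), small marks, and the tatweel character.
DIACRITICS = re.compile(r"[\u0617-\u061A\u064B-\u0652\u0640]")

def preprocess_arabic(text: str) -> list[str]:
    # Strip diacritics so vocalized and unvocalized spellings map to
    # the same dictionary entry.
    text = DIACRITICS.sub("", text)
    # Keep only Arabic letters and whitespace.
    text = re.sub(r"[^\u0621-\u064A\s]", " ", text)
    # Tokenize, then drop Arabic stopwords that have no sign of their own.
    tokens = word_tokenize(text)
    return [t for t in tokens if t not in stopwords.words("arabic")]
```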
Step 3: Video Retrieval and Concatenation
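A compact sketch of this step, assuming one short clip per sign stored as videos/<word>.mp4 and clips sharing a common resolution; the real storage layout may differ.

```python
import cv2

def concatenate_msl_videos(words: list[str], out_path: str = "output.mp4") -> str:
    # Retrieve the clip for each word in order and append its frames
    # to a single output video.
    writer = None
    for word in words:
        cap = cv2.VideoCapture(f"videos/{word}.mp4")
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if writer is None:
                # Initialize the writer from the first frame's size and
                # the source clip's frame rate (fall back to 25 fps).
                h, w = frame.shape[:2]
                fps = cap.get(cv2.CAP_PROP_FPS) or 25
                fourcc = cv2.VideoWriter_fourcc(*"mp4v")
                writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
            writer.write(frame)
        cap.release()
    if writer is not None:
        writer.release()
    return out_path
```

Writing frames one at a time keeps memory use flat even for long sentences, at the cost of re-encoding every clip.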
Demo video: https://drive.google.com/file/d/1ZiisKXhRhfLi_eq9hodVj76eoho48SiF/view?usp=sharing