The project aims to create a machine-learning model that classifies the numerous hand gestures used in sign-language fingerspelling. In this user-independent model, classification algorithms are trained on one set of image data and tested on a completely separate set. Depth images are used for the dataset; owing to their quicker pre-processing, they produced better results than some of the prior literature [4]. Machine-learning techniques such as Convolutional Neural Networks (CNNs) are applied to the datasets. The CNN model is pre-trained on the ImageNet dataset in an effort to improve its accuracy. However, only a small dataset was used for pre-training, resulting in 15% accuracy.
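As a rough illustration (not the project's actual pipeline), the core building block of a CNN, a 2-D convolution applied to a depth image, can be sketched in NumPy. The image and kernel below are invented for demonstration only:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: the basic operation a CNN layer
    applies to an input image to produce a feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the kernel with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical 8x8 "depth image" (values stand in for depth readings)
depth_image = np.arange(64, dtype=float).reshape(8, 8)

# A simple 3x3 vertical-edge kernel, chosen purely for illustration
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]], dtype=float)

feature_map = conv2d(depth_image, edge_kernel)
print(feature_map.shape)  # (6, 6)
```

In a real CNN such kernels are learned from data rather than hand-chosen, and pre-training on ImageNet initializes them from a large image corpus before fine-tuning on the gesture dataset.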