People with disabilities should not be defined by their condition; rather, it is often the environment that is disabling. As automatic Sign Language Recognition (SLR) advances, digital technology can provide more enabling environments. Many current approaches to SLR concentrate on classifying static hand gestures, yet communication is a temporal activity, as many dynamic gestures demonstrate. As a result, temporal information produced during the delivery of a gesture is rarely considered in SLR. The studies in this paper address the challenge of sign language gesture identification in terms of how dynamic gestures vary during delivery, and the goal of this research is to assess how single and mixed feature sets affect a machine learning model's classification performance. A complex classification task is presented, with 18 common gestures captured using a Leap Motion Controller sensor. A 0.6-second time window yields two sets of features: statistical descriptors and spatio-temporal properties. Features from each set are ranked using ANOVA F-scores and p-values, then grouped into bins of 10 features each, up to a maximum of 250 features. The best statistical model selected 240 features and achieved an accuracy of 85.96%, the best spatio-temporal model selected 230 features and achieved 80.98%, and the best mixed-feature model selected 240 features from each set and achieved 86.75%. Examining all three sets of results, the overall distribution indicates that mixed-feature inputs of any size raise the minimum outcomes compared with inputs of any size drawn from either single feature set.
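The feature-selection procedure described above (ranking candidate features by ANOVA F-score, then evaluating models on bins of 10 features at a time) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the data is random stand-in data, the classifier choice and evaluation scheme are assumptions, and the feature count is scaled down for brevity.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 18 gesture classes, 100 candidate features
# (the paper uses up to 250 features per set).
X = rng.normal(size=(180, 100))
y = rng.integers(0, 18, size=180)

best_k, best_score = None, -np.inf
for k in range(10, 101, 10):  # bins of 10 features each
    # f_classif computes ANOVA F-scores and p-values per feature;
    # SelectKBest keeps the k highest-scoring features.
    selector = SelectKBest(score_func=f_classif, k=k)
    X_k = selector.fit_transform(X, y)
    # Illustrative classifier; the paper does not specify this model here.
    clf = RandomForestClassifier(n_estimators=25, random_state=0)
    score = cross_val_score(clf, X_k, y, cv=3).mean()
    if score > best_score:
        best_k, best_score = k, score

print(f"best bin size: {best_k}, mean CV accuracy: {best_score:.4f}")
```

In the paper's experiments this sweep is run separately for the statistical set, the spatio-temporal set, and the mixed set, and the per-bin accuracies are compared across the three.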