In October 2019, an Amazon employee in Melbourne, Australia, bumped into another person while cycling on the road. As she was assuring the person that she would help, she realised that he was deaf and mute and had no idea what she was saying.
The awkward situation could have been avoided if assistive technology had been on hand to facilitate communication between the two parties. Following the incident, a team led by Santanu Dutt, head of technology for Southeast Asia at Amazon Web Services, got down to work.
Within about ten days, Dutt’s team built a machine learning model trained on sign languages. Using images of a person gesturing in sign language captured by a camera, the model could recognise the gestures and translate them into text. The model could also convert spoken words into text for a deaf-mute person to read.
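The article does not describe the model's architecture or label set, but the recognition step it outlines amounts to classifying a camera frame and emitting the text for the most probable sign. A minimal sketch of that final mapping, with an entirely illustrative label set and scores:

```python
# Hypothetical sketch of turning a gesture classifier's output into text.
# The labels and scores below are made up for illustration; the article
# does not specify the model's classes or outputs.

GESTURE_LABELS = ["hello", "thank_you", "help", "yes", "no"]

def gesture_to_text(scores):
    """Return the text for the highest-scoring sign class."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return GESTURE_LABELS[best]

# A camera frame run through the trained model would yield per-class
# scores like these (illustrative values):
frame_scores = [0.05, 0.10, 0.70, 0.10, 0.05]
print(gesture_to_text(frame_scores))  # → help
```

In practice the scores would come from a trained image classifier rather than a hard-coded list, but the text-generation step is this simple lookup.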
Dutt said the model can also be customised to translate speech into sign language, as the machine learning services and application programming interfaces (APIs) are open and available, though he has not seen that demand yet. “But once you write a small bit of code, training the machine learning model is easy,” he said.
There is still more work to be done. As the model was trained on signs gestured against a white background, its efficacy in its current form would be limited in real-world use.
“Our team had limited time to showcase this and we wanted to bump up something to showcase for experimental purposes,” Dutt said, adding that organisations can use tools such as Amazon SageMaker to edit and train the model with more images and videos to recognise a larger variety of environments.
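One common way to broaden a model trained on a plain background, consistent with Dutt's point about retraining on more varied environments, is background augmentation: compositing the signing hand onto different scenes before retraining. A toy sketch of the idea, with the near-white-pixel heuristic and pixel values assumed for illustration:

```python
# Illustrative background-augmentation sketch: treat near-white pixels in a
# gesture image as transparent and paste the rest onto a new background.
# The white-threshold heuristic and the toy 2x2 images are assumptions,
# not details from the article.

WHITE = (255, 255, 255)

def composite(foreground, background, white_threshold=250):
    """Replace near-white foreground pixels with the background pixel."""
    out = []
    for fg_row, bg_row in zip(foreground, background):
        row = []
        for fg_px, bg_px in zip(fg_row, bg_row):
            if all(c >= white_threshold for c in fg_px):
                row.append(bg_px)   # background shows through
            else:
                row.append(fg_px)   # keep the signing hand
        out.append(row)
    return out

# 2x2 toy images: top-left pixel is the "hand", the rest is white background
hand = [[(10, 20, 30), WHITE], [WHITE, WHITE]]
scene = [[(0, 0, 0), (1, 1, 1)], [(2, 2, 2), (3, 3, 3)]]
augmented = composite(hand, scene)
```

A real pipeline would use proper segmentation masks rather than a colour threshold, but the effect is the same: each training image is seen against many environments instead of one.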
As the training process is intensive, Dutt said organisations with limited resources can use Amazon SageMaker Ground Truth to build training datasets for such machine learning models quickly. Besides automatic labelling, Ground Truth also provides access to human labellers through the Amazon Mechanical Turk crowdsourcing service.
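A Ground Truth labelling job takes an input manifest: a JSON Lines file in which each line points at one image in S3 via a "source-ref" key. A short sketch of building such a manifest; the bucket and file names are made up:

```python
import json

# Sketch of building an input manifest for a SageMaker Ground Truth
# labelling job. Each line is a JSON object with a "source-ref" key
# naming an image in S3. The S3 URIs here are hypothetical.

def build_manifest(image_uris):
    """Return manifest file contents: one JSON line per image."""
    return "\n".join(json.dumps({"source-ref": uri}) for uri in image_uris)

uris = [
    "s3://example-bucket/signs/img_0001.jpg",
    "s3://example-bucket/signs/img_0002.jpg",
]
manifest = build_manifest(uris)
print(manifest)
```

The manifest is uploaded to S3 and referenced when the labelling job is created; Ground Truth then routes each image to automatic labelling or to human labellers such as Mechanical Turk workers.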
This will also help to improve the model’s accuracy rate. “The more data you have, the more accurate the model gets,” Dutt said, adding that developers can set confidence levels and reject results that fall below a certain level of accuracy.
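The confidence-threshold idea Dutt describes can be sketched in a few lines: keep only predictions whose confidence meets a minimum level. The prediction structure and the 0.8 threshold below are illustrative assumptions:

```python
# Sketch of rejecting low-confidence results, as Dutt describes.
# The prediction dictionaries and the threshold value are illustrative.

def filter_predictions(predictions, min_confidence=0.8):
    """Drop any prediction whose confidence falls below the threshold."""
    return [p for p in predictions if p["confidence"] >= min_confidence]

preds = [
    {"label": "hello", "confidence": 0.95},
    {"label": "help", "confidence": 0.55},
    {"label": "thanks", "confidence": 0.82},
]
kept = filter_predictions(preds)
# "hello" and "thanks" survive; "help" falls below the 0.8 threshold
```

An application might show the user nothing, or ask them to repeat the sign, when a frame's best prediction is rejected this way.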
Dutt said AWS’s public sector team has engaged non-profit organisations in Australia to conduct a proof of concept using the machine learning model. It is also supporting non-profits in other countries through credits that offset the cost of using AWS services to train and deploy the model.