Instagram has developed machine learning algorithms that make it easier for staff to deal with instances of cyberbullying. Taking it a step further, the company integrated algorithms capable of identifying comments that can be deemed harassment and removing them automatically. It achieved this by having staff flag photos as bullying or not bullying, giving the AI a labeled set of examples from which to learn what content counts as offensive; the trained model then identifies such posts and comments and takes them down accordingly.
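Instagram's actual system is proprietary, but the idea of learning from staff-labeled examples can be illustrated with a toy text classifier. The sketch below is purely hypothetical: it trains a tiny from-scratch Naive Bayes model on a handful of made-up labeled comments, the same train-on-flagged-examples pattern the article describes, not Instagram's real pipeline.

```python
import math
from collections import Counter

# Illustrative only: a minimal Naive Bayes classifier trained on
# human-labeled comments, mimicking the "staff flag it, AI learns it"
# approach described above. Labels and example data are invented.

def train(labeled_comments):
    """labeled_comments: list of (text, label), label in {'bullying', 'ok'}.
    Returns per-label word counts and per-label document totals."""
    counts = {"bullying": Counter(), "ok": Counter()}
    totals = {"bullying": 0, "ok": 0}
    for text, label in labeled_comments:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher smoothed log-likelihood."""
    best_label, best_score = None, float("-inf")
    vocab = set(counts["bullying"]) | set(counts["ok"])
    for label in counts:
        # log prior + sum of log word likelihoods (Laplace smoothing)
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Tiny invented training set standing in for staff-flagged content.
data = [
    ("you are worthless and ugly", "bullying"),
    ("nobody likes you loser", "bullying"),
    ("what a great photo", "ok"),
    ("love this so much", "ok"),
]
counts, totals = train(data)
print(classify("you are a loser", counts, totals))  # bullying
```

A production system would use far richer models and features, but the core loop is the same: humans label, the model generalizes, new content gets scored.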
However, the social media giant doesn't plan on stopping there. Two new features are being developed to combat cyberbullying more accurately. The first is a more advanced version of the AI that scans posts for offensive content, focused specifically on comments: before someone posts an inappropriate comment, they are alerted that something may be wrong with it, giving them a chance to reconsider. The second is a 'restrict' feature, which has already been implemented. Rather than blocking someone outright, restricting them stops their comments from appearing on your posts unless you approve them, so content you do not want other people to see is filtered out.
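The restrict mechanic described above can be sketched as a simple approval queue. This is a hypothetical model, not Instagram's code: the `Post` class, its method names, and the usernames are all invented for illustration. Comments from restricted users go into a pending list the post owner can approve, instead of the author being blocked entirely.

```python
# Hypothetical sketch of the 'restrict' flow: comments from a restricted
# user are held as pending until the post owner approves them, rather
# than the user being blocked outright. Names are illustrative.

class Post:
    def __init__(self, owner):
        self.owner = owner
        self.visible_comments = []   # shown to everyone
        self.pending_comments = []   # seen only by the owner until approved

    def add_comment(self, author, text, restricted_users):
        # Restricted authors can still comment, but their comments
        # are filtered into the pending queue instead of going public.
        if author in restricted_users:
            self.pending_comments.append((author, text))
        else:
            self.visible_comments.append((author, text))

    def approve(self, author, text):
        # Owner decides a pending comment is fine; make it public.
        self.pending_comments.remove((author, text))
        self.visible_comments.append((author, text))

restricted = {"troll_account"}
post = Post(owner="alice")
post.add_comment("bob", "Nice shot!", restricted)
post.add_comment("troll_account", "mean comment", restricted)
print(len(post.visible_comments), len(post.pending_comments))  # 1 1
```

The design choice worth noting is that restriction is invisible to the restricted user, which is why it is framed as a gentler alternative to blocking.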
It’s interesting to see the approach and measures Instagram is taking to tackle the cyberbullying problem the platform faces. With a younger demographic being the app’s fastest-growing user base, the company has good reason to implement security and safety measures that protect each individual’s experience on the app and, most importantly, their well-being.
Credit: Google News