(AFP)
Global photo and video-sharing platform Instagram on Tuesday introduced a set of new safety features in its latest effort to fight bullying and other forms of online harassment.
The new features come as the platform has faced criticism worldwide for becoming a hotbed of bullying, especially among teens and young adults.
The Facebook Inc.-owned application said it is making it easier to block multiple people at once and offering tools to restrict who can tag users.
Among the new tools is an option that lets Instagram users block comments on their posts from a specific user without that user knowing, according to Instagram.
Instagram said blocked users will still be able to see their own comments on a post, but those comments will not be visible to anyone else.
The company also rolled out a tool that lets users control which comments can appear on their posts, helping them guard against unwanted followers.
Instagram also said it uses both artificial intelligence (AI) technology and human reviewers to help curb abusive content and spam on the platform.
Machine learning-based technology first detects content that violates the company's guidelines and flags it to a team of human reviewers, who decide whether it breaches Instagram's policy. Another AI tool, to be rolled out later this month, identifies words and phrases that have been reported as offensive in the past, according to Instagram.
When the tool detects offensive language in a comment, it prompts the writer, giving the account user a chance to rework the comment before posting it.
In an effort to prevent suicide and self-harm, Instagram said it does not display hashtags related to such content.
Instagram said it has worked with agencies around the world on suicide prevention, including by inking a partnership with the Korea Suicide Prevention Center. (Yonhap)