Facebook continues to improve the algorithms it uses to detect and remove hate speech and other posts that violate its rules, including misleading information about the coronavirus pandemic, reports 3DNews. In the second quarter, Facebook removed more than 22.5 million posts that violated the platform’s rules.
In the past, content moderation on Facebook was handled mainly by contract employees. Because of the coronavirus pandemic, many of them have had to work from home, while moderation duties can usually be performed only in the office.
Facebook VP Guy Rosen said the company is now relying more on technology to manage the workload placed on the team of moderators. Facebook uses an AI-based ranking system to identify high-priority content that should be reviewed by moderators first.
“Artificial intelligence helps ensure that, even as we reduce the number of moderators, we can still focus on the most critical categories that require review and intervention,” said Mr Rosen. He also noted that improvements in the technologies used to detect hate content have raised the rate of proactive detection to 95%. As a result, more than 22.5 million posts that violated the network’s rules were removed in the second quarter. About 3.3 million posts were removed on Instagram over the same period.
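The ranking system described above can be pictured as a priority queue: an AI model scores each flagged post, and human moderators review the highest-scoring items first. The sketch below is a loose illustration of that idea, not Facebook's actual system; the category weights, scoring formula, and signal names (`model_confidence`, `virality`) are all hypothetical assumptions.

```python
import heapq

# Illustrative severity weights per violation category (assumed values).
SEVERITY = {"hate_speech": 0.9, "covid_misinfo": 0.8, "spam": 0.3}

def priority_score(category, model_confidence, virality):
    """Combine signals into one score; higher = review sooner."""
    return SEVERITY.get(category, 0.1) * model_confidence * (1 + virality)

class ReviewQueue:
    """Max-priority queue of posts awaiting human review."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps insertion order stable

    def push(self, post_id, category, model_confidence, virality):
        score = priority_score(category, model_confidence, virality)
        # heapq is a min-heap, so negate the score for max-first ordering.
        heapq.heappush(self._heap, (-score, self._counter, post_id))
        self._counter += 1

    def pop(self):
        """Return the post id with the highest priority score."""
        _, _, post_id = heapq.heappop(self._heap)
        return post_id

q = ReviewQueue()
q.push("post_a", "spam", 0.95, 0.1)
q.push("post_b", "hate_speech", 0.9, 0.5)
q.push("post_c", "covid_misinfo", 0.7, 2.0)
print(q.pop())  # the highest-scoring post surfaces first
```

A scheme like this lets a shrunken moderation team spend its limited review time on the content most likely to cause harm, which matches the trade-off Rosen describes.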
NIX Solutions notes that from April to June of this year, over 7 million posts spreading dangerous or misleading information about the coronavirus pandemic were deleted from Facebook and Instagram.