Addressing the spread of harmful content on social media platforms (SMPs) is a policy priority for regulators and governments around the world. Most of the conversation thus far has focused on developing appropriate content moderation strategies.
In India, the Information Technology Rules (2021) frame SMPs as intermediaries and outline their responsibilities with regard to content moderation. The Rules also mandate the availability of grievance redressal mechanisms and place additional obligations on ‘significant social media intermediaries.’ These include a responsibility to prevent the posting of certain types of harmful content, as well as a responsibility to swiftly remove any content that has been deemed unlawful under the Rules.
This policy brief examines the limitations of existing content moderation regimes in addressing the spread of harmful content on social media driven by algorithmic amplification, and identifies additional strategies that can be utilised to limit this spread. The proposed strategies do not seek to replace content moderation rules but rather to fill the gaps that have emerged from the application of existing laws.