Legal
How Mirantes Moderates Content and Maintains a Safe Environment
When content violates our policies, we may restrict its visibility or remove it completely. Our moderation approach follows these protection phases:
Automatic and Proactive Prevention
Before content is published, our automated systems analyze it in real time to detect potential violations. Using artificial intelligence and machine learning, we can quickly identify keywords, images, or patterns that may breach our guidelines.
If content is identified as harmful, it will not be displayed.
Automatic Detection and Human Review
Some posts show signs of a possible violation, but not with enough certainty to justify immediate removal. In these cases, our system flags the content for more detailed human review.
Our review team evaluates the content against Mirantes' guidelines and the context in which it was posted. This process also helps improve our AI models, making automated filtering more accurate.
Community-Based Moderation
Users play an essential role in maintaining a safe environment. If they come across inappropriate content, they can report it through the reporting tool, accessible via the three-dot menu in the top right corner of posts, messages, and profiles.
When content is reported, it is forwarded to our moderation team, which assesses the situation and takes appropriate action to ensure our guidelines are upheld.