Data storage of Flags, Hides, Blocks #386
-
Regarding the implementation of 'Hide Account' (https://github.com/near/near-discovery/issues/383), I think it's worth designing the end state now so the data storage doesn't need to change later. It seems like eventually the UI will want 'hide and report' (flag), 'hide but don't report' (see less), and 'block'. Does block also need a report/don't-report option? We'll likely also allow the user to specify a reason at some point, which in terms of the data we store can be a label. Do we need more than one label per post/account, e.g. "this is both 'sexual content' and 'violent'"?
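For discussion, a minimal sketch of what a unified entry could look like; every field name here is an assumption, not a settled format:

```ts
// Hypothetical shape for a single moderation action stored per account.
// "kind" distinguishes the UI actions; "report" covers the report / don't-report
// split; "label" is the optional reason.
type ModerationEntry = {
  kind: "flag" | "hide" | "block"; // 'hide and report' => flag, 'see less' => hide
  path: string;                    // account id or "accountId/post/id" being acted on
  report?: boolean;                // block could carry its own report flag if needed
  label?: string;                  // e.g. "sexual-content" | "violent"
  comment?: string;                // free-text reason, if we allow one later
};
```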
-
Thanks for setting this up @gabehamilton! I have answered some of these questions here (https://pagodaplatform.atlassian.net/browse/ROAD-346) but I will tackle the rest in this thread.
A: I think you are right. Flag is somewhat in the same bucket as Hide and Block. Some historical context: social media platforms started off by giving users reporting options, and over time they realized users needed more granular actions such as Hide or Block that don't require the platform's intervention. In essence, hiding content is exactly the same as flagging it: in both cases the content is removed from the reporter's view but is still findable by that same reporter or by anyone else on the platform. There is of course a slight distinction with "Hide Account", since it essentially means hiding all content (posts/comments) related to that account. However, the account is not blocked, so it is still findable by the reporter (e.g. if they go to the reportee account's page they will be able to see everything).
A: Absolutely correct on the UI side. Block will not lead to a report, though. Block will simply prevent specific users from interacting with you or seeing your content, but it also means you won't be able to see theirs (e.g. if a reporter goes to the reportee account's page they will NOT be able to see anything).
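To make that symmetric rule concrete, a small sketch of a visibility check; the key format and function are purely illustrative:

```ts
// Hypothetical symmetric-visibility rule for blocks: a block in either
// direction means neither account can see the other's content.
function isVisible(viewer: string, author: string, blocks: Set<string>): boolean {
  return !blocks.has(`${viewer}->${author}`) && !blocks.has(`${author}->${viewer}`);
}

// Example: alice.near has blocked bob.near.
const blocks = new Set(["alice.near->bob.near"]);
isVisible("bob.near", "alice.near", blocks);   // false: bob can't see alice's content
isVisible("alice.near", "bob.near", blocks);   // false: alice can't see bob's either
```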
A: That is an excellent question, to be fair. Although it seems natural to go down that path, I am against allowing more than one label, simply because it leads to very unclear and complex data to read. TikTok had the exact same logic, letting users and moderators label content for multiple things, but it made it very difficult to understand what users were actually facing and led to very broad interpretations on the TikTok T&S side. My personal take is that we should require users to select one option only, and potentially let them add an additional comment if they feel the option is not enough to describe the issue. Let me know your thoughts on all of the above.
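As a sketch of the one-label-plus-comment idea (field names are assumptions, not a decided format):

```ts
// Exactly one label; the optional free-text comment carries any extra nuance.
const exampleReport = {
  kind: "flag" as const,
  path: "alice.near/post/main",            // hypothetical target of the report
  label: "violent",                        // a single label, chosen from a fixed list
  comment: "Also contains sexual content", // nuance a second label would have carried
};
```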
-
I realized that there are a couple more issues to sort out.
It's advantageous if the data stored for moderation decisions and for reporting/self-moderation has a similar structure. This will help with features like opting in to or following moderation panels, and composing moderation trees.
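One way to picture that alignment, purely as a sketch (type and function names are assumptions): the same entry shape is used for a user's own decisions and for any panels they follow, and a client merges whichever sources it has opted into.

```ts
// Hypothetical shared shape for both self-moderation and panel decisions.
type Decision = { kind: "flag" | "hide" | "block"; path: string; label?: string };

// Compose decisions from every source the user has opted into (their own
// entries plus any moderation panels they follow). Later sources win ties.
function composeDecisions(sources: Decision[][]): Map<string, Decision> {
  const merged = new Map<string, Decision>();
  for (const source of sources) {
    for (const decision of source) merged.set(decision.path, decision);
  }
  return merged;
}
```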
-
An additional feature we'll likely want in the future is expiration: "hide this user for 24 hours", "prevent this user from posting for 1 week". For this we need an expiration time on the stored entry.
In the self-moderation and compacted forms that can be a simple expiry timestamp on the entry; a sketch follows.
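A minimal sketch of how expiration could be carried, assuming an epoch-millisecond field (the name and units are assumptions):

```ts
// Hypothetical expiring entry: hide bob.near for 24 hours.
const DAY_MS = 24 * 60 * 60 * 1000;

const expiringHide = {
  kind: "hide" as const,
  path: "bob.near",
  expiresAt: Date.now() + DAY_MS, // epoch milliseconds; absent means "never expires"
};

// Readers simply drop expired entries when composing decisions.
const isActive = (entry: { expiresAt?: number }) =>
  entry.expiresAt === undefined || entry.expiresAt > Date.now();
```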
-
Latest full example of the data:
Index (Moderator decisions) format
User's self-moderation format and compacted version of the index format.
Additionally, user actions that create a report get an index entry, for compatibility with moderation systems that use Social.index.
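A hedged sketch of what such an index entry could look like, assuming the usual SocialDB convention of JSON-stringified { key, value } pairs under the index node (the "report" action name and the value fields are assumptions):

```ts
// Hypothetical: when a user files a report, also write a Social.index-style
// entry so indexer-based moderation tooling can discover it.
const reportData = {
  index: {
    report: JSON.stringify({
      key: { type: "social", path: "alice.near/post/main" }, // what is being reported
      value: { kind: "flag", label: "violent" },              // same shape as the self-moderation entry
    }),
  },
};

// In a widget this would be written with Social.set(reportData), so that
// Social.index("report", ...) queries can pick it up later.
```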