Some initial comments on severity of penalties, and possible idea for automatically handling some problems.... On 2020/12/12 09:30, Per Jessen wrote:
If deemed appropriate, someone might have been banned for a while, or even for life.
Even in the case of murder, parole is possible depending on circumstances. Are people committing offenses on this list such that lifetime banishment should even be considered? The idea that someone should be banned for life for anything consisting solely of a list post seems entirely excessive. Add them to a spam list that expires in a year at most, but be reasonable. If they violate the rules again, add them again, but realize that a lifetime ban really encourages them to get one or more alternate email accounts from which they can easily snipe with impunity.

Even a year is a long time in the internet community -- and remember, once you've enacted such a penalty, you are done. There's not much else you can do to influence their behavior; a "lifetime" ban essentially says there is no way "someone like them" can ever change, grow, or correct their behavior.

Also, I can't say this is true for everyone, but from personal experience, I've been subject to maybe 3-4 bans, none of which was for valid cause when the original postings were examined. In at least 1-2 of those situations, the organization (or person) involved was later made redundant, dissolved, shut down, or defunded -- most often due to similarly ill-handled situations.

Possible way to handle this: personally, I'd try to develop an automated approach -- literally a Bayesian filter that could stop a posting before it is distributed and send it back to the user with comments like: "This posting looks like it might contain: offensive language / offensive style / minority marginalization / XXYZ. If you are sure you wish to send this, please add a line with the text 'Confirm post' on a line by itself, within 5 lines of the end (the line will be removed before the message is passed on to the list)."
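As a rough illustration of the gate described above, here is a minimal sketch in Python. The category keywords and scoring are placeholders standing in for a real trained Bayesian classifier, and all names (`gate_post`, `CONFIRM_MARKER`, etc.) are hypothetical; only the "Confirm post within 5 lines of the end, then strip it" mechanics comes straight from the proposal.

```python
# Sketch of the proposed pre-distribution filter.  The classifier below
# is a toy keyword match standing in for a real Bayesian step.

CONFIRM_MARKER = "Confirm post"

def check_confirm_override(body: str):
    """Look for the 'Confirm post' marker on a line by itself within the
    last 5 lines.  Returns (confirmed, body_with_marker_removed)."""
    lines = body.splitlines()
    tail_start = max(0, len(lines) - 5)
    for i in range(len(lines) - 1, tail_start - 1, -1):
        if lines[i].strip() == CONFIRM_MARKER:
            del lines[i]  # remove marker before passing on to the list
            return True, "\n".join(lines)
    return False, body

def classify(body: str):
    """Placeholder for the Bayesian step: return the banned categories
    the post appears to match.  A real implementation would use
    per-category token probabilities trained on past moderated posts."""
    categories = {                      # hypothetical category list
        "offensive language": {"idiot", "stupid"},
        "offensive style": {"SHOUTING"},
    }
    tokens = set(body.split())
    return [cat for cat, words in categories.items() if tokens & words]

def gate_post(body: str):
    """Return (deliver, text): either (True, cleaned body) or
    (False, bounce message sent back to the poster)."""
    confirmed, cleaned = check_confirm_override(body)
    flagged = classify(cleaned)
    if flagged and not confirmed:
        return False, ("This posting looks like it might contain: "
                       + ", ".join(flagged)
                       + ". If you are sure you wish to send this, add a"
                       " line 'Confirm post' within 5 lines of the end.")
    return True, cleaned
```

A flagged post bounces back with the category list; resending it with the marker line delivers it with the marker stripped, while also leaving a record that a warning was overridden (useful for the cooldown logic later on).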
==== As part of the list footer, give a non-clickable URL (one that must be cut and pasted into a browser) containing a unique message ID. It leads to a page where list members can vote on which of the banned topics the post best matches (it may match more than one, but voters indicate only the single topic that best fits). Votes would only be allowed from list members, verified either by email-based login to the site, or by sending an email from a subscribed address containing the ID of the email being evaluated plus its category. Note: categories could cover positive areas as well, for building up kudos or bonuses on a given topic. ====

Ratings for email types would be built up over time from the number of readers (out of total readers) marking a post as 'matching'. Evaluations would be limited to, at most, 'N' votes (1 or 2?) per day per poster. Blah blah blah....further design TBD as needed....

This way people can, AT FIRST, get preliminary feedback that what they are posting may be considered offensive. Depending on the strength of the matched scores and the number of warning overrides used, someone could be given an automatic cooldown period of 'N' days, weeks, or months -- which could ALSO be seen as a temporary posting ban for that period.

The automation of this process would hopefully reduce or eliminate reactionary, vigilante, and/or emotional actions. Preferably, the software would be open-sourced, both to let other lists use it and to make the "bot" actions mostly transparent. The identities of who voted which way should be protected by hashing: combine the evaluated email's ID with the evaluator's identity so that each (email, user) pair yields a unique hash. That is, it could be verified that a user had voted on a given email, but not how they voted, and only by someone who already knows the user's subscription address (there is no way to recover plaintext email addresses from a hash).
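The hashing scheme in the last paragraph could be sketched as follows. This is one possible reading, not a specification: I've assumed a keyed HMAC rather than a bare hash (so addresses can't be brute-forced without the server's key), and every name here (`vote_token`, `record_vote`, `SERVER_KEY`) is hypothetical.

```python
import hashlib
import hmac

# Hypothetical per-list secret; with a bare hash instead of an HMAC,
# subscriber addresses could be brute-forced from the stored tokens.
SERVER_KEY = b"list-secret-key"

def vote_token(message_id: str, subscriber_email: str) -> str:
    """Unique hash per (evaluated email, evaluating user) pair.
    The plaintext address is never stored alongside the vote."""
    data = f"{message_id}|{subscriber_email}".encode()
    return hmac.new(SERVER_KEY, data, hashlib.sha256).hexdigest()

def record_vote(store: dict, message_id: str, email: str,
                category: str) -> bool:
    """Count at most one vote per user per message.  The category tally
    is kept separately from the voter set, so the tally never links a
    category back to a specific voter."""
    token = vote_token(message_id, email)
    voters = store.setdefault("voters", set())
    if token in voters:
        return False  # this user already voted on this message
    voters.add(token)
    tally = store.setdefault("tally", {})
    tally[category] = tally.get(category, 0) + 1
    return True

def has_voted(store: dict, message_id: str, email: str) -> bool:
    """Verify that a user voted on this message -- possible only if you
    already know their subscription address, and it reveals nothing
    about which category they chose."""
    return vote_token(message_id, email) in store.get("voters", set())
```

Because the voter set holds only tokens and the tally holds only counts, even someone with full database access can confirm "this subscriber voted on message X" (given the address and the key) but cannot learn how they voted, matching the transparency-with-anonymity goal above.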