There's nothing in the Code of Conduct about spreading disinformation that has the potential to cause physical harm or loss of life. Should there be? As an open community, the Clojurians Slack is vulnerable to bad actors posing as "alternative viewpoints" or other innocuous-sounding parties, and using the medium to misinform and otherwise manipulate other participants contrary to their best interests.
I would prefer the Clojurians Slack community to be only about Clojure and related software-development topics. Talking about other things makes the more valuable information disappear from Slack's history much faster. I have left all channels that go off topic and have no intention of joining them again. I would encourage closing channels that are not about Clojure (no pun intended), as they do not benefit the community.
Actually that sounds like a reasonable position IMO -- I'd want to keep #off-topic because it can be used to discuss interesting Clojure-related stuff that isn't strictly on-topic for Clojure (e.g. maybe some Scala dev is having an interesting discussion about typed-vs-dynamic that someone might like to reference), but limiting/eliminating contentious topics there too, on a case-by-case basis.
I'd prefer to allow more "hallway track" channels in addition to more on-topic channels, as I do think of this as a space for the Clojure community, not just for the Clojure language (if that changes, then some place else will pick up that demand). I think non-Clojure topics in other channels are a good thing as long as the CoC is followed. I do wonder if all CoCs need an anti-disinformation section. I could certainly ask some experts from the journalism world what that might look like (the people who ran MisinfoCon).
> I'd prefer to allow more "hallway track" channels in addition to more on-topic channels, as I do think of this as a space for the Clojure community, not just for the Clojure language

Seconded. I'm less concerned about stuff falling off history since it's archived elsewhere (although that's certainly the strongest reason to move from Slack to another platform IMO).
The coronavirus is an unusually potent instance of a situation where disinformation can have significant, if not disastrous, consequences.
I agree with the sentiment but how do you enforce it?
I was just gonna say, you really don't want to leave it in my hands to decide what constitutes a legitimate alternate viewpoint and what doesn't. We worked together directly, @manutter51, so I'm sure you agree 🙂 We're fortunate enough to have some great admins, but I think the same principle applies 🤷
We're all grown-ups; I guess we have to decide what's true and what isn't ourselves
It's too much work for the moderators to authenticate information
I'm not talking about policing legitimate alternative viewpoints, I'm talking specifically about spreading disinformation that could result in serious physical harm or loss of life.
What's your rubric for distinguishing the two, though? And is it one that the community would reach approximate consensus on? I think things get very, very subjective once you dive into those waters.
(I totally get your point, by the way, not trying to be pedantic here. I think it would be great if we could do that in a clear & objective way. I just don't know a way to do that, especially with limited moderation resources)
It's a real problem, I agree. The problem with coming up with rules for stuff like this is that as soon as you codify anything, the bad actors will figure out a way to subvert it and use it to promote more bad actions.
It's kind of a trolley problem: if you throw the switch to the left, you preserve free speech at the cost of lost lives, and if you throw it the other way, you face accusations of bias and censorship, but more people live. But compromising free speech has other consequences too, so it's not as easy an answer as my simplistic phrasing would suggest.
Plus it's not always clear what's true and what isn't, especially in a situation like the current one where experts disagree (e.g. all the conflicting messaging around mask-wearing). I'd be pretty reluctant to block anything unless it was unambiguously clear that it was in bad faith. (Caveat: I haven't been following #covid-19 much, so there may be such things there, dunno)
This is all about one person in that channel, right?
That's what got me thinking. I've looked at his sources and they seem to be somewhere on the spectrum between biased and malicious (IMO)
He will no longer participate in #covid-19
But also just in general, considering how disinformation has been weaponized in social media over the past several years.
I share your concern, but I also agree with others in this thread: how do you enforce it?
Yeah, that's the question, isn't it?
I wish I knew a good answer.
My gears started spinning on this. Not sure if it would work, or what the implications would be, but here goes: invite roughly three or five field experts into the community (an odd number, so there's always a majority). They wouldn't have authority, but they could make recommendations in an admin channel to separate valid counterarguments from harmful misinformation. The intention here would be to separate verification from administration without taking up too much time or effort from the mods.