On Internet Culture and Governance Thereof

Picture this scenario: an online community you've grown to know and love over the years has suffered a terrible fate; it has become, horror of all horrors, popular. Slowly but surely, the feel of the site changes. The lovely content that was the bread and butter of yesteryear becomes frowned-on fringe material. The lingo and behavior of the users morph into an unrecognizable, alien dialect. Discussion quality goes out for smokes and never comes back. The name's the same, but everything else has changed.

Sound familiar?

It's a problem older than many of you reading this right now[1], and it shows no sign of going away soon. In this essay, I would like to put forth the idea that sites on the internet have largely functioned as direct democracies with open borders, and that a site's culture under such a system will only exist as long as the internet at large does not find it. I'll go through some common mitigation strategies meant to deal with the issue, and propose my own solution towards the end.

Let's nail down some terminology first, so we're all on the same page before some of you enthusiastically state your agreement, call for my head, rattle off snide remarks, and so on. By direct democracy, I'm referring not necessarily to how a site is moderated[2], but to how the unwritten rules concerning the content posted to the site are made and maintained. A moderator can determine what content is not allowed, but a site culture subject to direct democracy has its individual users decide what content is most popular/most liked, what sort of terms are used, and so on and so forth.

The real-world equivalent would be how pop culture changes and develops over time. The moderators — government and law enforcement — lay down rules concerning what is and is not allowed at a bare minimum, but individuals as a group have final say over trends and fashions, by choosing to participate in some and not others.[3]

There's nothing fundamentally wrong with the direct democracy approach to guiding site culture, though as I've briefly touched on before, the mob tends to use any means necessary to drown out thoughts it doesn't like, so any site utilizing direct democracy should do so in moderation. It's particularly harmful when combined with the open borders approach; together, the two form the de-facto standard utilized by most online communities.

By open borders, I refer to the ease with which new users can come onto the site and begin contributing and generating content. While there's a sliding scale of sorts, with totally anonymous imageboards/chans like 4chan/8kun on one side and invite-only communities such as lobste.rs and private torrent trackers on the other, the vast majority of the internet functions in an open borders fashion; nowadays, the only barriers to entry are often a single CAPTCHA and maybe a click on a link sent in an automated email. I can "walk in" to the site whenever I like, with no real check on what I bring to the table.

In my view, the open borders approach invites discussion quality issues from the beginning; convenient as it might be to be able to easily jump into a conversation, that same ease, in my experience, leads to a much worse signal-to-noise ratio. But that's just an anecdote — the real poison lies in its combination with a direct democracy approach to cultural rule.

When a site utilizes both direct democracy and open borders, it's always just one viral link, one mob away from being culturally wiped out and assimilated Borg-style into the growing internet snowball. A site with both a low barrier to registration and a culture governed by mob rule will find itself continually reshaped by whichever mob happens to have the most soapboxes.

Why is this a problem? It — the loss of culture, not the increased popularity — creates the helpless feeling that the first paragraph so (over)dramatically describes. A site's culture is what makes it worth visiting in the first place, and it attracts like-minded individuals who come to the site for the same reasons you do. If the site's ethos is lost, so too is your original reason for hanging around.

So what is currently being done about this? The main trend I've noticed in communities aware of the issues I describe above is that they essentially make no attempt at publicizing their existence — an open secret club, if you will.[4] If you have the link, you're free to join; the trick, of course, is becoming aware of the site's existence in the first place.

This approach has a couple of problems. Firstly, some advertising must be done at some point — you can't expect anyone to come to your site if nobody is aware it exists in the first place. Maybe it's as covert as dropping a link in an IRC channel or on a Discord server, but at some point, people not currently in your community will have to join it for it to grow. And outside of Fight Club-esque oaths and an ethos of secrecy, there's nothing preventing them from spilling the beans to others who may be better off not being part of the team.

Secondly — and I believe this to be a much bigger issue — this is merely a special case of security through obscurity, which has been decried in the technical world for decades. To have secrecy as a site's last and only line of cultural defense is to beg for a bad actor or an overzealous newbie to spread the word a bit too far and instantly ruin everything carefully built up over the years. The result is a painful, endless game of cat-and-mouse as the original founders try to pack up and move elsewhere before the viral spotlight is shone on them once more.

Another approach that I've seen adopted by some heavily experimental communities is to increase the technical barrier to entry; sometimes by doing away with HTTP altogether and using something like SSH or Gopher, so as to make the community inaccessible via your typical web browser. This is still security through obscurity; though only allowing access via anonymous SSH sessions will definitely deter many people, if the site proves popular enough, people will provide accessible workarounds.[5]

The other problem is that throwing up technical barriers to entry only works for communities where technical expertise can be expected. It might be fine for a programming community to have an SSH-only site, but it would make no sense for a bunch of artists to go through the hassle of figuring out how to install and use such software. How would you share high-resolution art over a terminal, anyway?[6]

You think your average Patreon-supported hotshot is gonna be fine with this quality?

The last common approach I've seen to tackling the problem is to have an invite system for user registration. Sometimes there's an application process by which new members can be added after filling out a form and waiting for approval; other times, new users must know existing users and ask them for an invite to get in. Both approaches have their merits, and I think this is the only in-use method for avoiding mob rule that doesn't fall prey to security through obscurity. I don't really have anything against the invite system outside of relatively minor nitpicks: the application process can be opaque and hard to define for many different types of communities, and the existing-user-invite system risks lending too much cultural influence to users who give out a lot of invites. Additionally, depending on how long the approval process takes, potential new users might lose interest.

However, at least at the time I write this, the invite system is quite unpopular. It could be a chicken-and-egg situation where no community sees another community use an invite system, so no new invite-only communities are made. It could also be that the hassle of writing an application or finding a user to get an invite from is too much effort, and people gravitate towards invite-less equivalents. Regardless of the reason, it's not a popular look right now.

Crying about the status quo is something anyone can do, so I'd like to at least offer my own personal take on a possible way forward. As you might be able to gather, I feel that an invite system provides most of the answer, but a little more polish could be added. I think one important addition would be a community-sourced application approval process, as opposed to using a handful of moderators. In this setup, community members review applications submitted by potential new users, either as a requirement of continued membership or as an opt-in duty. After a certain period of time passes, or a certain proportion of members give their approval, applications can be approved, and the new user is inducted into the community. If too much time has passed and not enough members have given approval, or if too many voice disapproval, the application is denied[7].
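The decision rules above can be sketched in a few lines of code. This is only an illustration of the shape of the logic; the window length and the approval/denial thresholds are made-up parameters that a real community would tune for itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical parameters -- each community would pick its own values.
REVIEW_WINDOW = timedelta(days=14)   # assumed deadline for a decision
APPROVE_RATIO = 0.10                 # assumed share of members who must approve
DENY_RATIO = 0.05                    # assumed share whose disapproval sinks it

@dataclass
class Application:
    applicant: str
    submitted_at: datetime
    approvals: set = field(default_factory=set)
    denials: set = field(default_factory=set)

def decide(app: Application, member_count: int, now: datetime) -> str:
    """Return 'approved', 'denied', or 'pending' per the rules sketched above."""
    if len(app.denials) >= DENY_RATIO * member_count:
        return "denied"                      # too many members object
    if len(app.approvals) >= APPROVE_RATIO * member_count:
        return "approved"                    # enough of the community vouches
    if now - app.submitted_at > REVIEW_WINDOW:
        return "denied"                      # window closed without enough support
    return "pending"
```

An application thus sits open until it crosses one of the thresholds or the review window lapses, at which point inaction defaults to denial.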

This approach to application approval has several advantages over the usual, moderator-centric method. Transparency is greatly increased; a community can opt to make applications public, so everyone can see what direction the userbase is headed in. Scalability is much easier to achieve compared to using a small group of approved moderators, as the entire community (or at least a good portion of it, in the opt-in route) bears the load of reviewing applications. Past approved and denied applications could also potentially be browsed, so prospective users could see what kind of behavior is expected of them; admittedly, this could be done with the moderator approach as well.

The other major change I would introduce is term-limited, community-sourced moderators. It's odd, in my view, that the internet equivalents of judge, jury, and executioner are usually both appointed by and answerable only to the site administrator(s) in communities that do not use a direct democracy approach to moderation[8]. By term-limited, I mean that each individual moderator only moderates for a relatively short period of time, say, a month or two. After that, they're placed in a cooldown period and cannot be chosen again for moderation until another period passes, maybe three months.

By community-sourced, I mean that moderators are picked "invisibly" from within the community. Members mark content that they believe best represents the community ethos, and once a member's content has gathered enough marks, that member is promoted to moderator status on the next cycle. I want to point out that marks should be entirely invisible; the only one who knows whether a post has been marked is the user who marked it in the first place. This is to prevent a Reddit-style bandwagoning effect on popular content, and to prevent users from gaming the system by observing which content gets more marks.
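A minimal sketch of this mark-and-promote cycle might look like the following. The class name, the mark threshold, and the cooldown length are all hypothetical; the point is that marks are stored privately and only the aggregate count drives promotion.

```python
from collections import defaultdict

# Hypothetical parameters, not a prescription.
MARK_THRESHOLD = 25      # marks a single post needs to qualify its author
COOLDOWN_CYCLES = 3      # cycles a retiring moderator must sit out

class MarkLedger:
    def __init__(self):
        # marks[post_id] = set of members who marked it; never shown publicly,
        # so no bandwagoning on visibly popular content is possible
        self._marks = defaultdict(set)
        self._author = {}        # post_id -> author
        self._cooldown = {}      # member -> cycles remaining before eligible again

    def mark(self, post_id, author, marker):
        """Record a private mark; only `marker` ever knows they placed it."""
        self._author[post_id] = author
        self._marks[post_id].add(marker)

    def next_moderators(self):
        """Pick next cycle's moderators, then start their cooldown periods."""
        chosen = set()
        for post_id, markers in self._marks.items():
            author = self._author[post_id]
            if len(markers) >= MARK_THRESHOLD and self._cooldown.get(author, 0) == 0:
                chosen.add(author)
        # advance existing cooldowns by one cycle
        for member in list(self._cooldown):
            self._cooldown[member] = max(0, self._cooldown[member] - 1)
        # newly chosen moderators enter cooldown for the following cycles
        for member in chosen:
            self._cooldown[member] = COOLDOWN_CYCLES
        return chosen
```

The cooldown bookkeeping is what enforces the term limit: a member promoted this cycle is automatically ineligible for the next few, no matter how well-marked their content is.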

The rationale behind such an approach to moderation largely stems from the disconnect between content creators and moderators in other communities; in my experience, appointed moderators often do little more than passively consume the site and perform moderation duties. This creates an us-vs-them adversarial rift[9] between the people who maintain the community and those who participate in it, which invites moderation decisions that may not reflect the culture of the userbase at large. By pulling the moderation team from members who have been selected by their peers as good cultural representatives, this pitfall is dodged, and term limits both alleviate potential missteps and give members who would rather focus on content creation a lighter load.

However, nothing's perfect, and that includes the system I've proposed. While it largely fixes the open borders problem, I don't believe it would hold up to coordinated subversion efforts; a large enough group of bad actors could conceivably gain entry and harm the site culture.[10] There are multiple checkpoints that make such a scenario less likely — they would all have to pass the application process and avoid running afoul of moderators — though as the hostile group gained more access, it could more easily approve more of its own number. I think this could be partially mitigated by making the number of new users allowed per month roughly inversely proportional to site membership (larger site = harder to flood with applicants), and by making either new users' approval votes count less or older users' approval votes count more. Users who typically post highly-marked content might also have more weight attached to their votes.
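To make those mitigations concrete, here is one possible shape for the monthly cap and the vote weighting. The specific curves and constants are arbitrary inventions of mine, chosen only to illustrate the direction each knob turns.

```python
import math

def monthly_new_user_cap(member_count: int, base: int = 50) -> int:
    """Admissions per month shrink as the site grows, so a flood of
    applicants becomes proportionally harder to pull off."""
    # hypothetical curve: full `base` for tiny sites, tapering off with size
    return max(1, int(base * 1000 / (member_count + 1000)))

def vote_weight(account_age_days: int, avg_marks_per_post: float = 0.0) -> float:
    """Older accounts and consistently well-marked posters count more.

    The seniority bonus caps at 2x after two years; the reputation bonus
    grows logarithmically so prolific posters can't dominate outright.
    """
    seniority = min(account_age_days / 365, 2.0)
    reputation = math.log1p(avg_marks_per_post)
    return 1.0 + seniority + reputation
```

A fresh account starts at weight 1.0, so a wave of brand-new accounts can out-vote established members only by sheer numbers, which the monthly cap is there to prevent.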

Additionally, the approval system itself doesn't do much other than function as a kind of border check. As soon as someone passes approval, the only thing keeping them in line is the moderators. I would argue that the application process would filter out most trolls who would do the site harm, as writing and getting an application approved would be more effort than their potential jollies would be worth. Still, outside of moderation there's nothing stopping a user from acting differently to how they described themselves in the application process, and that's the biggest weakness of the approach I describe, in my opinion.

One might argue that the system as I describe it doesn't do away with the direct democracy governance of cultural rule; if anything, it slightly increases it due to the moderation changes. In my eyes, this is a good thing — I've never argued against cultural direct democracy, only that it forms a bad combination with open borders in a community. If site registration is reasonably secured, then existing users should be free to culturally steer the site in the manner they see fit.[11]

Now, I'm not the first one to come up with all of this, and I certainly won't be the last.[12] There's nothing new under the sun, after all. However, I would like to do what I can to raise attention to the matter of site governance; the web is still young, but I think we could stand to make many improvements over the status quo.


[1] - I'm referring to Eternal September, which occurred when USENET users were overwhelmed by an influx of new AOL users who drowned out the existing norms and customs. I think it's telling that USENET's way of life was preserved by technical barriers, the same as many other sites mentioned here. And as soon as those barriers were removed, its users had no further recourse.

[2] - Some sites do have partial direct democracy as a form of moderation. Slashdot is the oldest example that comes to my mind, while Reddit is probably the most well-known contemporary example.

[3] - Things are a little more complicated than that, as I would argue that individuals have less agency over their choice than they would like to think; those pesky rascals in marketing would be out of a job if it weren't so. Similar groups (astroturfers) exist for online communities, but that's a topic for another time.

[4] - I intentionally don't cite examples of sites that use this tactic, as to link to them/mention them by name would violate their (relative) secrecy. If it worries you that I'm making stuff up in this section, just try to think of it as a hypothetical, winky face.

[5] - Browser-based USENET access is a good example of this.

[6] - There are a couple of methods, actually. SIXEL is one way, though you'd need a SIXEL-compatible terminal for that, and I don't believe any of the popular terminal emulators out there are. You could always just convert the pixels to block characters, though — which is what I did to generate the picture. This latter method limits you to 256 colors and low-triple-digit image dimensions at best, mind you.
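For the curious, the block-character trick boils down to mapping each pixel onto the 6x6x6 color cube of the 256-color ANSI palette and printing a colored full block. This is a generic sketch of the technique, not the exact code used for the picture above; it assumes the image has already been decoded into a grid of (r, g, b) tuples.

```python
def rgb_to_ansi256(r: int, g: int, b: int) -> int:
    """Map an RGB triple onto the 6x6x6 cube of the 256-color palette (16-231)."""
    to_cube = lambda v: round(v / 255 * 5)   # quantize each channel to 0..5
    return 16 + 36 * to_cube(r) + 6 * to_cube(g) + to_cube(b)

def render(pixels) -> str:
    """Render a row-major grid of (r, g, b) tuples as colored block characters."""
    lines = []
    for row in pixels:
        cells = [f"\x1b[38;5;{rgb_to_ansi256(*p)}m\u2588" for p in row]
        lines.append("".join(cells) + "\x1b[0m")   # reset color at end of each line
    return "\n".join(lines)
```

Since each pixel becomes a full character cell, a typical 80x24 terminal only fits a thumbnail-sized image — hence the low-triple-digit dimension ceiling.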

[7] - The exact specifics of the application should be left to the community to decide. Perhaps it could involve writing some code for a programming forum, or a short essay for a literature group. My personal opinion is that the application should involve knowledge specific to the domain of the community, if at all possible. For more general discussion boards, maybe a generic introduction of the applicant would be good.

[8] - I am talking specifically about moderation here, not general cultural rule.

[9] - A kind of class warfare, even?

[10] - I personally find this sort of thing highly unlikely to begin with; probably 99% of websites out there are too small to warrant such an attack. But, I still think it's something to keep in mind, especially as we (unfortunately) gravitate towards a consolidated internet with traffic focused in a very small handful of online places.

[11] - I want to stress here that this essay doesn't construe cultural change in a site as necessarily harmful, but rather, cultural change unavoidably induced by outside forces. Some change will naturally occur over the lifespan of a site, and the site's community may pick and choose aspects of other places that they think are worth including, but they should have the ability to make that choice in the first place, and not have it be forced upon them.

[12] - This post — and my thoughts on the matter in general — are heavily inspired by Clay Shirky's A Group Is Its Own Worst Enemy; if you found my post interesting, definitely go read his. It's very interesting that the problems he and I describe and try to tackle have been around for over 50 years.