I think a lot about building social experiences. Building social and community sites was hugely formative in my life, and in the time in which I grew up. I also believe that social and community experiences are a critical force in making the world better, and as a technologist I think a lot about the ways technology can play a positive role in their formation. But there is a problem that haunts me, so I'm writing this out as a way to start addressing it.
It used to be relatively cheap and easy to build a social network. This was great because there are lots of social experiments that can be a positive force in the world, but don't achieve the kind of scale that makes them commercially viable in our modern advertising ecosystem. The internet (including the pre-web internet) flourished with dedicated small community discussion sites, and the early-2000s web saw a similar flourishing of purpose-built social experiments around interests (think last.fm, or allconsuming, or mirror project, or dozens and dozens of others). While many things are much cheaper and easier than they were then (particularly server resources), the most dramatic shift has been the increase in bad actors.
In the year 2021 it would be unforgivably naive to start a service that allowed people to connect and publish without accounting for:
- people will come onto the site and find ways to spew hateful speech via whatever mechanism you create for sharing.
- people will come onto the site and find ways to harass other people on the site via whatever mechanism you create for connecting.
- people and robots will come onto the site and engage in spam and scams.
- people will come onto the site and connect with others for the purposes of exchanging illegal or disgusting content.
- any service that stores sensitive information will be expensive and difficult to secure.
- (I'm sure I've missed a few, this is just off the top of my head)
Hate speech, harassment, spam, scams, illegal content, and hacking are all problems that are addressable, but they are expensive to address, requiring well-trained humans and sophisticated technology. As such, addressing them tends to be within reach only of large, centrally scaled, and commercially viable projects. It's hard to overstate how much these sorts of attacks have ramped up in intensity over the last 15-20 years, with the intensity continuing to accelerate, and there is no such thing as being too small to bother with.
Some approaches we've seen to make social spaces viable in the face of these attacks are:
- closed communities - far and away the simplest approach is to keep your community small and reliant on pre-existing social ties. While still subject to abuse, the abuse tends to be of a scale that human moderators can address.
- authenticated communities with centralized identity providers as gatekeepers - when you allow login with Apple, Google, or Discord, outsourcing responsibility for identity can increase the effectiveness of banning someone from your community by raising the cost of getting a replacement identity. That said, your interests will never be fully aligned with those of these large central players, who are interested in growth and therefore have incentives to keep getting new identities tractable.
- charging - if you charge someone to participate, bad behavior can come with real costs. The downside is that many people can't or won't pay, and additionally any sort of paid service invites a new class of scams and attacks around stolen payment methods, account takeover, etc. Some of this can be outsourced these days to payment providers like Stripe, but it's definitely an additional risk you've opened yourself up to.
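One concrete way the closed-community pattern often gets implemented is an invite tree: every account is anchored to the member who invited them, so a ban can raise the cost of re-entry by cascading to the accounts a bad actor brought in. A minimal sketch, assuming a hypothetical `InviteTree` class (not any real library):

```python
class InviteTree:
    """Hypothetical sketch of invite-gated membership with cascading bans."""

    def __init__(self, founder):
        # Maps each member to the member who invited them;
        # the founder has no inviter.
        self.invited_by = {founder: None}
        self.banned = set()

    def invite(self, inviter, newcomer):
        if inviter not in self.invited_by or inviter in self.banned:
            raise ValueError("inviter is not a member in good standing")
        self.invited_by[newcomer] = inviter

    def ban(self, member, cascade=False):
        # Cascading a ban to everyone the member invited raises the
        # cost of farming invitations for throwaway accounts.
        self.banned.add(member)
        if cascade:
            for m, inviter in list(self.invited_by.items()):
                if inviter == member:
                    self.ban(m, cascade=True)
```

This is a toy, not a moderation system, but it illustrates why pre-existing social ties help: abuse arrives attached to an accountable chain of invitations rather than anonymously.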
The question I'm thinking about is what other compromises and patterns exist for building social software that avoids being an enabler of the sorts of attacks we cataloged above.
(Work in progress)