@lordgrenville
Created November 2, 2017 11:59
[Kumail Nanjiani on Twitter](https://twitter.com/kumailn/status/925828976882282496) says of his visits to tech companies: "ZERO consideration seems to be given to the ethical implications of tech... Only 'Can we do this?' Never 'should we do this?'" It is hard to disagree with his assessment of the risks, but I disagree with his implied prescription - that tech companies ought to self-police, slowing their progress until ethicists and lawmakers can catch up.
The first reason that tech companies shouldn't self-police is that they are conflicted: they are foxes guarding the henhouse. Their ethical policies will naturally be diluted so as to have the least impact on their bottom line.
The second reason is that this isn't their area of expertise. Technologists want to develop technology, and set up companies to speed up that process. Their instinct is to keep pulling at the thread, not to stop. A system with an external ethical body setting guidelines, and a tech company trying to be maximally innovative within those guidelines, will be far more effective than one in which the technologists themselves are required to imagine the future risks of their own tech.
However, the broader question of ethical regulation is a complex one. Nanjiani says, "I don't mean weapons etc. I mean altering video, tech that violates privacy, stuff w obv ethical issues." (Let alone something like gene editing, public discussion of which has been conspicuous by its absence.) The ethical issues here are real, but drawing the lines is far from simple. For example, there seems to be broad consensus that people with noxious political opinions (white nationalists, Nazis) should be removed from Twitter, yet doing so would mark a substantial shift in our definition of civil liberties.
Another point is that this regulation would need to be global. If one country blocks a technology, researchers will simply move to whichever country - Estonia, Kazakhstan, Suriname - decides to become a pioneer in that field, or worse still, to the intelligence agencies of shady foreign governments. Such technology cannot be blocked without a level of global co-operation that currently seems unfeasible. We live in a globalised world, but we are as yet unable to provide global solutions. (Cf. Piketty's proposal of a global wealth tax to prevent tax avoidance by the rich - a great solution in principle, but impossible to implement.)