This is a rough draft and may be updated with more examples.
GitHub was kind enough to grant me swift access to the Copilot test phase despite me @'ing them several hundred times about ICE. I would like to examine it not in terms of productivity, but security. How risky is it to allow an AI to write some or all of your code?
Ultimately, a human being must take responsibility for every line of code that is committed. AI should not be used for "responsibility washing." However, Copilot is a tool, and workers need their tools to be reliable. A carpenter doesn't have to worry about her hammer suddenly turning evil and creating structural weaknesses in a building. Programmers should be able to use their tools with confidence, without worrying about proverbial foot-guns.
A follower of mine on Twitter joked that they can't wait to allow Copilot to write a function to validate JSON web tokens so they can commit the code without even looking. I prompted Copilot accordingly and the result was truly comedic:
function validateUserJWT(jwt: string): boolean {
    return true;
}
This is the worst possible implementation short of also deleting your hard drive, but it's so obviously, trivially wrong that no professional programmer could think otherwise. I am more interested in whether Copilot generates bad code that looks reasonable at first glance, something that might slip by a programmer in a hurry, or seem correct to a less experienced coder. (Spoiler: yes.)
I will be intentionally pushing Copilot to produce code that should really be done by hand, because users absolutely will do that in production.
Copilot is a generative model, meaning its purpose is to generate output that statistically resembles its input, the training data. (GitHub has not provided a complete list of training data, but it appears to be roughly everything publicly available on GitHub.) The goal is not to exactly reproduce the input, because you definitely do not need a machine learning system for that. This type of model has a parameter frequently called the "temperature," which is a sliding scale between conservatism (mimicking the most common inputs) and originality. The higher you go towards originality, the less structured the output becomes; it will gradually become gibberish. GitHub controls the temperature of Copilot, and they are presumably fine-tuning it based on the current trial phase.
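Copilot's internals are not public, so purely as an illustration of the concept: in a typical generative language model, temperature rescales the model's output scores before a token is sampled. A minimal sketch (the function and its inputs are my own, not anything from Copilot):

function sampleWithTemperature(scores, temperature) {
    // low temperature sharpens the distribution toward the likeliest token;
    // high temperature flattens it toward uniform, and eventually gibberish
    const weights = scores.map(s => Math.exp(s / temperature));
    const total = weights.reduce((a, b) => a + b, 0);
    let r = Math.random() * total;
    for (let i = 0; i < weights.length; i++) {
        r -= weights[i];
        if (r <= 0) return i;
    }
    return weights.length - 1;
}

GitHub tunes this knob on the server side; users only ever see its effects.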
Prompting a generative model the same way twice will often not get you the same output twice. It will run its statistical model anew each time. This has its advantages; if you don't like the model's first suggestion, you can just ask again. It also reduces the amount of identical code being generated amongst different users of Copilot. However, this is at odds with one of the most basic principles of reliability: determinism. It might complete your boilerplate exactly the way you want one day, and do it all wrong the next.
As an example, here are two different implementations of a moon phase calculator from the exact same input (a file containing only the comment, the keyword "function," and one space):
// returns the current phase of the moon
function getPhase() {
    var phase = Math.floor((new Date().getTime() - new Date().setHours(0,0,0,0)) / 86400000) % 28;
    if (phase == 0) {
        return "New Moon";
    } else if (phase == 1) {
        return "Waxing Crescent";
    } else if (phase == 2) {
        return "First Quarter";
    } else if (phase == 3) {
        return "Waxing Gibbous";
    } else if (phase == 4) {
        return "Full Moon";
    } else if (phase == 5) {
        return "Waning Gibbous";
    } else if (phase == 6) {
        return "Last Quarter";
    } else if (phase == 7) {
        return "Waning Crescent";
    }
}
// returns the current phase of the moon
function getPhase() {
    var phase = Math.floor((new Date().getTime() - new Date(new Date().getFullYear(), 0, 0).getTime()) / (1000 * 60 * 60 * 24)) + 1;
    if (phase > 29) {
        phase = 29;
    }
    return phase;
}
It's clear at a glance that they're doing roughly the same thing, but one returns a text label for one of eight multi-day phases, and the other seems to treat every day of the lunar month as a unique phase with no text label. Oh, and they're both wrong. Calculator A says that today (July 2, 2021) is a new moon; in fact, its phase expression only measures the time elapsed since midnight, so it returns "New Moon" every single day. Calculator B computes roughly the day of the year, an absurdly high phase number, before truncating it to 29. Human-written astronomy websites say that today is a waning crescent. Both look plausibly correct at a glance, and if you were comparing several different outputs, it would be very difficult to say which, if any, were correct.
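For comparison, a conventional hand-written approximation anchors on a known new moon and the mean 29.53-day synodic month. This sketch is mine, not Copilot's:

// approximates the moon's age in days since the last new moon, using the
// commonly cited new moon of 2000-01-06 18:14 UTC as an epoch
function getMoonAge(date) {
    const knownNewMoon = Date.UTC(2000, 0, 6, 18, 14);
    const synodicMonth = 29.530588853; // mean days between new moons
    const days = (date.getTime() - knownNewMoon) / 86400000;
    // wrap into [0, synodicMonth), also for dates before the epoch
    return ((days % synodicMonth) + synodicMonth) % synodicMonth;
}

A mean-month formula like this is only good to within a day or so, since real lunations vary in length, but for July 2, 2021 it gives an age of about 22 days, a waning moon near the last quarter. That is at least in the neighborhood of the waning crescent the astronomy sites report, unlike either of Copilot's attempts.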
I actually was able to get it to give me the exact same output multiple times for an Easter date calculator, and the calculator is also correct (at least for the random few years I checked). I suspect this means that Easter calculator implementations have much less deviation from one another than lunar calculators.
// calculates the Easter date for a given year
function Easter(y) {
    var a = y % 19;
    var b = Math.floor(y / 100);
    var c = y % 100;
    var d = Math.floor(b / 4);
    var e = b % 4;
    var f = Math.floor((b + 8) / 25);
    var g = Math.floor((b - f + 1) / 3);
    var h = (19 * a + b - d - g + 15) % 30;
    var i = Math.floor(c / 4);
    var k = c % 4;
    var l = (32 + 2 * e + 2 * i - h - k) % 7;
    var m = Math.floor((a + 11 * h + 22 * l) / 451);
    var n0 = (h + l + 7 * m + 114);
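    // note: the standard computus has "- 7 * m" here; this "+" is likely
    // the typo mentioned in the update below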
    var n = Math.floor(n0 / 31) - 1;
    var p = n0 % 31 + 1;
    return new Date(y, n, p);
}
The Easter calculator above is correct, but I only know that experimentally; it's utterly inscrutable. (Update: someone in the comments pointed out it has a typo affecting a tiny subset of years – a bug which resisted my testing!) Copilot can and sometimes does add its own comments, but it did not bother here. The variables also have totally useless names. I have no doubt that some of them are intermediate results with no clear name, but overall it could be much clearer. Sometimes going back and prompting with the beginning of a comment will get Copilot to try to explain. For example, prompting "// f is" in the middle of the function caused Copilot to claim "// f is the day of the week (0=Sunday)", but this does not seem correct, because Easter Sunday tends to fall on a Sunday. It also claimed "// Code from http://www.codeproject.com/Articles/1114/Easter-Calculator", which appears to have never been a real URL. The comments generated by Copilot are sometimes correct, but they are not reliable.
I tried a variety of time-related functions, and the Easter calculator was the only one that was correct. Copilot seems prone to confusing different types of date math; for example, it generated a "Gregorian to Julian" converter which was a random mishmash of day-of-week math. It is very difficult even for an experienced programmer to discern correct time conversion math from statistically similar nonsense.
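For what it's worth, correct conversions do exist in compact forms, and they look no more trustworthy to the naked eye. The function below is hand-written from the published Fliegel-Van Flandern (1968) algorithm, not Copilot output; it converts a Gregorian calendar date to a Julian day number:

function gregorianToJDN(y, m, d) {
    const div = (a, b) => Math.trunc(a / b); // C-style truncating division
    return div(1461 * (y + 4800 + div(m - 14, 12)), 4)
        + div(367 * (m - 2 - 12 * div(m - 14, 12)), 12)
        - div(3 * div(y + 4900 + div(m - 14, 12), 100), 4)
        + d - 32075;
}
// sanity check: gregorianToJDN(2000, 1, 1) === 2451545

If Copilot emitted a plausible mutation of this, nothing about its shape would give the bug away.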
Secret information, such as actual cryptographic keys, API keys, passwords, and the like, should never be published in a public code repository in the first place. GitHub actively scans for such keys and warns the repository holder if one is detected. I will extend them the benefit of the doubt that they excluded from the Copilot model anything detected by this scanner, though that is hard to verify.
This type of data is (hopefully) high entropy, so a model like Copilot is very unlikely to exactly memorize it from only one exposure. If you try to prompt it, Copilot will generally either give you an obvious placeholder like "1234" or a blotch of hexadecimal that looks random at first glance but largely alternates between 0-9 and A-F digits. (Do not use this to intentionally generate random numbers. They are grammatically structured, and Copilot may suggest the same number to someone else.) However, it is still possible to recover real keys from Copilot, especially if you open a pane with ten suggestions rather than just one. For example, it gave me the key 36f18357be4dbd77f050515c73fcf9f2, which appears on GitHub about 130 times due to being used in a homework assignment. Anything that appears on GitHub 100+ times is unlikely to be actually sensitive. The most realistic risk here is a naive programmer accepting an autocomplete for a cryptographic key which sets it to a random-looking but dangerously low-entropy value.
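If you actually need key material, get it from a CSPRNG rather than from autocomplete. In Node.js, for example:

const crypto = require('crypto');
// 16 bytes from the operating system's CSPRNG, hex-encoded; unlike a
// language model's output, nobody else will be suggested this value
const key = crypto.randomBytes(16).toString('hex');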
Prompting for passwords yields a variety of amusingly insecure specimens that are probably largely meant as placeholders in the training data. The crowd favorite was "mongoose." The fact that it generated curse words may be an issue for some users.
GitHub has been open about including any public code hosted on their website, regardless of license, in Copilot's model. They clearly are of the opinion that this counts as fair use and is not subject to licensing restrictions; whether or not this holds up in court... remains to be seen.
It is trivially verifiable that this includes GPL code, as Copilot can easily recite the GPL license text from memory. It's also easy to get it to write code that resembles specific GPL projects with unique naming conventions.
The upshot of this is that Copilot could be used for "license washing" by giving it prompts to regurgitate minor variations of code under undesirable licenses. This may or may not suddenly become a huge legal liability for anyone using Copilot.
A friend suggested prompting Copilot with "general purpose HTML parser with regex," which is a good example of something you should not do; Copilot actually declined to use regex and wrote a complete C function and a decent main() to drive it. I made only one change, which was to comment out the call to free(html), because free() was not defined through an include and was not necessary in any case.
#include <stdio.h>

// general purpose HTML parser with regex
void parse(char *html) {
    int i = 0;
    while (html[i] != '\0') {
        if (html[i] == '<') {
            // find the end of the tag
            int j = i + 1;
            while (html[j] != '>') {
                j++;
            }
            // find the end of the attribute
            int k = j + 1;
            while (html[k] != ' ' && html[k] != '\0') {
                k++;
            }
            // copy the tag and attribute to a new string
            char tag[j - i + 1];
            char attr[k - j + 1];
            for (int m = 0; m < j - i; m++) {
                tag[m] = html[i + m];
            }
            tag[j - i] = '\0';
            for (int m = 0; m < k - j; m++) {
                attr[m] = html[j + m];
            }
            attr[k - j] = '\0';
            printf("%s %s\n", tag, attr);
            // skip the end of the tag
            i = k;
        } else {
            i++;
        }
    }
}

int main(int argc, char *argv[]) {
    if (argc < 2) {
        printf("usage: %s <html file>\n", argv[0]);
        return 1;
    }
    FILE *fp = fopen(argv[1], "r");
    if (fp == NULL) {
        printf("error opening %s\n", argv[1]);
        return 1;
    }
    char *html = NULL;
    size_t len = 0;
    if (getline(&html, &len, fp) == -1) {
        printf("error reading %s\n", argv[1]);
        return 1;
    }
    parse(html);
    //free(html); // copilot inserted this but it's not declared and not necessary anyway
    return 0;
}
This is well-formed and even commented C that sure looks like it parses HTML, and the main function has some useful boilerplate around opening the file. However, the parsing is loaded with issues.
- First, and most importantly, if the parsed string contains no >, the parser will run off the end of the buffer and crash.
- The parser has no awareness of literal > vs quoted ">" and will take the first > it sees without considering its grammatical function.
- Its definition of an attribute is a non-whitespace sequence starting immediately after the closing bracket. For example, in <body>glory, glory is an attribute. Attributes belong inside their tag, and there can be more than one whitespace-delimited attribute.
- It has no ability to indicate any error conditions.
The good news is that there is a surprising amount of delicate pointer math that seems to work as... intended? It's kind of hard to say what's intended because this function does not quite do anything useful, though it's 80% of the way to something that could conceivably be considered a basic parser. The trivially triggered crash from running off the end of the buffer is, of course, a fatal security issue.
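For illustration, the crash could be avoided by making the tag scan bounds-aware. This helper is my own sketch, not Copilot output:

// returns the index of the next '>' at or after i, or -1 if the string
// ends first, instead of scanning past the end of the buffer
static int find_tag_end(const char *html, int i) {
    while (html[i] != '\0') {
        if (html[i] == '>') {
            return i;
        }
        i++;
    }
    return -1; // unterminated tag: the caller can now report an error
}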
For this next test, the first two lines are my prompt.
<?php
// connects to the database
include('../db_connect.php');
// grabs the variables from the URL
$id = $_GET['id'];
// makes the connection to the database
$mysqli = new mysqli($hostname, $username, $password, $database);
// checks for errors
if (mysqli_connect_errno())
{
    // we could not connect to the database, so output a message
    echo "Failed to connect to MySQL: " . mysqli_connect_error();
}
// executes the query
$query = "SELECT * FROM users WHERE id = '$id'";
$result = $mysqli->query($query);
...
This boilerplate has blundered right into the most classic security flaw of the early 2000s: a PHP script taking a raw GET variable and interpolating it into a string to be used as an SQL query, causing SQL injection. It's hard to blame PHP beginners for making this mistake, because the PHP documentation and ecosystem set them up to fail. Now PHP's notorious propensity for security issues is infecting even non-human life.
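For reference, the standard fix is a parameterized query, which keeps $id out of the SQL string entirely. A minimal sketch using mysqli prepared statements, reusing the variables above and omitting error handling:

// look up the user with a parameterized query; the driver treats $id
// strictly as data, never as SQL
$stmt = $mysqli->prepare("SELECT * FROM users WHERE id = ?");
$stmt->bind_param("i", $id); // "i" binds $id as an integer
$stmt->execute();
$result = $stmt->get_result();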
Furthermore, when prompted with shell_exec(), Copilot was happy to pass raw GET variables to the command line. Interestingly, when I added a function that was just a wrapper around htmlspecialchars(), which Copilot decided to name xss_clean(), it would sometimes remember to pass database results through this filter when rendering them. But only sometimes.
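Escaping at the point of output is the discipline a human has to enforce here, because Copilot will not apply it dependably. Something like the following, where $row stands in for a fetched database row:

// escape every value for an HTML context at render time, every time
echo htmlspecialchars($row['username'], ENT_QUOTES, 'UTF-8');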
I prompted Copilot for a basic listening socket. It helpfully wrote a large amount of boilerplate and it compiled effortlessly. However, the function that does the actual listening has a basic off-by-one buffer error.
// a function that opens a socket and accepts commands into a buffer
int accept_commands(int sockfd) {
    char buffer[1024];
    int n;
    while (1) {
        n = read(sockfd, buffer, sizeof(buffer));
        if (n < 0) {
            perror("read");
            exit(1);
        }
        if (n == 0) {
            printf("connection closed\n");
            exit(0);
        }
        buffer[n] = '\0';
        printf("%s\n", buffer);
    }
    return 0;
}
buffer[n] can point one past the end of the buffer if the read fills it, causing an out-of-bounds NUL write. This is a good example of the sorts of little bugs that grow like weeds in C; it may or may not be exploitable in practice. It would be very easy for a programmer using Copilot to accept this sort of code and not notice the off-by-one.
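The usual fix is to reserve one byte for the terminator. A sketch of a corrected read, with a hypothetical helper name of my own:

#include <unistd.h>

// reads one chunk and NUL-terminates it; passing buflen - 1 keeps
// buf[n] inside the buffer even when the read fills it
static ssize_t read_command(int sockfd, char *buf, size_t buflen) {
    ssize_t n = read(sockfd, buf, buflen - 1);
    if (n >= 0) {
        buf[n] = '\0';
    }
    return n;
}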
These three example pieces of flawed code did not require any cajoling; Copilot was happy to write them from straightforward requests for functional code. The inevitable conclusion is that Copilot can and will write security vulnerabilities on a regular basis, especially in memory-unsafe languages.
Copilot excels at producing the boilerplate that bogs down programmers trying to get to the good part, and it is highly accurate at guessing the correct constants, setup functions, and so on and so forth. However, relying on it for application logic can quickly go astray. This is partly because Copilot cannot always maintain sufficient context to write correct code across many lines, and partly because there is a lot of buggy code on GitHub. There does not appear to be any systematic separation of professionally produced code from beginners' homework assignments in the model. Neural network see, neural network do.
Treat any application logic produced by Copilot with healthy skepticism. As a code reviewer, I would want clear indications about which code is Copilot-generated. I do not expect this situation to ever be fully resolved; these issues are fundamental to how generative models work. There may be incremental improvements, but Copilot will always be capable of emitting flawed code as long as it's capable of emitting code.