The challenge binary spawned 4000 worker processes, each listening on a separate UDP port (hence the name) from localhost:6000 to localhost:9999. The parent process listened on localhost:5999. The processes used these ports to send direct messages to each other in a synchronous manner.
Six message types were used among the processes. Message types 3, 4, and 5 implemented some kind of algorithm that computed something very inefficiently. The goal of the challenge was to figure out what this algorithm was and compute its result. I named these messages QUERY, DONE and FAIL.
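The send-then-block pattern can be sketched in a few lines of Python. The one-byte wire format and the helper names here are my invention for illustration; only the synchronous request/reply shape matches the binary:

```python
import socket, threading

QUERY, DONE, FAIL = 3, 4, 5   # the three message-type values from the writeup

def worker_one(sock):
    """Mimic process[1]: answer any incoming QUERY with DONE."""
    msg, addr = sock.recvfrom(16)
    if msg[0] == QUERY:                # assumed 1-byte message format
        sock.sendto(bytes([DONE]), addr)

def send_query(sock, dst_port):
    """Synchronous messaging: send a QUERY, then block until a reply lands."""
    sock.sendto(bytes([QUERY]), ("127.0.0.1", dst_port))
    reply, _ = sock.recvfrom(16)
    return reply[0]

# Demo on two ephemeral ports (the real binary used fixed ports 5999-9999).
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); a.bind(("127.0.0.1", 0))
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM); b.bind(("127.0.0.1", 0))
t = threading.Thread(target=worker_one, args=(b,)); t.start()
assert send_query(a, b.getsockname()[1]) == DONE
t.join(); a.close(); b.close()
```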
There was a huge 4000x4000 matrix of uint64 numbers; row i of the matrix was only accessed by process[i], and its main diagonal was all zeros. This was clearly the weighted adjacency matrix of a graph whose nodes were the processes.
The parent -- after starting the workers -- only communicated with process[0]: it kept sending it QUERY messages in an infinite loop. If it received DONE, it incremented the flag variable; if it received FAIL, it printed the flag and exited.
When it got a QUERY from process[k], process[i] tried to forward it to another process[j] where graph[i][j] > 0, and entered state=5. While in that state, it responded with FAIL to all incoming requests. It was basically a greedy walk of the graph. If process[i] got a DONE response back, it decremented graph[i][j] and incremented graph[i][k]. Process[1] was different: it always responded with DONE.
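The per-process logic condenses to a short recursion. This is my reconstruction rather than the binary's code: the names are mine, and I'm simplifying state=5 to mean that a process keeps responding FAIL for the rest of the current walk once it has been queried:

```python
def walk(graph, i, k, visited):
    """process[i] handles a QUERY from process[k]; True means DONE."""
    if i == 1:                        # process[1] always responds DONE
        return True
    if i in visited:                  # state=5: respond FAIL
        return False
    visited.add(i)
    for j in range(len(graph)):       # greedily try every positive edge
        if graph[i][j] > 0 and walk(graph, j, i, visited):
            graph[i][j] -= 1
            if k is not None:         # the parent (port 5999) is not a node
                graph[i][k] += 1
            return True
    return False

def parent_loop(graph):
    """QUERY process[0] until the first FAIL; return the DONE count."""
    count = 0
    while walk(graph, 0, None, set()):
        count += 1
    return count
```

parent_loop returns the value the flag variable would reach before the parent prints it.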
I patched the binary to use fewer processes, set the graph weights to small numbers, and used strace/awk/dot to plot how messages propagated through the network before arriving back at the server. After a while it started to make sense: the network was computing its own maximum flow value, with the graph weights acting as edge capacities.
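The plotting step went roughly like this. The input format below is hypothetical -- the real input was whatever awk could scrape out of the strace log -- but the awk-to-dot shape is the same:

```shell
# Assuming one "src_port dst_port" pair per line (e.g. grepped out of
# the sendto() lines of an strace log), emit a graphviz digraph:
printf '6000 6002\n6002 6001\n' |
awk 'BEGIN { print "digraph msgs {" }
           { print "  p" $1 " -> p" $2 }
     END   { print "}" }'
```

Piping the output into dot -Tpng then renders the message graph.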
I extracted the graph from the binary and used the graph_tool Python package to calculate the flow value. That was the flag: flag{12491f295fb0}. (Note the hexadecimal notation; I wasted a lot of time trying to submit it in decimal.)
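I don't reproduce the extracted 4000x4000 matrix here, but the final computation is plain maximum flow from node 0 to node 1. For readers without graph_tool installed, a dependency-free Edmonds-Karp over a dense capacity matrix (a sketch of the same computation, not what I actually ran) looks like this:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp on a dense capacity matrix; cap is modified in place."""
    n, total = len(cap), 0
    while True:
        parent = [-1] * n             # BFS for a shortest augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:           # no augmenting path left: done
            return total
        bottleneck, v = float("inf"), t
        while v != s:                 # find the path's bottleneck capacity
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                 # update residual capacities
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        total += bottleneck
```

On the extracted matrix, max_flow(graph, 0, 1) would give the same value graph_tool reported.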