Language models are not computers. I'd say they're terrible at reasoning, because they can't even follow simple instructions correctly. And if a model can't follow a simple instruction like counting characters, why should we expect it to carry out its own logical reasoning?
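For reference, the task I gave it is trivial to check programmatically. This short Python script (my own verification, not part of the chat) counts the digits in the sequence: it contains 16 ones and 15 zeros, so "more ones" is the correct answer.

```python
# Count ones and zeros in the sequence from the chat below.
sequence = "1111111100000111111110000000000"

ones = sequence.count("1")
zeros = sequence.count("0")

print(f"ones: {ones}, zeros: {zeros}")  # ones: 16, zeros: 15
print("more ones" if ones > zeros else "more zeros")  # more ones
```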
Jelmer:
Are there more ones or more zeros in the following sequence: 1111111100000111111110000000000
ChatGPT:
To determine whether there are more ones or zeros in the sequence "1111111100000111111110000000000", we can simply count the number of ones and zeros in the sequence and compare them.
Counting the number of ones in the sequence, we get: