I've compared the methods above, plus a version that uses a pre-compiled regular expression.
Python 3.7.4
I've used a book from Project Gutenberg as the test text.
from urllib.request import urlopen
# Download the book Alice’s Adventures in Wonderland, by Lewis Carroll
text = urlopen('https://www.gutenberg.org/files/11/11-0.txt').read().decode('utf-8')
# Split it into the separate chapters and remove table of contents, etc
sep = 'CHAPTER'
chaps = [sep + ch for ch in text.split(sep) if len(ch) > 1000]
len(chaps)
I defined each approach as a function so it can be applied to every chapter in a loop, keeping the code succinct.
import re
import string
def py_isupper(text):
    return sum(1 for c in text if c.isupper())

def py_str_uppercase(text):
    return sum(1 for c in text if c in string.ascii_uppercase)

def py_filter_lambda(text):
    return len(list(filter(lambda x: x in string.ascii_uppercase, text)))

def regex(text):
    return len(re.findall(r'[A-Z]', text))

# compile the pattern once, outside the timed loop
REGEX = re.compile(r'[A-Z]')

def regex_compiled(text):
    return len(REGEX.findall(text))
The results are below.
%%timeit
cnt = [py_isupper(ch) for ch in chaps]
7.84 ms ± 69.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%%timeit
cnt = [py_str_uppercase(ch) for ch in chaps]
11.9 ms ± 94.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%%timeit
cnt = [py_filter_lambda(ch) for ch in chaps]
19.1 ms ± 499 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%%timeit
cnt = [regex(ch) for ch in chaps]
1.49 ms ± 13 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
cnt = [regex_compiled(ch) for ch in chaps]
1.45 ms ± 8.69 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
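So the regex approaches are roughly 5x faster than str.isupper() and more than 10x faster than the filter/lambda version, while pre-compiling the pattern only shaves off a little extra (re caches compiled patterns internally anyway).

As a quick sanity check (a minimal sketch reusing the functions and chaps defined above), you can compare the counts each approach returns per chapter. Note that str.isupper() also matches non-ASCII uppercase letters, so it can disagree with the [A-Z]-only versions if any appear in the text:

funcs = [py_isupper, py_str_uppercase, py_filter_lambda, regex, regex_compiled]

for i, ch in enumerate(chaps):
    counts = {f.__name__: f(ch) for f in funcs}
    if len(set(counts.values())) != 1:
        # str.isupper() counts any Unicode uppercase letter,
        # so it may differ from the [A-Z]-only approaches here
        print(i, counts)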