A Python counter for quantifying the number of possible variations of a Tracery grammar
import json
import re

# Find the symbols (e.g. #animal#) referenced in a given rule, with
# common modifiers (.a, .s, .capitalize) stripped off.
def get_symbols(phrase):
    symbols = []
    for match in re.finditer(r"#.*?#", phrase, re.MULTILINE):
        symbols.append(re.sub(r"\.a|\.s|\.capitalize|#", "", match.group()))
    return symbols

# Quantify the flattenings of a particular symbol: for each of its rules,
# multiply the counts of the symbols that rule references, then sum over
# the rules. A name with no rules in the grammar counts as 1.
def quant(s):
    size = 0
    if s in data:
        children = data[s]
        for item in children:
            addSize = 1
            for su in get_symbols(item):
                addSize *= quant(su)
            size += addSize
    else:
        size = 1
    return size

with open('grammar.json') as json_data:
    data = json.load(json_data)
    print(quant('origin'))
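
For example, with a small made-up grammar.json like the one below, the script prints 18: the single origin rule references #animal#, #color#, and #animal# again, so its count is 3 × 2 × 3.

{
  "origin": ["The #animal# sees a #color# #animal#."],
  "animal": ["cat", "dog", "owl"],
  "color": ["red", "blue"]
}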
This is a first try at counting the variations a Tracery grammar can generate. I didn't account for remembered choices or for the various modifiers. Just trying to see if this actually calculates what I think it does. Suggestions are welcome.
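
One way to sanity-check the count on a small grammar is to brute-force every flattening and compare the total against quant(). The sketch below is not part of the gist; it likewise ignores modifiers and actions, assumes data and quant() are loaded as in the script above, and, like quant(), will recurse forever on a grammar whose symbols reference themselves.

from itertools import product
import re

def expand(phrase, rules):
    # Every #symbol# occurrence in the phrase, in order of appearance.
    symbols = re.findall(r"#(.*?)#", phrase)
    if not symbols:
        return [phrase]
    options = []
    for raw in symbols:
        name = re.sub(r"\.a|\.s|\.capitalize", "", raw)
        if name in rules:
            # All flattenings of every rule for this symbol.
            expansions = [e for rule in rules[name] for e in expand(rule, rules)]
        else:
            # Undefined symbols are left in place, mirroring quant()'s base case of 1.
            expansions = ["#" + raw + "#"]
        options.append(expansions)
    results = []
    for combo in product(*options):
        out = phrase
        for raw, replacement in zip(symbols, combo):
            out = out.replace("#" + raw + "#", replacement, 1)
        results.append(out)
    return results

# The two numbers should agree: flattenings enumerated vs. flattenings counted.
print(len(expand("#origin#", data)), quant("origin"))

Note that both numbers count flattenings, not necessarily distinct output strings, since two different rules can produce the same text.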