@andjc
Last active October 20, 2024 13:25
Grapheme tokenisation in Python

When working with tokenisation and break iterators, it is sometimes necessary to work at the character, syllable, line, or sentence level. Character-level tokenisation is an interesting case. By character, I mean a user-perceived unit of text, which the Unicode Standard refers to as a grapheme. The usual way I see developers handle character-level tokenisation of English is via a list comprehension or by typecasting a string to a list:

>>> t1 = "transformation"
>>> [char for char in t1]
['t', 'r', 'a', 'n', 's', 'f', 'o', 'r', 'm', 'a', 't', 'i', 'o', 'n']

This will give you discrete characters or codepoints. But for many other languages, codepoints and user-perceived characters do not line up. Let's take a Dinka string as an example:

>>> t2 = "dɛ̈tëicëkäŋ akɔ̈ɔ̈n"
>>> [char for char in t2]
['d', 'ɛ', '̈', 't', 'ë', 'i', 'c', 'ë', 'k', 'ä', 'ŋ', ' ', 'a', 'k', 'ɔ', '̈', 'ɔ', '̈', 'n']

There is a mixture of precomposed and decomposed character sequences.

>>> import unicodedata as ud
>>> ud.is_normalized('NFC', t2)
True

The text is fully precomposed, using Unicode Normalization Form C, but many user-perceived characters do not exist as single codepoints. How do we work with ä versus ɛ̈? We can use Python packages such as grapheme and PyICU, or we can get the same results from a simple regular expression assertion.
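We can see the difference by inspecting the codepoint names: ä has a precomposed form, while Unicode defines no precomposed open e with diaeresis, so even NFC leaves ɛ̈ as two codepoints:

>>> [ud.name(cp) for cp in "ä"]
['LATIN SMALL LETTER A WITH DIAERESIS']
>>> [ud.name(cp) for cp in "ɛ̈"]
['LATIN SMALL LETTER OPEN E', 'COMBINING DIAERESIS']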

Unicode documentation (UAX #29, Unicode Text Segmentation) refers to legacy grapheme clusters, extended grapheme clusters, and tailored grapheme clusters. Generally, when graphemes are referred to, extended grapheme clusters are meant.
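To illustrate the difference (a quick sketch using the regex module introduced below): U+093F DEVANAGARI VOWEL SIGN I is a spacing mark, and only the extended rules keep spacing marks attached to their base, so ह + ि is one extended grapheme cluster but two legacy grapheme clusters:

>>> import regex
>>> regex.findall(r'\X', '\u0939\u093F')  # \X matches an extended grapheme cluster
['हि']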

Regex

I will use the regex module rather than re, since regex has more comprehensive Unicode support.

import regex as re
re.findall(r'\X',t2)
# ['d', 'ɛ̈', 't', 'ë', 'i', 'c', 'ë', 'k', 'ä', 'ŋ', ' ', 'a', 'k', 'ɔ̈', 'ɔ̈', 'n']

As can be seen, each grapheme is treated as a single unit: rather than being split into two codepoints, ɛ̈ comes back whole.

For the Dinka string, grapheme segmentation will be consistent across implementations. If we look at a Devanagari example, differences can be observed between implementations:

t3 = "हिन्दी"
re.findall(r'\X',t3)
# ['हि', 'न्', 'दी']

This generates three grapheme clusters for the string.
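The boundaries are easier to see if we list the codepoint names; note the virama between न and द, since its treatment is what will differ between the implementations below:

import unicodedata as ud
[ud.name(cp) for cp in t3]
# ['DEVANAGARI LETTER HA', 'DEVANAGARI VOWEL SIGN I', 'DEVANAGARI LETTER NA',
#  'DEVANAGARI SIGN VIRAMA', 'DEVANAGARI LETTER DA', 'DEVANAGARI VOWEL SIGN II']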

Grapheme

Alternatively, we could use the grapheme package, which provides a number of functions for manipulating strings using graphemes, rather than individual codepoints, as the basic unit; standard Python string operations work codepoint by codepoint.

N.B. The grapheme package hasn't been updated since 2020, so it is best avoided if you require a recent version of Unicode.

import grapheme
grapheme.UNICODE_VERSION
# '13.0.0'
list(grapheme.graphemes(t2))
# ['d', 'ɛ̈', 't', 'ë', 'i', 'c', 'ë', 'k', 'ä', 'ŋ', ' ', 'a', 'k', 'ɔ̈', 'ɔ̈', 'n']

Likewise, the grapheme package gives us three grapheme clusters:

list(grapheme.graphemes(t3))
# ['हि', 'न्', 'दी']

PyICU

Alternatively, we could use PyICU, with icu4c, for grapheme tokenisation.

Using PyICU is more complex than using regex or grapheme: many of the functions available in icu4c are low-level, and it is necessary to develop your own wrapper around them. The break iterator returns a set of breakpoints in the string, and it is necessary to iterate through them. The PyICU cheat sheet has a useful function, iterate_breaks(), for iterating through each breakpoint in the string.

We then need to create a break iterator instance and pass the string and the instance to iterate_breaks(). Here I will use the root locale for the break iterator.

import icu

def iterate_breaks(text, break_iterator):
    # Yield the substring between each pair of successive boundaries
    break_iterator.setText(text)
    lastpos = 0
    while True:
        next_boundary = break_iterator.nextBoundary()
        if next_boundary == -1:
            return
        yield text[lastpos:next_boundary]
        lastpos = next_boundary

bi = icu.BreakIterator.createCharacterInstance(icu.Locale.getRoot())
list(iterate_breaks(t2, bi))
# ['d', 'ɛ̈', 't', 'ë', 'i', 'c', 'ë', 'k', 'ä', 'ŋ', ' ', 'a', 'k', 'ɔ̈', 'ɔ̈', 'n']

Alternatively, here is the code we use in our internal projects:

import icu

def generate_tokens(text, brkiter=None):
    # Default to a word break iterator for the root locale
    if not brkiter:
        brkiter = icu.BreakIterator.createWordInstance(icu.Locale.getRoot())
    brkiter.setText(text)
    i = brkiter.first()
    # Iterating over the break iterator yields successive boundary offsets
    for j in brkiter:
        yield text[i:j]
        i = j

def get_generated_tokens(text, bi=None):
    return [*generate_tokens(text, brkiter=bi)]

char_bi = icu.BreakIterator.createCharacterInstance(icu.Locale('und'))
get_generated_tokens(t2, char_bi)
# ['d', 'ɛ̈', 't', 'ë', 'i', 'c', 'ë', 'k', 'ä', 'ŋ', ' ', 'a', 'k', 'ɔ̈', 'ɔ̈', 'n']

If we look at the Devanagari string:

get_generated_tokens(t3, char_bi)
# ['हि', 'न्दी']

The icu4c break iterator generates two grapheme clusters. The key difference between the icu4c implementation and the grapheme and regex packages is that PyICU treats U+094D DEVANAGARI SIGN VIRAMA ( ् ) as extending the grapheme cluster, while for the other two packages U+094D does not extend the grapheme cluster boundary. The icu4c behaviour matches user expectations.

pyuegc

The next approach is pyuegc.

from pyuegc import EGC, UCD_VERSION
UCD_VERSION
# '15.1.0'

The latest pyuegc package uses the current Unicode version.

EGC(t2)
# ['d', 'ɛ̈', 't', 'ë', 'i', 'c', 'ë', 'k', 'ä', 'ŋ', ' ', 'a', 'k', 'ɔ̈', 'ɔ̈', 'n']

EGC(t3)
# ['हि', 'न्दी']

pyuegc splits the string into two grapheme clusters, giving the same result as PyICU.

ugrapheme

The final solution is ugrapheme. Unlike the other solutions, ugrapheme creates a class instance that provides a range of methods for working with and manipulating strings at the grapheme level rather than the character or codepoint level. To mimic the other solutions, you can cast the object to a list:

from ugrapheme import graphemes
grs = graphemes(t3)
[*grs]
# ['हि', 'न्दी']

or, alternatively, use the grapheme_split() function:

from ugrapheme import grapheme_split
grapheme_split(t3)
# ['हि', 'न्दी']

ugrapheme splits the string into two grapheme clusters, giving the same result as PyICU.
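To recap, here is a side-by-side run on the Devanagari string, collecting the outputs shown above (this reuses the get_generated_tokens() helper defined in the PyICU section):

import regex
import grapheme
import icu
from pyuegc import EGC
from ugrapheme import grapheme_split

t3 = "हिन्दी"
char_bi = icu.BreakIterator.createCharacterInstance(icu.Locale.getRoot())

regex.findall(r'\X', t3)           # ['हि', 'न्', 'दी']
list(grapheme.graphemes(t3))       # ['हि', 'न्', 'दी'] (Unicode 13.0 data)
get_generated_tokens(t3, char_bi)  # ['हि', 'न्दी']
EGC(t3)                            # ['हि', 'न्दी']
grapheme_split(t3)                 # ['हि', 'न्दी']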

@Z4JC commented Oct 12, 2024

@ramSeraph: Great feedback.
ugrapheme, according to my analysis, is the only one that parses 'పరీక్ిలకు' according to the UAX #29 spec.

Specifically, in Table 1c. Regex Definitions, we have:

conjunctCluster := \p{InCB=Consonant} ([\p{InCB=Extend} \p{InCB=Linker}]* \p{InCB=Linker} [\p{InCB=Extend} \p{InCB=Linker}]* \p{InCB=Consonant})+

For ease of reading, instead of writing \p{InCB=Consonant} I will write C; instead of \p{InCB=Linker} I will write L; and instead of \p{InCB=Extend} I will write E.

The expression then becomes:

conjunctCluster := C ([E L]* L [E L]* C)+

In the word 'పరీక్ిలకు' we encounter the following Unicode codepoints:
'క': {InCB=Consonant}, or shorthand C
'్': {InCB=Linker}, or shorthand L
'ి': {InCB=Extend}, or shorthand E
'ల': {InCB=Consonant}, or shorthand C
The above four codepoints thus correspond to CLEC.

The sequence CLEC is a match for the conjunctCluster production: the leading C matches the initial \p{InCB=Consonant}, and LEC matches one iteration of the group, with L as the required linker, E consumed by the following [E L]*, and the final C closing the group:

conjunctCluster := C ([E L]* L [E L]* C)+

The spec has 'క్ిల' in a conjunct cluster, although I understand it is not a naturally occurring Telugu conjunct cluster and probably looks wrong. The spec does not distinguish between, for example, Devanagari and Telugu in terms of conjunct clusters, and I am not familiar enough with the scripts to be able to tell whether it should. There has to be a reason for the conjunctCluster production in the spec to be so complicated, so I am thinking that if pyicu and pyuegc broke this one differently, they might actually make a breaking mistake on some valid conjunct clusters.
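One way to sanity-check the match (a minimal sketch, hard-coding the InCB classes listed above rather than deriving them from the UCD):

import re

# Shorthand InCB class for each codepoint in 'క్ిల', as listed above
incb = {
    '\u0C15': 'C',  # క TELUGU LETTER KA, InCB=Consonant
    '\u0C4D': 'L',  # ్ TELUGU SIGN VIRAMA, InCB=Linker
    '\u0C3F': 'E',  # ి TELUGU VOWEL SIGN I, InCB=Extend
    '\u0C32': 'C',  # ల TELUGU LETTER LA, InCB=Consonant
}
s = ''.join(incb[cp] for cp in 'క్ిల')
print(s)  # CLEC
# The whole sequence matches the conjunctCluster production in shorthand form
print(bool(re.fullmatch(r'C([EL]*L[EL]*C)+', s)))  # True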

@ramSeraph

I think the conjunct clusters are similar in Telugu and Hindi. But there are quite a few scripts in the Brahmic script family, maybe some of them do things differently.

If you are curious as to why 'క్ిల' is not a valid sequence, you can check out this FSA for well-formedness that the nisaba folks wrote. I am surprised this hasn't been codified in the Unicode standard.

@ramSeraph

@Z4JC As you are probably already aware, there was prior discussion of including Unicode segmentation in Python here. I hope your work makes it into the core some day. It is a very nice piece of engineering. You clearly blew all the other Python implementations away in performance. Maybe you can benchmark it against the ICU Rust implementation here.

@andjc (Author) commented Oct 12, 2024

@Z4JC Will look at it; have been busy with Ethiopic data.
