fun countryFlag(code: String) = code
    .uppercase()
    .split("")
    .filter { it.isNotBlank() }
    .map { it.codePointAt(0) + 0x1F1A5 }
    .joinToString("") { String(Character.toChars(it)) }

println(countryFlag("in"))
// output 🇮🇳
Meaning, from which Unicode value to which Unicode value is it valid?
It is really complex to validate a country flag emoji.
The function generates code points between 0x1F1E6 and 0x1F1FF.
You can't directly verify the generated emoji, because valid flags don't follow a sequential order.
Out of the 676 possible pairs of regional indicator symbols (26 × 26), only 270 are considered valid Unicode region codes. These are a subset of the region sequences in the Common Locale Data Repository (CLDR).
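One practical way to approximate that subset without maintaining your own table is to check the input against the JVM's built-in ISO 3166 country list. This is a sketch, not from the original gist, and `Locale.getISOCountries()` approximates (but is not identical to) the CLDR region sequences:

```kotlin
import java.util.Locale

// Set of ISO 3166 alpha-2 codes known to the JVM (uppercase, e.g. "IN", "US").
val isoCountries: Set<String> = Locale.getISOCountries().toSet()

// Validate a 2-letter region code before generating its flag.
fun isValidRegionCode(code: String): Boolean =
    code.length == 2 && code.uppercase() in isoCountries

fun main() {
    println(isValidRegionCode("in")) // true
    println(isValidRegionCode("zz")) // false, "ZZ" is not an assigned country code
}
```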
Oh, you mean there are gaps. But at least the range of all the possible ones is what you said?
Nothing smaller than 0x1F1E6, nothing larger than 0x1F1FF?
Isn't there documentation about what's available, meaning which ranges?
Or is it better to leave it as it is, with just the values you've mentioned?
How did you reach the value of 0x1F1A5 in the calculation? On StackOverflow someone wrote - 0x41 + 0x1F1E6 instead. Those are probably the same values, but why is it done this way?
Oh you mean there are gaps.
Yes
How did you reach the value of 0x1F1A5 in the calculation?
For example:
Simple alphabet: A = 65 = 0x41
Regional indicator symbol: 🇦 = 127462 = 0x1F1E6
Difference: 🇦 - A = 127462 - 65 = 127397 = 0x1F1A5
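The arithmetic above can be checked directly; the helper name below is illustrative, not from the gist:

```kotlin
// The magic constant 0x1F1A5 is just the distance from 'A'
// to the regional indicator symbol 🇦 (U+1F1E6).
fun regionalIndicatorOffset(): Int {
    val a = 'A'.code        // 65 = 0x41
    val regionalA = 0x1F1E6 // 🇦 = 127462
    return regionalA - a    // 127397 = 0x1F1A5
}

fun main() {
    println(regionalIndicatorOffset())                      // 127397
    println(Integer.toHexString(regionalIndicatorOffset())) // 1f1a5
}
```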
This is how 2 character country code generated for flag emoji.
Imagine there are only two countries. IN and US. Rest of countries are yet to be allocated.
How would you build logic to validate that? It can be done if someone maintain table for that.
It will require us to build a library for that instead of using simple function to generate flag.
Oh, so you reset it to 0, making it the beginning of the range.
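That equivalence is easy to verify mechanically. A small sketch (function names are illustrative) showing that the two formulas agree for every letter: subtracting 0x41 "resets" A to 0, and adding 0x1F1E6 re-bases it at the regional indicator block.

```kotlin
// Formula from the gist: add the precomputed difference.
fun viaOffset(c: Char): Int = c.code + 0x1F1A5

// Formula from StackOverflow: reset to 0, then re-base.
fun viaReset(c: Char): Int = c.code - 0x41 + 0x1F1E6

fun main() {
    for (c in 'A'..'Z') check(viaOffset(c) == viaReset(c))
    println("equivalent for all A..Z")
}
```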
Using joinToString means it's multiple Unicode characters. According to the standard, it's supposed to always be 2, no?
If so, this shows why the other implementation looks like this instead:
val firstLetter: Int = Character.codePointAt(countryCodeUpperCase, 0) + 0x1F1A5
val secondLetter: Int = Character.codePointAt(countryCodeUpperCase, 1) + 0x1F1A5
return String(Character.toChars(firstLetter)) + String(Character.toChars(secondLetter))
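Wrapped into a self-contained function, the two-code-point variant quoted above would look like this (the name `countryFlag2` is illustrative, not from the original gist):

```kotlin
// Builds a flag emoji from a 2-letter country code by shifting
// each letter's code point into the regional indicator block.
fun countryFlag2(code: String): String {
    val upper = code.uppercase()
    val firstLetter = Character.codePointAt(upper, 0) + 0x1F1A5
    val secondLetter = Character.codePointAt(upper, 1) + 0x1F1A5
    return String(Character.toChars(firstLetter)) + String(Character.toChars(secondLetter))
}

fun main() = println(countryFlag2("us")) // 🇺🇸
```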
According to the standard, it's supposed to always be 2, no?
Right
If so, this shows why the other implementation looks like this instead:
I wanted to do it the Kotlin way 😅
I see. Thanks.
So if I want to check that the values are in range, each of the 2 characters should be between 0x1F1E6 and 0x1F1FF, right?
That's for the output.
The input should just be between A and Z (from AA to ZZ), right?
each of the 2 characters should be between 0x1F1E6 and 0x1F1FF, right?
Correct
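Those two range checks could be sketched like this (function names are illustrative; note that the range check alone does not catch the gaps, i.e. syntactically valid but unassigned pairs):

```kotlin
// Input check: exactly two letters in A..Z (case-insensitive).
fun isValidInput(code: String): Boolean =
    code.length == 2 && code.uppercase().all { it in 'A'..'Z' }

// Output check: exactly two code points, each a regional
// indicator symbol in U+1F1E6..U+1F1FF.
fun isValidFlagOutput(flag: String): Boolean {
    val codePoints = flag.codePoints().toArray()
    return codePoints.size == 2 && codePoints.all { it in 0x1F1E6..0x1F1FF }
}

fun main() {
    println(isValidInput("in"))       // true
    println(isValidFlagOutput("🇮🇳")) // true
    println(isValidFlagOutput("AB"))  // false, plain letters are out of range
}
```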
OK, thanks.
This seems to work nicely, but can you please add a condition related to the range of valid values for country flags? Meaning, from which Unicode value to which Unicode value is it valid?