- still has sort of a weird English bias
- disambiguation sucks
- completely divorced from search
- syntax is arcane and awful
- no neat shortcuts for anything but the five or six primary loci
```python
import mock
import pytest


@pytest.fixture
def mp(request):
    coll = PatchCollector()
    request.addfinalizer(coll.revert)
    return coll


class PatchCollector(object):
```
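The class body is cut off above. For completeness, here's a minimal sketch of what a `PatchCollector` along those lines might look like — the method bodies and the `patch` helper are my assumptions, not the original implementation, and it uses `unittest.mock`, where the standalone `mock` package now lives:

```python
from unittest import mock  # stand-in for the standalone `mock` package


class PatchCollector(object):
    """Hypothetical sketch: collect mock.patch patchers so revert() undoes them all."""

    def __init__(self):
        self._patchers = []

    def patch(self, *args, **kwargs):
        # Assumed helper: start a patch and remember it for later cleanup.
        patcher = mock.patch(*args, **kwargs)
        self._patchers.append(patcher)
        return patcher.start()

    def revert(self):
        # Undo every patch, most recent first.
        for patcher in reversed(self._patchers):
            patcher.stop()


coll = PatchCollector()
coll.patch('os.getcwd', return_value='/nowhere')
import os
print(os.getcwd())  # the patched value: /nowhere
coll.revert()
```

The point of funneling everything through one collector is that a single finalizer (`coll.revert`) can clean up every patch a test made, no matter how many.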
```rust
struct Parent {
    data: uint,
}

struct Child<'self> {
    parent: &'self Parent,
}

impl<'self> Parent {
    fn new() -> Parent {
```
```text
terminal-example.rs:35:17: 35:21 error: borrowed value does not live long enough
terminal-example.rs:35     let canvas = term.create_canvas();
                                        ^~~~
terminal-example.rs:33:39: 40:1 note: borrowed pointer must be valid for the lifetime &'r as defined on the block at 33:39...
terminal-example.rs:33 fn get_app_state<'r>() -> AppState<'r> {
terminal-example.rs:34     let term = Terminal::new();
terminal-example.rs:35     let canvas = term.create_canvas();
terminal-example.rs:36     return AppState{
terminal-example.rs:37         term: term,
terminal-example.rs:38         canvas: canvas,
```
Regular expressions are great. Every language supports them; everyone knows (generally) how to use them. They can solve a lot of ad-hoc parsing problems, and pretend to solve many others if you don't look too hard.
But they have their downsides. They're hilariously compact, even with `/x`, making them very hard to read and maintain once they grow beyond the trivial. They have unintuitive backtracking behavior, leading to surprising matches and performance gotchas in most implementations. They don't handle context-sensitive grammars very well. And very few regex engines make it easy to debug a regex that doesn't match where you think it should.
And yet most programmers will still turn to a regex long before a parser generator, because parser generators tend to be entire new large systems with multiple moving parts and even more obtuse restrictions.
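As a concrete illustration, Python's `re.VERBOSE` is that `/x` flag: whitespace and comments inside the pattern are ignored, which helps readability but fixes none of the deeper problems. (The phone-number pattern here is just a made-up example.)

```python
import re

# A /x-style (verbose) regex: free spacing plus inline comments.
US_PHONE = re.compile(r"""
    \( (\d{3}) \)    # area code in parentheses
    \s*
    (\d{3})          # exchange
    -
    (\d{4})          # line number
""", re.VERBOSE)

print(US_PHONE.match("(555) 867-5309").groups())  # → ('555', '867', '5309')
```

Even spread out like this, the pattern still backtracks, still fails silently when it doesn't match, and still can't tell you *why* it didn't match.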
```yaml
# A file containing descriptions of types' immunities to things that are not other types.
# XXX: it does occur to me that this could be stored programmatically, so perhaps this is not a great example
# note that this also raises the question (hopefully to be answered RSN) of just where the index of "mechanic" items lives and how it works
fire:
    en: Cannot be [burned]{mechanic:burn}.
    ja: [燃やせられない]{mechanic:burn}。
ghost:
    en: Immune to [trapping]{mechanic:trap}.
    ja: [引っ掛けられない]{mechanic:trap}。
```
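A quick sketch of how the inline `[label]{mechanic:name}` spans above might be pulled out — the regex and function here are hypothetical, not part of any existing tooling:

```python
import re

# Matches [label]{mechanic:name}, capturing the label and the mechanic identifier.
MECHANIC_LINK = re.compile(r'\[([^\]]+)\]\{mechanic:([^}]+)\}')

def find_mechanics(text):
    """Return (label, mechanic) pairs found in a flavor-text string."""
    return MECHANIC_LINK.findall(text)

print(find_mechanics('Cannot be [burned]{mechanic:burn}.'))  # → [('burned', 'burn')]
```

The same markup works regardless of language, which is the appeal: the `ja:` strings carry the identical `{mechanic:...}` references as the `en:` ones.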
HEY: I've turned this into a blog post, which goes into a little more depth.
🚨 https://eev.ee/blog/2016/06/04/converting-a-git-repo-from-tabs-to-spaces/ 🚨
```python
class DamageClass(Enum):
    __tablename__ = 'move_damage_classes'

    identifier = Identifier()
    name = LocalFanText()
    description = LocalFanText()


class Generation(Enum):
    __tablename__ = 'generations'
```
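These read like declarative, SQLAlchemy-flavored table definitions, where `Identifier()` and `LocalFanText()` act as column-like descriptors. A minimal hypothetical sketch of how class-level descriptors like that can work in plain Python — none of these names or behaviors come from the original code:

```python
class Field:
    """Hypothetical descriptor standing in for Identifier / LocalFanText."""

    def __set_name__(self, owner, name):
        # Remember which attribute this descriptor was assigned to.
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self  # accessed on the class itself
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        obj.__dict__[self.name] = value


class DamageClass:
    __tablename__ = 'move_damage_classes'
    identifier = Field()
    name = Field()


dc = DamageClass()
dc.identifier = 'physical'
print(dc.identifier)  # → physical
```

A real ORM's descriptors do far more (type coercion, SQL generation, localization lookups), but the declaration style is the same: plain class attributes that the metaclass or base class can introspect.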
I took all the commented-out unmapped glyphs from the list above and pored over them. A couple were overlooked:
```perl
'f111' => '25cf',   # ● BLACK CIRCLE (icon-circle)
'f10c' => '25cb',   # ○ WHITE CIRCLE (icon-circle-blank)
# NOTE: REQUIRES CHANGING icon-refresh TO 🗘 U+1F5D8 CLOCKWISE RIGHT AND LEFT SEMICIRCLE ARROWS
'f079' => '1f501',  # 🔁 CLOCKWISE RIGHTWARDS AND LEFTWARDS OPEN CIRCLE ARROWS (icon-retweet)
```
Many of them can be encoded, with a caveat: they aren't actually in a release of Unicode yet. But they're in the pipeline as imports from Wingdings, approved, and pending publication. There's a list here, and the current Symbola encodes all of them (or at least the ones I use below).
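To actually apply a mapping like that, note both sides are hex codepoint strings, so each needs a `chr(int(..., 16))`. A small hypothetical Python version (the function name is mine):

```python
# Hypothetical Python rendition of the 'fXXX' => 'XXXX' pairs above.
RAW_MAP = {
    'f111': '25cf',   # ● BLACK CIRCLE (icon-circle)
    'f10c': '25cb',   # ○ WHITE CIRCLE (icon-circle-blank)
    'f079': '1f501',  # 🔁 (icon-retweet)
}
# Convert hex codepoint strings into an actual character-to-character table.
GLYPH_MAP = {chr(int(src, 16)): chr(int(dst, 16)) for src, dst in RAW_MAP.items()}

def remap(text):
    """Replace private-use Font Awesome glyphs with their Unicode equivalents."""
    return ''.join(GLYPH_MAP.get(ch, ch) for ch in text)

print(remap('\uf111 \uf10c'))  # → '● ○'
```

Anything not in the table passes through untouched, so running this over ordinary text is harmless.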
- how does this format interact with side games?
- where do names and flavor text go: inline in this file, or in separate files? should there be a separate file per language?
- once and for all, WHAT ABOUT FORMS?