Basic functionality used in the fastai library
source
> ifnone (a, b)

`b` if `a` is None else `a`

Since `b if a is None else a` is such a common pattern, we wrap it in a function. However, be careful, because Python will evaluate both `a` and `b` when calling `ifnone` (which it doesn't do when using the `if` version directly).

test_eq(ifnone(None,1), 1)
test_eq(ifnone(2,1), 2)
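To see the difference, here is a small illustrative sketch (the `loud` helper below is made up for demonstration and is not part of fastcore):

def loud(x):
    print('evaluated')
    return x

ifnone(1, loud(2))           # prints 'evaluated' even though 1 is returned
loud(2) if 1 is None else 1  # the inline form never evaluates loud(2) here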
source
> maybe_attr (o, attr)

`getattr(o,attr,o)`

Return the attribute `attr` for object `o`. If the attribute doesn't exist, then return the object `o` instead.

class myobj: myattr='foo'

test_eq(maybe_attr(myobj, 'myattr'), 'foo')
test_eq(maybe_attr(myobj, 'another_attr'), myobj)
source
> basic_repr (flds=None)

Minimal `__repr__`

In types which provide rich display functionality in Jupyter, their `__repr__` is also called in order to provide a fallback text representation. Unfortunately, this includes a memory address which changes on every invocation, making it non-deterministic. This causes diffs to get messy and creates conflicts in git. To fix this, put `__repr__=basic_repr()` inside your class.

class SomeClass: __repr__=basic_repr()
repr(SomeClass())
'<__main__.SomeClass>'
If you pass a list of attributes (`flds`) of an object, then this will generate a string with the name of each attribute and its corresponding value. The format of this string is `key=value`, where `key` is the name of the attribute and `value` is the value of the attribute. For each value, it attempts to use the `__name__` attribute, otherwise falling back to the value's `__repr__` when constructing the string.

class SomeClass:
    a=1
    b='foo'
    __repr__=basic_repr('a,b')
    __name__='some-class'

repr(SomeClass())
"__main__.SomeClass(a=1, b='foo')"
class AnotherClass:
c=SomeClass()
d='bar'
__repr__=basic_repr(['c', 'd'])
repr(AnotherClass())

"__main__.AnotherClass(c=__main__.SomeClass(a=1, b='foo'), d='bar')"

source

> is_array (x)

`True` if `x` supports `__array__` or `iloc`
is_array(np.array(1)),is_array([1])

(True, False)

source

> listify (o=None, *rest, use_list=False, match=None)

Convert `o` to a `list`

Conversion is designed to "do what you mean", e.g.:
test_eq(listify('hi'), ['hi'])
test_eq(listify(b'hi'), [b'hi'])
test_eq(listify(array(1)), [array(1)])
test_eq(listify(1), [1])
test_eq(listify([1,2]), [1,2])
test_eq(listify(range(3)), [0,1,2])
test_eq(listify(None), [])
test_eq(listify(1,2), [1,2])

arr = np.arange(9).reshape(3,3)
listify(arr)
[array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])]
listify(array([1,2]))
[array([1, 2])]
Generators are turned into lists too:
gen = (o for o in range(3))
test_eq(listify(gen), [0,1,2])

Use `match` to provide a length to match:

test_eq(listify(1,match=3), [1,1,1])
If `match` is a sequence, its length is used:

test_eq(listify(1,match=range(3)), [1,1,1])
If the listified item is not of length 1, it must be the same length as `match`:

test_eq(listify([1,1,1],match=3), [1,1,1])
test_fail(lambda: listify([1,1],match=3))
source
> tuplify (o, use_list=False, match=None)

Make `o` a tuple

test_eq(tuplify(None),())
test_eq(tuplify([1,2,3]),(1,2,3))
test_eq(tuplify(1,match=[1,2,3]),(1,1,1))
source
> true (x)

Test whether `x` is truthy; collections with >0 elements are considered `True`

[(o,true(o)) for o in
 (array(0),array(1),array([0]),array([0,1]),1,0,'',None)]
[(array(0), False),
(array(1), True),
(array([0]), True),
(array([0, 1]), True),
(1, True),
(0, False),
('', False),
(None, False)]
source
> NullType ()

An object that is `False` and can be called, chained, and indexed

bool(null.hi().there[3])
False
source
> tonull (x)

Convert `None` to `null`

bool(tonull(None).hi().there[3])
False
source
> get_class (nm, *fld_names, sup=None, doc=None, funcs=None, anno=None, **flds)

Dynamically create a class, optionally inheriting from `sup`, containing `fld_names`
_t = get_class('_t', 'a', b=2, anno={'b':int})
t = _t()
test_eq(t.a, None)
test_eq(t.b, 2)
t = _t(1, b=3)
test_eq(t.a, 1)
test_eq(t.b, 3)
t = _t(1, 3)
test_eq(t.a, 1)
test_eq(t.b, 3)
test_eq(t, pickle.loads(pickle.dumps(t)))
test_eq(_t.__annotations__, {'b':int, 'a':typing.Any})
repr(t)

'__main__._t(a=1, b=3)'

Most often you'll want to call `mk_class`, since it adds the class to your module. See `mk_class` for more details and examples of use (which also apply to `get_class`).
source
> mk_class (nm, *fld_names, sup=None, doc=None, funcs=None, mod=None, anno=None, **flds)

Create a class using `get_class` and add to the caller's module

Any `kwargs` will be added as class attributes, and `sup` is an optional (tuple of) base classes.
mk_class('_t', a=1, sup=dict)
t = _t()
test_eq(t.a, 1)
assert(isinstance(t,dict))

An `__init__` is provided that sets attrs for any `kwargs`, and for any `args` (matching by position to fields), along with a `__repr__` which prints all attrs. The docstring is set to `doc`. You can pass `funcs` which will be added as attrs with the function names.
def foo(self): return 1
mk_class('_t', 'a', sup=dict, doc='test doc', funcs=foo)
t = _t(3, b=2)
test_eq(t.a, 3)
test_eq(t.b, 2)
test_eq(t.foo(), 1)
test_eq(t.__doc__, 'test doc')
t
{}
source
> wrap_class (nm, *fld_names, sup=None, doc=None, funcs=None, **flds)

Decorator: makes function a method of a new class `nm`, passing parameters to `mk_class`
@wrap_class('_t', a=2)
def bar(self,x): return x+1
t = _t()
test_eq(t.a, 2)
test_eq(t.bar(3), 4)
source
> ignore_exceptions ()
Context manager to ignore exceptions
with ignore_exceptions():
    # Exception will be ignored
    raise Exception

source

> exec_local (code, var_name)

Call `exec` on `code` and return the var `var_name`

test_eq(exec_local("a=1", "a"), 1)
source
> risinstance (types, obj=None)

Curried `isinstance` but with args reversed
assert risinstance(int, 1)
assert not risinstance(str, 0)
assert risinstance(int)(1)
assert not risinstance(int)(None)

`types` can also be strings:

assert risinstance(('str','int'), 'a')
assert risinstance('str', 'a')
assert not risinstance('int', 'a')
source
> ver2tuple (v:str)
test_eq(ver2tuple('3.8.1'), (3,8,1))
test_eq(ver2tuple('3.1'), (3,1,0))
test_eq(ver2tuple('3.'), (3,0,0))
test_eq(ver2tuple('3'), (3,0,0))
These are used when you need a pass-through function.
> noop (x=None, *args, **kwargs)
Do nothing
noop()
test_eq(noop(1),1)
> noops (x=None, *args, **kwargs)
Do nothing (method)
class _t: foo=noops
test_eq(_t().foo(1),1)

These lists are useful for things like padding an array or adding index column(s) to arrays.

`Inf` defines the following properties:

- `count`: `itertools.count()`
- `zeros`: `itertools.cycle([0])`
- `ones`: `itertools.cycle([1])`
- `nones`: `itertools.cycle([None])`
test_eq([o for i,o in zip(range(5), Inf.count)],
        [0, 1, 2, 3, 4])
test_eq([o for i,o in zip(range(5), Inf.zeros)],
        [0]*5)
test_eq([o for i,o in zip(range(5), Inf.ones)],
        [1]*5)
test_eq([o for i,o in zip(range(5), Inf.nones)],
        [None]*5)

source

> in_ (x, a)

`True` if `x in a`
# test if element is in another
assert in_('c', ('b', 'c', 'a'))
assert in_(4, [2,3,4,5])
assert in_('t', 'fastai')
test_fail(in_('h', 'fastai'))
# use in_ as a partial
assert in_('fastai')('t')
assert in_([2,3,4,5])(4)
test_fail(in_('fastai')('h'))

In addition to `in_`, the following functions are provided matching the behavior of the equivalent versions in `operator`: `lt`, `gt`, `le`, `ge`, `eq`, `ne`, `add`, `sub`, `mul`, `truediv`, `is_`, `is_not`, `mod`.

lt(3,5),gt(3,5),is_(None,None),in_(0,[1,2]),mod(3,2)

(True, False, True, False, 1)

Similarly to `in_`, they also have additional functionality: if you only pass one param, they return a partial function that passes that param as the second positional parameter.

lt(5)(3),gt(5)(3),is_(None)(None),in_([1,2])(0),mod(2)(3)

(True, False, True, False, 1)
source
> ret_true (*args, **kwargs)

Predicate: always `True`

assert ret_true(1,2,3)
assert ret_true(False)

source

> ret_false (*args, **kwargs)

Predicate: always `False`
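Mirroring the `ret_true` example above (a small sketch, not from the original docs):

assert not ret_false(1,2,3)
assert not ret_false(False)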
source
> stop (e=<class 'StopIteration'>)

Raises exception `e` (by default `StopIteration`)
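For instance (a small sketch based on the description above):

test_fail(lambda: stop())            # raises StopIteration by default
test_fail(lambda: stop(ValueError))  # or raises any exception you pass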
source
> gen (func, seq, cond=<function ret_true>)

Like `(func(o) for o in seq if cond(func(o)))` but handles `StopIteration`
test_eq(gen(noop, Inf.count, lt(5)),
        range(5))
test_eq(gen(operator.neg, Inf.count, gt(-5)),
        [0,-1,-2,-3,-4])
test_eq(gen(lambda o:o if o<5 else stop(), Inf.count),
        range(5))

source

> chunked (it, chunk_sz=None, drop_last=False, n_chunks=None)

Return batches from iterator `it` of size `chunk_sz` (or return `n_chunks` total)

Note that you must pass either `chunk_sz` or `n_chunks`, but not both.
t = list(range(10))
test_eq(chunked(t,3), [[0,1,2], [3,4,5], [6,7,8], [9]])
test_eq(chunked(t,3,True), [[0,1,2], [3,4,5], [6,7,8], ])
t = map(lambda o:stop() if o==6 else o, Inf.count)
test_eq(chunked(t,3), [[0, 1, 2], [3, 4, 5]])
t = map(lambda o:stop() if o==7 else o, Inf.count)
test_eq(chunked(t,3), [[0, 1, 2], [3, 4, 5], [6]])
t = np.arange(10)
test_eq(chunked(t,3), [[0,1,2], [3,4,5], [6,7,8], [9]])
test_eq(chunked(t,3,True), [[0,1,2], [3,4,5], [6,7,8], ])
test_eq(chunked([], 3), [])
test_eq(chunked([], n_chunks=3), [])

source

> otherwise (x, tst, y)

`y if tst(x) else x`
test_eq(otherwise(2+1, gt(3), 4), 3)
test_eq(otherwise(2+1, gt(2), 4), 4)
These functions reduce boilerplate when setting or manipulating attributes or properties of objects.
source
> custom_dir (c, add)

Implement custom `__dir__`, adding `add` to `cls`

`custom_dir` allows you to extract the `__dict__` property of a class and append the list `add` to it.

class _T:
    def f(): pass

s = custom_dir(_T(), add=['foo', 'bar'])
assert {'foo', 'bar', 'f'}.issubset(s)
source
`AttrDict`: a `dict` subclass that also provides access to keys as attrs
d = AttrDict(a=1,b="two")
test_eq(d.a, 1)
test_eq(d['b'], 'two')
test_eq(d.get('c','nope'), 'nope')
d.b = 2
test_eq(d.b, 2)
test_eq(d['b'], 2)
d['b'] = 3
test_eq(d['b'], 3)
test_eq(d.b, 3)
assert 'a' in dir(d)

`AttrDict` will pretty print in Jupyter Notebooks:

_test_dict = {'a':1, 'b': {'c':1, 'd':2}, 'c': {'c':1, 'd':2}, 'd': {'c':1, 'd':2},
              'e': {'c':1, 'd':2}, 'f': {'c':1, 'd':2, 'e': 4, 'f':[1,2,3,4,5]}}
AttrDict(_test_dict)
{ 'a': 1,
'b': {'c': 1, 'd': 2},
'c': {'c': 1, 'd': 2},
'd': {'c': 1, 'd': 2},
'e': {'c': 1, 'd': 2},
  'f': {'c': 1, 'd': 2, 'e': 4, 'f': [1, 2, 3, 4, 5]}}

source

> AttrDictDefault (*args, default_=None, **kwargs)

`AttrDict` subclass that returns `None` for missing attrs
d = AttrDictDefault(a=1,b="two", default_='nope')
test_eq(d.a, 1)
test_eq(d['b'], 'two')
test_eq(d.c, 'nope')

source

`NS`: a `SimpleNamespace` subclass that also adds `iter` and `dict` support

This is very similar to `AttrDict`, but since it starts with `SimpleNamespace`, it has some differences in behavior. You can use it just like `SimpleNamespace`:
d = NS(**_test_dict)
d
namespace(a=1,
b={'c': 1, 'd': 2},
c={'c': 1, 'd': 2},
d={'c': 1, 'd': 2},
e={'c': 1, 'd': 2},
f={'c': 1, 'd': 2, 'e': 4, 'f': [1, 2, 3, 4, 5]})
…but you can also index it to get/set:
d['a']

1

…and iterate it:

list(d)
['a', 'b', 'c', 'd', 'e', 'f']
source
> get_annotations_ex (obj, globals=None, locals=None)

Backport of py3.10 `get_annotations` that returns globals/locals

In Python 3.10 `inspect.get_annotations` was added. However, previous versions of Python are unable to evaluate type annotations correctly if `from __future__ import annotations` is used. Furthermore, all annotations are evaluated, even if only some subset is needed. `get_annotations_ex` provides the same functionality as `inspect.get_annotations`, but works on earlier versions of Python, and returns the `globals` and `locals` needed to evaluate types.
source
> eval_type (t, glb, loc)

`eval` a type or collection of types, if needed, for annotations in py3.10+

In py3.10, or if `from __future__ import annotations` is used, `a` is a `str`:
class _T2a: pass
def func(a: _T2a): pass
ann,glb,loc = get_annotations_ex(func)
eval_type(ann['a'], glb, loc)

__main__._T2a

`|` is supported for defining `Union` types when using `eval_type`, even for Python versions prior to 3.9:
class _T2b: pass
def func(a: _T2a|_T2b): pass
ann,glb,loc = get_annotations_ex(func)
eval_type(ann['a'], glb, loc)

typing.Union[__main__._T2a, __main__._T2b]

source

> type_hints (f)

Like `typing.get_type_hints` but returns `{}` if not allowed type

Below is a list of allowed types for type hints in Python:
list(typing._allowed_types)
[function,
builtin_function_or_method,
method,
module,
wrapper_descriptor,
method-wrapper,
method_descriptor]
For example, type `function` is allowed, so `type_hints` returns the same value as `typing.get_type_hints`:
def f(a:int)->bool: ... # a function with type hints (allowed)
exp = {'a':int,'return':bool}
test_eq(type_hints(f), typing.get_type_hints(f))
test_eq(type_hints(f), exp)

However, `class` is not an allowed type, so `type_hints` returns `{}`:
class _T:
def __init__(self, a:int=0)->bool: ...
assert not type_hints(_T)

source

> annotations (o)

Annotations for `o`, or `type(o)`

This supports a wider range of situations than `type_hints`, by checking `type()` and `__init__` for annotations too:
for o in _T,_T(),_T.__init__,f: test_eq(annotations(o), exp)
assert not annotations(int)
assert not annotations(print)

source

> anno_ret (func)

Get the return annotation of `func`
def f(x) -> float: return x
test_eq(anno_ret(f), float)
def f(x) -> typing.Tuple[float,float]: return x
assert anno_ret(f)==typing.Tuple[float,float]

If your return annotation is `None`, `anno_ret` will return `NoneType` (and not `None`):
def f(x) -> None: return x
test_eq(anno_ret(f), NoneType)
assert anno_ret(f) is not None # returns NoneType instead of None

If your function does not have a return type, or if you pass in `None` instead of a function, then `anno_ret` returns `None`:
def f(x): return x
test_eq(anno_ret(f), None)
test_eq(anno_ret(None), None) # instead of passing in a func, pass in None

source

> signature_ex (obj, eval_str:bool=False)

Backport of `inspect.signature(..., eval_str=True)` to <py310
source
> union2tuple (t)
test_eq(union2tuple(Union[int,str]), (int,str))
test_eq(union2tuple(int), int)
assert union2tuple(Tuple[int,str])==Tuple[int,str]
test_eq(union2tuple((int,str)), (int,str))
if UnionType: test_eq(union2tuple(int|str), (int,str))

source

> argnames (f, frame=False)

Names of arguments to function or frame `f`

test_eq(argnames(f), ['x'])
source
> with_cast (f)
Decorator which uses any parameter annotations as preprocessing functions
@with_cast
def _f(a, b:Path, c:str='', d=0): return (a,b,c,d)
test_eq(_f(1, '.', 3), (1,Path('.'),'3',0))
test_eq(_f(1, '.'), (1,Path('.'),'',0))
@with_cast
def _g(a:int=0)->str: return a
test_eq(_g(4.0), '4')
test_eq(_g(4.4), '4')
test_eq(_g(2), '2')

source

> store_attr (names=None, but='', cast=False, store_args=None, **attrs)

Store params named in comma-separated `names` from calling context into attrs in `self`

In its most basic form, you can use `store_attr` to shorten code like this:
class T:
    def __init__(self, a,b,c): self.a,self.b,self.c = a,b,c

…to this:

class T:
    def __init__(self, a,b,c): store_attr('a,b,c', self)
This class behaves as if we’d used the first form:
t = T(1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2

In addition, it stores the attrs as a `dict` in `__stored_args__`, which you can use for display, logging, and so forth.

test_eq(t.__stored_args__, {'a':1, 'b':3, 'c':2})

Since you normally want to use the first argument (often called `self`) for storing attributes, it's optional:
class T:
def __init__(self, a,b,c:str): store_attr('a,b,c')
t = T(1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2

With `cast=True` any parameter annotations will be used as preprocessing functions for the corresponding arguments:
class T:
def __init__(self, a:listify, b, c:str): store_attr('a,b,c', cast=True)
t = T(1,c=2,b=3)
assert t.a==[1] and t.b==3 and t.c=='2'

You can inherit from a class using `store_attr`, and just call it again to add in any new attributes added in the derived class:
class T2(T):
def __init__(self, d, **kwargs):
super().__init__(**kwargs)
store_attr('d')
t = T2(d=1,a=2,b=3,c=4)
assert t.a==2 and t.b==3 and t.c==4 and t.d==1
You can skip passing a list of attrs to store. In this case, all arguments passed to the method are stored:
class T:
def __init__(self, a,b,c): store_attr()
t = T(1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2
class T4(T):
def __init__(self, d, **kwargs):
super().__init__(**kwargs)
store_attr()
t = T4(4, a=1,c=2,b=3)
assert t.a==1 and t.b==3 and t.c==2 and t.d==4
class T4:
def __init__(self, *, a: int, b: float = 1):
store_attr()
t = T4(a=3)
assert t.a==3 and t.b==1
t = T4(a=3, b=2)
assert t.a==3 and t.b==2

You can skip some attrs by passing `but`:
class T:
def __init__(self, a,b,c): store_attr(but='a')
t = T(1,c=2,b=3)
assert t.b==3 and t.c==2
assert not hasattr(t,'a')

You can also pass keywords to `store_attr`, which is identical to setting the attrs directly, but also stores them in `__stored_args__`.
class T:
def __init__(self): store_attr(a=1)
t = T()
assert t.a==1

You can also use `store_attr` inside functions.
def create_T(a, b):
t = SimpleNamespace()
store_attr(self=t)
return t
t = create_T(a=1, b=2)
assert t.a==1 and t.b==2

source

> attrdict (o, *ks, default=None)

Dict from each `k` in `ks` to `getattr(o,k)`
class T:
def __init__(self, a,b,c): store_attr()
t = T(1,c=2,b=3)
test_eq(attrdict(t,'b','c'), {'b':3, 'c':2})

source

> properties (cls, *ps)

Change attrs in `cls` with names in `ps` to properties
class T:
def a(self): return 1
def b(self): return 2
properties(T,'a')
test_eq(T().a,1)
test_eq(T().b(),2)
source
> camel2words (s, space=' ')
Convert CamelCase to ‘spaced words’
test_eq(camel2words('ClassAreCamel'), 'Class Are Camel')
source
> camel2snake (name)
Convert CamelCase to snake_case
test_eq(camel2snake('ClassAreCamel'), 'class_are_camel')
test_eq(camel2snake('Already_Snake'), 'already__snake')
source
> snake2camel (s)
Convert snake_case to CamelCase
test_eq(snake2camel('a_b_cc'), 'ABCc')

source

> class2attr (cls_name)

Return the snake-cased name of the class; strip ending `cls_name` if it exists.
class Parent:
@property
def name(self): return class2attr(self, 'Parent')
class ChildOfParent(Parent): pass
class ParentChildOf(Parent): pass
p = Parent()
cp = ChildOfParent()
cp2 = ParentChildOf()
test_eq(p.name, 'parent')
test_eq(cp.name, 'child_of')
test_eq(cp2.name, 'parent_child_of')

source

> getcallable (o, attr)

Calls `getattr` with a default of `noop`
class Math:
def addition(self,a,b): return a+b
m = Math()
test_eq(getcallable(m, "addition")(a=1,b=2), 3)
test_eq(getcallable(m, "subtraction")(a=1,b=2), None)

source

> getattrs (o, *attrs, default=None)

List of all `attrs` in `o`

from fractions import Fraction
getattrs(Fraction(1,2), 'numerator', 'denominator')

[1, 2]

source

> hasattrs (o, attrs)

Test whether `o` contains all `attrs`
assert hasattrs(1,('imag','real'))
assert not hasattrs(1,('imag','foo'))
source
> setattrs (dest, flds, src)
d = dict(a=1,bb="2",ignore=3)
o = SimpleNamespace()
setattrs(o, "a,bb", d)
test_eq(o.a, 1)
test_eq(o.bb, "2")
d = SimpleNamespace(a=1,bb="2",ignore=3)
o = SimpleNamespace()
setattrs(o, "a,bb", d)
test_eq(o.a, 1)
test_eq(o.bb, "2")

source

> try_attrs (obj, *attrs)

Return first attr that exists in `obj`
test_eq(try_attrs(1, 'real'), 1)
test_eq(try_attrs(1, 'foobar', 'real'), 1)

source

> GetAttrBase ()

Basic delegation of `__getattr__` and `__dir__`

source

> GetAttr ()

Inherit from this to have all attr accesses in `self._xtra` passed down to `self.default`

Inherit from `GetAttr` to have attr access passed down to an instance attribute. This makes it easy to create composites that don't require callers to know about their components. For a more detailed discussion of how this works as well as relevant context, we suggest reading the delegated composition section of this blog article.
You can customise the behaviour of `GetAttr` in subclasses via:

- `_default`
  - By default, this is set to `'default'`, so attr access is passed down to `self.default`
  - `_default` can be set to the name of any instance attribute that does not start with dunder `__`
- `_xtra`
  - By default, this is `None`, so all attr access is passed down
  - You can limit which attrs get passed down by setting `_xtra` to a list of attribute names

To illuminate the utility of `GetAttr`, suppose we have the following two classes, `_WebPage` and `_ProductPage`, which we wish to compose like so:
class _WebPage:
def __init__(self, title, author="Jeremy"):
self.title,self.author = title,author
class _ProductPage:
def __init__(self, page, price): self.page,self.price = page,price
page = _WebPage('Soap', author="Sylvain")
p = _ProductPage(page, 15.0)

How do we make it so we can just write `p.author`, instead of `p.page.author`, to access the `author` attribute? We can use `GetAttr`, of course! First, we subclass `GetAttr` when defining `_ProductPage`. Next, we set `self.default` to the object whose attributes we want to be able to access directly, which in this case is the `page` argument passed on initialization:
class _ProductPage(GetAttr):
def __init__(self, page, price): self.default,self.price = page,price #self.default allows you to access page directly.
p = _ProductPage(page, 15.0)

Now, we can access the `author` attribute directly from the instance:
test_eq(p.author, 'Sylvain')

If you wish to store the object you are composing in an attribute other than `self.default`, you can set the class attribute `_default` as shown below. This is useful in the case where you might have a name collision with `self.default`:
class _C(GetAttr):
_default = '_data' # use different component name; `self._data` rather than `self.default`
def __init__(self,a): self._data = a
def foo(self): noop
t = _C('Hi')
test_eq(t._data, 'Hi')
test_fail(lambda: t.default) # we no longer have self.default
test_eq(t.lower(), 'hi')
test_eq(t.upper(), 'HI')
assert 'lower' in dir(t)
assert 'upper' in dir(t)

By default, all attributes and methods of the object you are composing are retained. In the below example, we compose a `str` object with the class `_C`. This allows us to directly call string methods on instances of class `_C`, such as `str.lower()` or `str.upper()`:
class _C(GetAttr):
# allow all attributes and methods to get passed to `self.default` (by leaving _xtra=None)
def __init__(self,a): self.default = a
def foo(self): noop
t = _C('Hi')
test_eq(t.lower(), 'hi')
test_eq(t.upper(), 'HI')
assert 'lower' in dir(t)
assert 'upper' in dir(t)

However, you can choose which attributes or methods to retain by defining a class attribute `_xtra`, which is a list of allowed attribute and method names to delegate. In the below example, we only delegate the `lower` method from the composed `str` object when defining class `_C`:
class _C(GetAttr):
_xtra = ['lower'] # specify which attributes get passed to `self.default`
def __init__(self,a): self.default = a
def foo(self): noop
t = _C('Hi')
test_eq(t.default, 'Hi')
test_eq(t.lower(), 'hi')
test_fail(lambda: t.upper()) # upper wasn't in _xtra, so it isn't available to be called
assert 'lower' in dir(t)
assert 'upper' not in dir(t)

You must be careful to properly set an instance attribute in `__init__` that corresponds to the class attribute `_default`. The below example sets the class attribute `_default` to `data`, but erroneously fails to define `self.data` (and instead defines `self.default`).

Failing to properly set instance attributes leads to errors when you try to access methods directly:
class _C(GetAttr):
_default = 'data' # use a bad component name; i.e. self.data does not exist
def __init__(self,a): self.default = a
def foo(self): noop
# TODO: should we raise an error when we create a new instance ...
t = _C('Hi')
test_eq(t.default, 'Hi')
# ... or is it enough for all GetAttr features to raise errors
test_fail(lambda: t.data)
test_fail(lambda: t.lower())
test_fail(lambda: t.upper())
test_fail(lambda: dir(t))

source

> delegate_attr (k, to)

Use in `__getattr__` to delegate to attr `to` without inheriting from `GetAttr`

`delegate_attr` is a functional way to delegate attributes, and is an alternative to `GetAttr`. We recommend reading the documentation of `GetAttr` for more details around delegation.

You can achieve delegation when you define `__getattr__` by using `delegate_attr`:
class _C:
def __init__(self, o): self.o = o # self.o corresponds to the `to` argument in delegate_attr.
def __getattr__(self, k): return delegate_attr(self, k, to='o')
t = _C('HELLO') # delegates to a string
test_eq(t.lower(), 'hello')
t = _C(np.array([5,4,3])) # delegates to a numpy array
test_eq(t.sum(), 12)
t = _C(pd.DataFrame({'a': [1,2], 'b': [3,4]})) # delegates to a pandas.DataFrame
test_eq(t.b.max(), 4)

`ShowPrint` is a base class that defines a `show` method, which is used primarily for callbacks in fastai that expect this method to be defined.

`Int`, `Float`, and `Str` extend `int`, `float` and `str` respectively by adding an additional `show` method by inheriting from `ShowPrint`.

The definition of `Int` is essentially just that mixin; a minimal sketch of what it looks like is shown below (the actual source may differ slightly):
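class ShowPrint:
    "Base class that defines `show`, which simply prints"
    def show(self, *args, **kwargs): print(self)

class Int(int, ShowPrint):
    "An extended `int`, with a `show` method"
    pass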
Examples:
Int(0).show()
Float(2.0).show()
Str('Hello').show()
0
2.0
Hello
Functions that manipulate popular python collections.
source
> partition (coll, f)
Partition a collection by a predicate
ts,fs = partition(range(10), mod(2))
test_eq(fs, [0,2,4,6,8])
test_eq(ts, [1,3,5,7,9])
source
> flatten (o)
Concatenate all collections and items as a generator
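Since `concat` below is the list version of this, a hedged sketch of usage (mirroring the `concat` examples; not from the original docs):

test_eq(list(flatten([(o for o in range(2)), [2,3,4], 5])), [0,1,2,3,4,5])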
source
> concat (colls)
Concatenate all collections and items as a list
concat([(o for o in range(2)),[2,3,4], 5])

[0, 1, 2, 3, 4, 5]

concat([["abc", "xyz"], ["foo", "bar"]])

['abc', 'xyz', 'foo', 'bar']

source

> strcat (its, sep:str='')

Concatenate stringified items `its`
test_eq(strcat(['a',2]), 'a2')
test_eq(strcat(['a',2], ';'), 'a;2')

source

> detuplify (x)

If `x` is a tuple with one thing, extract it
test_eq(detuplify(()),None)
test_eq(detuplify([1]),1)
test_eq(detuplify([1,2]), [1,2])
test_eq(detuplify(np.array([[1,2]])), np.array([[1,2]]))

source

> replicate (item, match)

Create tuple of `item` copied `len(match)` times
t = [1,1]
test_eq(replicate([1,2], t),([1,2],[1,2]))
test_eq(replicate(1, t),(1,1))

source

> setify (o)

Turn any list-like object into a set.
# test
test_eq(setify(None),set())
test_eq(setify('abc'),{'abc'})
test_eq(setify([1,2,2]),{1,2})
test_eq(setify(range(0,3)),{0,1,2})
test_eq(setify({1,2}),{1,2})

source

> merge (*ds)

Merge all dictionaries in `ds`
test_eq(merge(), {})
test_eq(merge(dict(a=1,b=2)), dict(a=1,b=2))
test_eq(merge(dict(a=1,b=2), dict(b=3,c=4), None), dict(a=1, b=3, c=4))

source

> range_of (x)

All indices of collection `x` (i.e. `list(range(len(x)))`)
test_eq(range_of([1,1,1,1]), [0,1,2,3])

source

> groupby (x, key, val=<function noop>)

Like `itertools.groupby` but doesn't need to be sorted, and isn't lazy, plus some extensions
test_eq(groupby('aa ab bb'.split(), itemgetter(0)), {'a':['aa','ab'], 'b':['bb']})

Here's an example of how to invert a grouping, using an `int` as `key` (which uses `itemgetter`; passing a `str` will use `attrgetter`), and using a `val` function:
d = {0: [1, 3, 7], 2: [3], 3: [5], 4: [8], 5: [4], 7: [5]}
groupby(((o,k) for k,v in d.items() for o in v), 0, 1)

{1: [0], 3: [0, 2], 7: [0], 5: [3, 7], 8: [4], 4: [5]}

source

> last_index (x, o)

Finds the last index of occurrence of `x` in `o` (returns -1 if no occurrence)
test_eq(last_index(9, [1, 2, 9, 3, 4, 9, 10]), 5)
test_eq(last_index(6, [1, 2, 9, 3, 4, 9, 10]), -1)

source

> filter_dict (d, func)

Filter a `dict` using `func`, applied to keys and values

letters = {o:chr(o) for o in range(65,73)}
letters
{65: 'A', 66: 'B', 67: 'C', 68: 'D', 69: 'E', 70: 'F', 71: 'G', 72: 'H'}
filter_dict(letters, lambda k,v: k<67 or v in 'FG')

{65: 'A', 66: 'B', 70: 'F', 71: 'G'}

source

> filter_keys (d, func)

Filter a `dict` using `func`, applied to keys
filter_keys(letters, lt(67))

{65: 'A', 66: 'B'}

source

> filter_values (d, func)

Filter a `dict` using `func`, applied to values
filter_values(letters, in_('FG'))

{70: 'F', 71: 'G'}

source

> cycle (o)

Like `itertools.cycle` except creates list of `None`s if `o` is empty
test_eq(itertools.islice(cycle([1,2,3]),5), [1,2,3,1,2])
test_eq(itertools.islice(cycle([]),3), [None]*3)
test_eq(itertools.islice(cycle(None),3), [None]*3)
test_eq(itertools.islice(cycle(1),3), [1,1,1])

source

> zip_cycle (x, *args)

Like `itertools.zip_longest` but `cycle`s through elements of all but first argument
test_eq(zip_cycle([1,2,3,4],list('abc')), [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'a')])

source

> sorted_ex (iterable, key=None, reverse=False)

Like `sorted`, but if key is str use `attrgetter`; if int use `itemgetter`
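For example (a small sketch based on the description above; not from the original docs):

test_eq(sorted_ex(['abc','d','ab'], key=len), ['d','ab','abc'])   # callable keys work like `sorted`
test_eq(sorted_ex([(1,'b'),(0,'a')], key=1), [(0,'a'),(1,'b')])   # an int key uses `itemgetter`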
source
> not_ (f)

Create new function that negates result of `f`
def f(a): return a>0
test_eq(f(1),True)
test_eq(not_(f)(1),False)
test_eq(not_(f)(a=-1),True)

source

> argwhere (iterable, f, negate=False, **kwargs)

Like `filter_ex`, but return indices for matching items
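A hedged sketch of usage based on the description above (not from the original docs):

test_eq(argwhere([9,0,7,0], bool), [0,2])               # indices of truthy items
test_eq(argwhere([9,0,7,0], bool, negate=True), [1,3])  # indices of falsy items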
source
> filter_ex (iterable, f=<function noop>, negate=False, gen=False, **kwargs)

Like `filter`, but passing `kwargs` to `f`, defaulting `f` to `noop`, and adding `negate` and `gen`
> range_of (a, b=None, step=None)

All indices of collection `a`, if `a` is a collection, otherwise `range`
test_eq(range_of([1,1,1,1]), [0,1,2,3])
test_eq(range_of(4), [0,1,2,3])

source

> renumerate (iterable, start=0)

Same as `enumerate`, but returns index as 2nd element instead of 1st
test_eq(renumerate('abc'), (('a',0),('b',1),('c',2)))

source

> first (x, f=None, negate=False, **kwargs)

First element of `x`, optionally filtered by `f`, or None if missing
test_eq(first(['a', 'b', 'c', 'd', 'e']), 'a')
test_eq(first([False]), False)
test_eq(first([False], noop), None)

source

> only (o)

Return the only item of `o`, raise if `o` doesn't have exactly one item
source
> nested_attr (o, attr, default=None)

Same as `getattr`, but if `attr` includes a `.`, then looks inside nested objects
a = SimpleNamespace(b=(SimpleNamespace(c=1)))
test_eq(nested_attr(a, 'b.c'), getattr(getattr(a, 'b'), 'c'))
test_eq(nested_attr(a, 'b.d'), None)

source

> nested_setdefault (o, attr, default)

Same as `setdefault`, but if `attr` includes a `.`, then looks inside nested objects
source
> nested_callable (o, attr)

Same as `nested_attr` but if not found will return `noop`
a = SimpleNamespace(b=(SimpleNamespace(c=1)))
test_eq(nested_callable(a, 'b.c'), getattr(getattr(a, 'b'), 'c'))
test_eq(nested_callable(a, 'b.d'), noop)

source

> nested_idx (coll, *idxs)

Index into nested collections, dicts, etc, with `idxs`
a = {'b':[1,{'c':2}]}
test_eq(nested_idx(a, 'nope'), None)
test_eq(nested_idx(a, 'nope', 'nup'), None)
test_eq(nested_idx(a, 'b', 3), None)
test_eq(nested_idx(a), a)
test_eq(nested_idx(a, 'b'), [1,{'c':2}])
test_eq(nested_idx(a, 'b', 1), {'c':2})
test_eq(nested_idx(a, 'b', 1, 'c'), 2)
a = SimpleNamespace(b=[1,{'c':2}])
test_eq(nested_idx(a, 'nope'), None)
test_eq(nested_idx(a, 'nope', 'nup'), None)
test_eq(nested_idx(a, 'b', 3), None)
test_eq(nested_idx(a), a)
test_eq(nested_idx(a, 'b'), [1,{'c':2}])
test_eq(nested_idx(a, 'b', 1), {'c':2})
test_eq(nested_idx(a, 'b', 1, 'c'), 2)

source

> set_nested_idx (coll, value, *idxs)

Set value indexed like `nested_idx`
set_nested_idx(a, 3, 'b', 0)
test_eq(nested_idx(a, 'b', 0), 3)

source

> val2idx (x)

Dict from value to index
test_eq(val2idx([1,2,3]), {3:2,1:0,2:1})

source

> uniqueify (x, sort=False, bidir=False, start=None)

Unique elements in `x`, optional `sort`, optional return reverse correspondence, optional prepend with elements.
t = [1,1,0,5,0,3]
test_eq(uniqueify(t),[1,0,5,3])
test_eq(uniqueify(t, sort=True),[0,1,3,5])
test_eq(uniqueify(t, start=[7,8,6]), [7,8,6,1,0,5,3])
v,o = uniqueify(t, bidir=True)
test_eq(v,[1,0,5,3])
test_eq(o,{1:0, 0: 1, 5: 2, 3: 3})
v,o = uniqueify(t, sort=True, bidir=True)
test_eq(v,[0,1,3,5])
test_eq(o,{0:0, 1: 1, 3: 2, 5: 3})
source
> loop_first_last (values)
Iterate and generate a tuple with a flag for first and last value.
test_eq(loop_first_last(range(3)), [(True,False,0), (False,False,1), (False,True,2)])
source
> loop_first (values)
Iterate and generate a tuple with a flag for first value.
test_eq(loop_first(range(3)), [(True,0), (False,1), (False,2)])
source
> loop_last (values)
Iterate and generate a tuple with a flag for last value.
test_eq(loop_last(range(3)), [(False,0), (False,1), (True,2)])

source

> first_match (lst, f, default=None)

First index of an element of `lst` matching predicate `f`, or `default` if none
a = [0,2,4,5,6,7,10]
test_eq(first_match(a, lambda o:o%2), 3)

source

> last_match (lst, f, default=None)

Last index of an element of `lst` matching predicate `f`, or `default` if none
test_eq(last_match(a, lambda o:o%2), 5)

A tuple with extended functionality.

source

> fastuple (x=None, *rest)

A `tuple` with elementwise ops and more friendly init behavior
Common failure modes when trying to initialize a tuple in python:
tuple(3)
> TypeError: 'int' object is not iterable

or

tuple(3, 4)
> TypeError: tuple expected at most 1 arguments, got 2

However, `fastuple` allows you to define tuples like this and in the usual way:
test_eq(fastuple(3), (3,))
test_eq(fastuple(3,4), (3, 4))
test_eq(fastuple((3,4)), (3, 4))

source

> fastuple.add (*args)

`+` is already defined in `tuple` for concat, so use `add` instead
test_eq(fastuple.add((1,1),(2,2)), (3,3))
test_eq_type(fastuple(1,1).add(2), fastuple(3,3))
test_eq(fastuple('1','2').add('2'), fastuple('12','22'))

source

> fastuple.mul (*args)

`*` is already defined in `tuple` for replicating, so use `mul` instead
test_eq_type(fastuple(1,1).mul(2), fastuple(2,2))

Additionally, the following elementwise operations are available:

- `le`: less than or equal
- `eq`: equal
- `gt`: greater than
- `min`: minimum of
test_eq(fastuple(3,1).le(1), (False, True))
test_eq(fastuple(3,1).eq(1), (False, True))
test_eq(fastuple(3,1).gt(1), (True, False))
test_eq(fastuple(3,1).min(2), (2,1))

You can also do other elementwise operations like negate a `fastuple`, or subtract two `fastuple`s:
test_eq(-fastuple(1,2), (-1,-2))
test_eq(~fastuple(1,0,1), (False,True,False))
test_eq(fastuple(1,1)-fastuple(2,2), (-1,-1))
test_eq(type(fastuple(1)), fastuple)
test_eq_type(fastuple(1,2), fastuple(1,2))
test_ne(fastuple(1,2), fastuple(1,3))
test_eq(fastuple(), ())

Utilities for functional programming or for defining, modifying, or debugging functions.

source

> bind (func, *pargs, **pkwargs)

Same as `partial`, except you can use `arg0` `arg1` etc param placeholders
`bind` is the same as `partial`, but also allows you to reorder positional arguments using variable name(s) `arg{i}`, where i refers to the zero-indexed positional argument. `bind` as implemented currently only supports reordering of up to the first 5 positional arguments.

Consider the function `myfn` below, which has 3 positional arguments. These arguments can be referenced as `arg0`, `arg1`, and `arg2`, respectively.

def myfn(a,b,c,d=1,e=2): return(a,b,c,d,e)
In the below example we bind the positional arguments of `myfn` as follows:

- The second input, `14`, referenced by `arg1`, is substituted for the first positional argument.
- We supply a default value of `17` for the second positional argument.
- The first input, `19`, referenced by `arg0`, is substituted for the third positional argument.

test_eq(bind(myfn, arg1, 17, arg0, e=3)(19,14), (14,17,19,1,3))
In this next example:

- We set the default value to `17` for the first positional argument.
- The first input, `19`, referenced by `arg0`, becomes the second positional argument.
- The second input, `14`, becomes the third positional argument.
- We override the default value for the named argument `e` to `3`.

test_eq(bind(myfn, 17, arg0, e=3)(19,14), (17,19,14,1,3))
This is an example of using `bind` like `partial`, without reordering any arguments:

test_eq(bind(myfn)(17,19,14), (17,19,14,1,2))
`bind` can also be used to change default values. In the below example, we use the first input `3` to override the default value of the named argument `e`, and supply default values for the first three positional arguments:

test_eq(bind(myfn, 17,19,14,e=arg0)(3), (17,19,14,1,3))

source

> mapt (func, *iterables)

Tuplified `map`
t = [0,1,2,3]
test_eq(mapt(operator.neg, t), (0,-1,-2,-3))

source

> map_ex (iterable, f, *args, gen=False, **kwargs)

Like `map`, but use `bind`, and supports `str` and indexing
test_eq(map_ex(t,operator.neg), [0,-1,-2,-3])

If `f` is a string then it is treated as a format string to create the mapping:
test_eq(map_ex(t, '#{}#'), ['#0#','#1#','#2#','#3#'])

If `f` is a dictionary (or anything supporting `__getitem__`) then it is indexed to create the mapping:
test_eq(map_ex(t, list('abcd')), list('abcd'))

You can also pass the same `arg` params that `bind` accepts:
def f(a=None,b=None): return b
test_eq(map_ex(t, f, b=arg0), range(4))

source

> compose (*funcs, order=None)

Create a function that composes all functions in `funcs`, passing along remaining `*args` and `**kwargs` to all
f1 = lambda o,p=0: (o*2)+p
f2 = lambda o,p=1: (o+1)/p
test_eq(f2(f1(3)), compose(f1,f2)(3))
test_eq(f2(f1(3,p=3),p=3), compose(f1,f2)(3,p=3))
test_eq(f2(f1(3, 3), 3), compose(f1,f2)(3, 3))
f1.order = 1
test_eq(f1(f2(3)), compose(f1,f2, order="order")(3))

source

> maps (*args, retain=<function noop>)

Like `map`, except funcs are composed first
test_eq(maps([1]), [1])
test_eq(maps(operator.neg, [1,2]), [-1,-2])
test_eq(maps(operator.neg, operator.neg, [1,2]), [1,2])

source

> partialler (f, *args, order=None, **kwargs)

Like `functools.partial` but also copies over docstring
def _f(x,a=1):
"test func"
return x-a
_f.order=1
f = partialler(_f, 2)
test_eq(f.order, 1)
test_eq(f(3), -1)
f = partialler(_f, a=2, order=3)
test_eq(f.__doc__, "test func")
test_eq(f.order, 3)
test_eq(f(3), _f(3,2))

class partial0:
    "Like `partialler`, but args passed to callable are inserted at start, instead of at end"
    def __init__(self, f, *args, order=None, **kwargs):
        self.f,self.args,self.kwargs = f,args,kwargs
        self.order = ifnone(order, getattr(f,'order',None))
        self.__doc__ = f.__doc__
    def __call__(self, *args, **kwargs): return self.f(*args, *self.args, **kwargs, **self.kwargs)
f = partial0(_f, 2)
test_eq(f.order, 1)
test_eq(f(3), 1) # NB: different to `partialler` example

source

> instantiate (t)

Instantiate `t` if it's a type, otherwise do nothing
test_eq_type(instantiate(int), 0)
test_eq_type(instantiate(1), 1)

source

> using_attr (f, attr)

Construct a function which applies `f` to the argument's attribute `attr`
t = Path('/a/b.txt')
f = using_attr(str.upper, 'name')
test_eq(f(t), 'B.TXT')

A Concise Way To Create Lambdas

This is a concise way to create lambdas that are calling methods on an object (note the capitalization!)

`Self.sum()`, for instance, is a shortcut for `lambda o: o.sum()`.
f = Self.sum()
x = np.array([3.,1])
test_eq(f(x), 4.)
# This is equivalent to above
f = lambda o: o.sum()
x = np.array([3.,1])
test_eq(f(x), 4.)
f = Self.argmin()
arr = np.array([1,2,3,4,5])
test_eq(f(arr), arr.argmin())
f = Self.sum().is_integer()
x = np.array([3.,1])
test_eq(f(x), True)
f = Self.sum().real.is_integer()
x = np.array([3.,1])
test_eq(f(x), True)
f = Self.imag()
test_eq(f(3), 0)
f = Self[1]
test_eq(f(x), 1)

`Self` is also callable, which creates a function which calls any function passed to it, using the arguments passed to `Self`:
def f(a, b=3): return a+b+2
def g(a, b=3): return a*b
fg = Self(1,b=2)
list(map(fg, [f,g]))

[5, 2]

source

> copy_func (f)

Copy a non-builtin function (NB `copy.copy` does not work for this)

Sometimes it may be desirable to make a copy of a function that doesn't point to the original object. When you use Python's built in `copy.copy` or `copy.deepcopy` to copy a function, you get a reference to the original object:
import copy as cp
def foo(): pass
a = cp.copy(foo)
b = cp.deepcopy(foo)
a.someattr = 'hello' # since a and b point at the same object, updating a will update b
test_eq(b.someattr, 'hello')
assert a is foo and b is foo

However, with `copy_func`, you can retrieve a copy of a function without a reference to the original object:

c = copy_func(foo) # c is an independent object
assert c is not foo
def g(x, *, y=3): return x+y
test_eq(copy_func(g)(4), 7)

source

> patch_to (cls, as_prop=False, cls_method=False)

Decorator: add `f` to `cls`

The `@patch_to` decorator allows you to monkey patch a function into a class as a method:
class _T3(int): pass
@patch_to(_T3)
def func1(self, a): return self+a
t = _T3(1) # we initialized `t` to a type int = 1
test_eq(t.func1(2), 3) # we add 2 to `t`, so 2 + 1 = 3

You can access instance properties in the usual way via `self`:
class _T4():
def __init__(self, g): self.g = g
@patch_to(_T4)
def greet(self, x): return self.g + x
t = _T4('hello ') # this sets self.g = 'hello '
test_eq(t.greet('world'), 'hello world') # t.greet('world') will append 'world' to 'hello '

You can instead specify that the method should be a class method by setting `cls_method=True`:
class _T5(int): attr = 3 # attr is a class attribute we will access in a later method
@patch_to(_T5, cls_method=True)
def func(cls, x): return cls.attr + x # you can access class attributes in the normal way
test_eq(_T5.func(4), 7)

Additionally you can specify that the function you want to patch should be a property with `as_prop=True`:
@patch_to(_T5, as_prop=True)
def add_ten(self): return self + 10
t = _T5(4)
test_eq(t.add_ten, 14)

Instead of passing one class to the `@patch_to` decorator, you can pass multiple classes in a tuple to simultaneously patch more than one class with the same method:
class _T6(int): pass
class _T7(int): pass
@patch_to((_T6,_T7))
def func_mult(self, a): return self*a
t = _T6(2)
test_eq(t.func_mult(4), 8)
t = _T7(2)
test_eq(t.func_mult(4), 8)

source

> patch (f=None, as_prop=False, cls_method=False)

Decorator: add `f` to the first parameter's class (based on `f`'s type annotations)

`@patch` is an alternative to `@patch_to` that allows you to similarly monkey patch class(es) by using type annotations:
class _T8(int): pass
@patch
def func(self:_T8, a): return self+a
t = _T8(1) # we initialized `t` to a type int = 1
test_eq(t.func(3), 4) # we add 3 to `t`, so 3 + 1 = 4
test_eq(t.func.__qualname__, '_T8.func')

Similarly to `patch_to`, you can supply a union of classes instead of a single class in your type annotations to patch multiple classes:
class _T9(int): pass
@patch
def func2(x:_T8|_T9, a): return x*a # will patch both _T8 and _T9
t = _T8(2)
test_eq(t.func2(4), 8)
test_eq(t.func2.__qualname__, '_T8.func2')
t = _T9(2)
test_eq(t.func2(4), 8)
test_eq(t.func2.__qualname__, '_T9.func2')

Just like the `patch_to` decorator, you can use the `as_prop` and `cls_method` parameters with the `patch` decorator:
@patch(as_prop=True)
def add_ten(self:_T5): return self + 10
t = _T5(4)
test_eq(t.add_ten, 14)
class _T5(int): attr = 3 # attr is a class attribute we will access in a later method
@patch(cls_method=True)
def func(cls:_T5, x): return cls.attr + x # you can access class attributes in the normal way
test_eq(_T5.func(4), 7)

source

> patch_property (f)

Deprecated; use `patch(as_prop=True)` instead

Patching `classmethod` shouldn't affect how Python's inheritance works:
class FastParent: pass
@patch(cls_method=True)
def type_cls(cls: FastParent): return cls
class FastChild(FastParent): pass
parent = FastParent()
test_eq(parent.type_cls(), FastParent)
child = FastChild()
test_eq(child.type_cls(), FastChild)

source

> compile_re (pat)

Compile `pat` if it's not None
assert compile_re(None) is None
assert compile_re('a').match('ab')

source

> ImportEnum (value, names=None, module=None, qualname=None, type=None, start=1)

An `Enum` that can have its values imported
_T = ImportEnum('_T', {'foobar':1, 'goobar':2})
_T.imports()
test_eq(foobar, _T.foobar)
test_eq(goobar, _T.goobar)

source

> StrEnum (value, names=None, module=None, qualname=None, type=None, start=1)

An `ImportEnum` that behaves like a `str`

source

> str_enum (name, *vals)

Simplified creation of `StrEnum` types

source

> ValEnum (value, names=None, module=None, qualname=None, type=None, start=1)

An `ImportEnum` that stringifies using values
_T = str_enum('_T', 'a', 'b')
test_eq(f'{_T.a}', 'a')
test_eq(_T.a, 'a')
test_eq(list(_T.__members__), ['a','b'])
print(_T.a, _T.a.upper())
a A
source
> Stateful (*args, **kwargs)
A base class/mixin for objects that should not serialize all their state
class _T(Stateful):
def __init__(self):
super().__init__()
self.a=1
self._state['test']=2
t = _T()
t2 = pickle.loads(pickle.dumps(t))
test_eq(t.a,1)
test_eq(t._state['test'],2)
test_eq(t2.a,1)
test_eq(t2._state,{})

Override `_init_state` to do any necessary setup steps that are required during `__init__` or during deserialization (e.g. `pickle.load`). Here's an example of how `Stateful` simplifies the official Python example for Handling Stateful Objects.
class TextReader(Stateful):
"""Print and number lines in a text file."""
_stateattrs=('file',)
def __init__(self, filename):
self.filename,self.lineno = filename,0
super().__init__()
def readline(self):
self.lineno += 1
line = self.file.readline()
if line: return f"{self.lineno}: {line.strip()}"
def _init_state(self):
self.file = open(self.filename)
        for _ in range(self.lineno): self.file.readline()
reader = TextReader("00_test.ipynb")
print(reader.readline())
print(reader.readline())
new_reader = pickle.loads(pickle.dumps(reader))
print(reader.readline())
1: {
2: "cells": [
3: {
source
> NotStr (s)

Behaves like a `str`, but isn't an instance of one
s = NotStr("hello")
assert not isinstance(s, str)
test_eq(s, 'hello')
test_eq(s*2, 'hellohello')
test_eq(len(s), 5)

source

`PrettyString`: little hack to get strings to show properly in Jupyter.

Allow strings with special characters to render properly in Jupyter. Without calling `print()`, strings with special characters are displayed like so:

with_special_chars='a string\nwith\nnew\nlines and\ttabs'
with_special_chars

'a string\nwith\nnew\nlines and\ttabs'

We can correct this with `PrettyString`:

PrettyString(with_special_chars)
a string
with
new
lines and tabs
source
> even_mults (start, stop, n)

Build log-stepped array from `start` to `stop` in `n` steps.

test_eq(even_mults(2,8,3), [2,4,8])
test_eq(even_mults(2,32,5), [2,4,8,16,32])
test_eq(even_mults(2,8,1), 8)
source
> num_cpus ()
Get number of cpus
num_cpus()

8

source

> add_props (f, g=None, n=2)

Create properties passing each of `range(n)` to `f`
class _T(): a,b = add_props(lambda i,x:i*2)
t = _T()
test_eq(t.a,0)
test_eq(t.b,2)
class _T():
def __init__(self, v): self.v=v
def _set(i, self, v): self.v[i] = v
a,b = add_props(lambda i,x: x.v[i], _set)
t = _T([0,2])
test_eq(t.a,0)
test_eq(t.b,2)
t.a = t.a+1
t.b = 3
test_eq(t.a,1)
test_eq(t.b,3)

source

> typed (f)

Decorator to check param and return types at runtime

`typed` validates argument types at runtime. This is in contrast to MyPy, which only offers static type checking.

For example, a `TypeError` will be raised if we try to pass a float into the first argument of the below function (which is annotated as `int`):
@typed
def discount(price:int, pct:float):
return (1-pct) * price
with ExceptionExpected(TypeError): discount(100.0, .1)

We can also optionally allow multiple types, by annotating with a union (or a tuple) of types as illustrated below:

@typed
def discount(price:int|float, pct:float):
    return (1-pct) * price

assert 90.0 == discount(100.0, .1)
@typed
def foo(a:int, b:str='a'): return a
test_eq(foo(1, '2'), 1)
with ExceptionExpected(TypeError): foo(1,2)
@typed
def foo()->str: return 1
with ExceptionExpected(TypeError): foo()
@typed
def foo()->str: return '1'
assert foo()

`typed` works with classes, too:
class Foo:
@typed
def __init__(self, a:int, b: int, c:str): pass
@typed
def test(cls, d:str): return d
with ExceptionExpected(TypeError): Foo(1, 2, 3)
with ExceptionExpected(TypeError): Foo(1,2, 'a string').test(10)

source

> exec_new (code)

Execute `code` in a new environment and return it
g = exec_new('a=1')
test_eq(g['a'], 1)

source

> exec_import (mod, sym)

Import `sym` from `mod` in a new environment
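Based on the `exec_new` example above, a hedged sketch of usage (assuming `exec_import` likewise returns the new environment):

import math
g = exec_import('math', 'pi')
test_eq(g['pi'], math.pi)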
source
> str2bool (s)

Case-insensitive conversion of string `s` to a bool (`y`, `yes`, `t`, `true`, `on`, `1` -> `True`)

True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values are 'n', 'no', 'f', 'false', 'off', and '0'. Raises `ValueError` if `s` is anything else.
for o in "y YES t True on 1".split(): assert str2bool(o)
for o in "n no FALSE off 0".split(): assert not str2bool(o)
for o in 0,None,'',False: assert not str2bool(o)
for o in 1,True: assert str2bool(o)

> ipython_shell ()

Same as `get_ipython` but returns `False` if not in IPython
> in_ipython ()
Check if code is running in some kind of IPython environment
> in_colab ()
Check if the code is running in Google Colaboratory
> in_jupyter ()
Check if the code is running in a jupyter notebook
> in_notebook ()
Check if the code is running in a jupyter notebook
These variables are available as booleans in `fastcore.basics` as `IN_IPYTHON`, `IN_JUPYTER`, `IN_COLAB` and `IN_NOTEBOOK`.

IN_IPYTHON, IN_JUPYTER, IN_COLAB, IN_NOTEBOOK

(True, True, False, True)