A regular expression is a string that describes a text pattern occurring in other strings, m'kay. You can go quite far with them. The basic building blocks (illustrated in the short Python sketch after this list):
* metacharacters
* character escapes \
* anchors \A\Z or ^$
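A minimal Python sketch of these building blocks; the pattern and sample strings are illustrative examples, not from the original:

import re

# \d is a metacharacter for a digit, \. is a character escape for a literal dot,
# and \A / \Z anchor the match to the start and end of the whole string.
pattern = re.compile(r"\A\d{3}\.\d{2}\Z")

print(bool(pattern.match("025.04")))   # True: the whole string matches
print(bool(pattern.match("x025.04")))  # False: \A anchors the match to the start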
""" | |
Simple script to search a Z39.50 target using Python | |
and PyZ3950. | |
""" | |
from PyZ3950 import zoom | |
ISBNs = ['9781905017799', '9780596513986'] |
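The excerpt stops after the ISBN list; a minimal sketch of how such a search might continue, using PyZ3950's zoom-style Connection/Query interface. The target host, port, database name, and attribute number below are illustrative assumptions, not part of the original:

# Assumed target: the Library of Congress public Z39.50 server.
conn = zoom.Connection('z3950.loc.gov', 7090)
conn.databaseName = 'Voyager'
conn.preferredRecordSyntax = 'USMARC'

for isbn in ISBNs:
    # Bib-1 use attribute 7 is ISBN; the PQF query string is an assumption.
    query = zoom.Query('PQF', '@attr 1=7 %s' % isbn)
    results = conn.search(query)
    print('%s: %d record(s)' % (isbn, len(results)))

conn.close()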
#!/usr/bin/env python
"""
This is mainly a demonstration of OCLC's experimental Worldcat Live API [1]
from Python. You should be able to use this module like so:

    import worldcat_live
    for item in worldcat_live.items():
        print item["title"]
// XPath CheatSheet
// To test XPath in your Chrome Debugger: $x('/html/body')
// http://www.jittuu.com/2012/2/14/Testing-XPath-In-Chrome/
// 0. XPath Examples.
// More: http://xpath.alephzarro.com/content/cheatsheet.html
'//hr[@class="edge" and position()=1]' // every first hr of 'edge' class
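Outside the browser, the same expression can be evaluated from Python; a minimal sketch assuming lxml (not part of the original cheat sheet) and an illustrative HTML string:

from lxml import html

# Illustrative document: two sections, each with <hr> elements.
doc = html.fromstring("""
<html><body>
  <div><hr class="edge"/><hr class="edge"/></div>
  <div><hr class="plain"/><hr class="edge"/></div>
</body></html>
""")

# Every <hr> that carries the 'edge' class and is the first hr among its siblings.
first_edges = doc.xpath('//hr[@class="edge" and position()=1]')
print(len(first_edges))  # 1: only the first div's first hr qualifies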
from itertools import chain, starmap

def flatten_json_iterative_solution(dictionary):
    """Flatten a nested json file"""
    def unpack(parent_key, parent_value):
        """Unpack one level of nesting in json file"""
        # Unpack one level only!!!
        if isinstance(parent_value, dict):
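The excerpt cuts off inside unpack; a minimal sketch of how the iterative approach can be completed. The underscore key naming and numeric list indices are assumptions, not confirmed by the excerpt:

from itertools import chain, starmap

def flatten_json_iterative_sketch(dictionary):
    """Flatten a nested JSON-like dictionary, one level per pass."""
    def unpack(parent_key, parent_value):
        """Unpack one level of nesting."""
        if isinstance(parent_value, dict):
            for key, value in parent_value.items():
                yield parent_key + '_' + key, value
        elif isinstance(parent_value, list):
            for i, value in enumerate(parent_value):
                yield parent_key + '_' + str(i), value
        else:
            yield parent_key, parent_value

    # Repeat until no value is itself a dict or list.
    while True:
        dictionary = dict(chain.from_iterable(starmap(unpack, dictionary.items())))
        if not any(isinstance(v, (dict, list)) for v in dictionary.values()):
            break
    return dictionary

# Example: a nested record flattened to single-level keys.
print(flatten_json_iterative_sketch({'id': 1, 'name': {'first': 'Ada'}, 'tags': ['a', 'b']}))
# {'id': 1, 'name_first': 'Ada', 'tags_0': 'a', 'tags_1': 'b'}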
I'm a long-time fan of the graph visualization tool Gephi, and since Wikimania 2019 I have been involved with Wikidata. I was aware of the Gephi plugin "Semantic Web Importer", but when I checked it out, I only found old tutorials connecting to DBpedia, not Wikidata:
<data>
{
  for $Record in /ead
  where $Record/archdesc/scopecontent/p[contains(., 'Mrs')]
  let $id := $Record/archdesc/did/unitid[1]/text()
  let $title := $Record/archdesc/did/unittitle
  let $repo := $Record/eadheader/eadid/@mainagencycode
  let $scopeMrs := $Record/archdesc/scopecontent/p[contains(., 'Mrs')]
Each citation includes an abstract or annotation. Feel free to suggest an addition!
Davis, Robin Camille. 2015. “Git and GitHub for Librarians.” Publications and Research, January. https://academicworks.cuny.edu/jj_pubs/34.
""" Adaptation of Kelly Bolding's Terms of Aggrandizement xquery script to report on aggrandizing | |
language in archival finding aid "Biography or History" (bioghist) notes. | |
The original script uses xquery on a directory of EAD XML files, and produces an XML report. | |
This version uses Python to query the ArchivesSpace REST API, and produces a JSON report. | |
""" | |
import re | |
import json | |
from asnake.aspace import ASpace |
influential
renowned
not(able|ed)
distinguished
reputable
prestigious
prominent
significant
respected
expert
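A minimal sketch of how a term list like the one above could drive the ArchivesSpace report described in the script header. The file name, traversal, and note-handling details below are assumptions based on ASnake's documented repository/resource iteration, not the actual adaptation:

import re
import json

from asnake.aspace import ASpace

# Assumed: the term list above lives one pattern per line in a plain-text file.
with open('aggrandizement_terms.txt') as f:
    terms = [line.strip() for line in f if line.strip()]
pattern = re.compile(r'\b(?:' + '|'.join(terms) + r')\b', re.IGNORECASE)

aspace = ASpace()  # connection details come from ASnake's configuration
report = []

for repo in aspace.repositories:
    for resource in repo.resources:
        record = resource.json()
        for note in record.get('notes', []):
            if note.get('type') != 'bioghist':
                continue
            # Assumed: the note text sits in the subnotes' 'content' fields.
            text = ' '.join(sub.get('content', '') for sub in note.get('subnotes', []))
            hits = [m.group(0) for m in pattern.finditer(text)]
            if hits:
                report.append({'uri': record['uri'],
                               'title': record.get('title', ''),
                               'terms': sorted(set(h.lower() for h in hits))})

print(json.dumps(report, indent=2))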