Asking ChatGPT3
Write an apology from Wizards of the Coast and Hasbro for announcing a change to the open gaming license that angered many D&D players and other game designers
Dear D&D players and game designers,
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ontolex: <http://www.w3.org/ns/lemon/ontolex#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix wikibase: <http://wikiba.se/ontology#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix schema: <http://schema.org/> .
@prefix cc: <http://creativecommons.org/ns#> .
{
  "props": {
    "pageProps": {
      "data": [
        {
          "entity": {
            "id": "E39PBJh8RBvdBvpxqFm6CFptrq",
            "labels": {
              "ca": {
                "language": "ca",
import json, csv

with open('all_tweets.csv', 'w', newline='') as CSVfile:
    fieldnames = ["id", "tweet", "created_at"]
    writer = csv.DictWriter(CSVfile, fieldnames=fieldnames)
    writer.writeheader()
    with open("../tweet.js", "r") as myJSON:
        recorddata = json.loads(myJSON.read())
    for item in recorddata:
        # wrap the ID in quotes so spreadsheet apps don't mangle the long number
        the_ID = '"' + item["id"] + '"'
        writer.writerow({"id": the_ID, "tweet": item["tweet"], "created_at": item["created_at"]})
Make sure you've got these:
from lxml import etree
import csv
The snippets below all relate to parsing XPaths with etree and writing the results out with csv.
A useful function tests how many nodes match a single XPath: e.g., if you have three dc:creator fields at positions 0, 1, and 2, it will return 3. This is super helpful if you need to set up a condition for processing one element vs. multiples.
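A minimal sketch of such a counting helper, assuming lxml's etree (the function name and the sample record markup are invented for illustration):

```python
from lxml import etree

def count_nodes(tree, xpath, namespaces=None):
    """Return how many nodes match the given XPath expression."""
    return len(tree.xpath(xpath, namespaces=namespaces))

# Example: a toy record with three dc:creator fields
ns = {"dc": "http://purl.org/dc/elements/1.1/"}
tree = etree.fromstring(
    b'<record xmlns:dc="http://purl.org/dc/elements/1.1/">'
    b'<dc:creator>A</dc:creator><dc:creator>B</dc:creator>'
    b'<dc:creator>C</dc:creator></record>'
)
count_nodes(tree, "//dc:creator", ns)  # returns 3
```

Branching on the count (1 vs. many) is then a plain `if` on the returned integer.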
curl http://id.loc.gov/authorities/names/nb2015017302.madsrdf.rdf -o nb2015017302.rdf
The difference here is you have to know what kind of authority you're querying. So you'll need to do names, subjects, etc. separately, whereas for the lccn query, it's straight-up lccn.
Working from a simple list:
names: for lccn in nb2015017302 n89223874; do curl http://id.loc.gov/authorities/names/$lccn.madsrdf.rdf -o $lccn.rdf; sleep 3; done
subjects: for lccn in sh2016002835 sh86003754; do curl http://id.loc.gov/authorities/subjects/$lccn.madsrdf.rdf -o $lccn.rdf; sleep 3; done
curl https://lccn.loc.gov/sh2018002914/marcxml -o sh2018002914.xml
Working from a simple list:
for lccn in nb2015017302 sh2018002914 n89223874; do curl https://lccn.loc.gov/$lccn/marcxml -o $lccn.xml; sleep 1; done
Separate the lccns with a single space. Added a second of sleep to avoid getting shut out.
If it gets longer, consider making a full bash script, as below:
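A minimal sketch of what such a script might look like, assuming the LCCNs live one per line in a file passed as the first argument (the function name and file layout are my assumptions):

```shell
#!/bin/bash
# Batch-download MARCXML records from lccn.loc.gov.
# Reads LCCNs, one per line, from the file given as the first argument.
fetch_marcxml() {
  while read -r lccn; do
    curl "https://lccn.loc.gov/$lccn/marcxml" -o "$lccn.xml"
    sleep 1   # same courtesy pause as the one-liner above
  done < "$1"
}

# usage: fetch_marcxml lccns.txt
```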
These were originally collected as research/benchmarking for Penn State's Blacklight catalog. They range from core features like the call number browse, which will enable us to see materials across 30+ libraries displayed on a single "shelf," to desired "someday" enhancements like the incorporation of knowledge cards and digital collections display.
Catalogers generally apply the most specific subject heading possible. Such a heading is built from increasingly specific parts, even though the most general part may be what helps in finding results. If someone has found a book on African American quiltmakers, for example, and the subject is African American quiltmakers—Michigan—Biography, how can we make it easy for them to search only for African American quiltmakers?
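One possible approach, sketched in Python, is to index the heading's base term separately by splitting off the subdivisions (the function name and the handling of both `--` and em dash separators are my assumptions; both forms turn up in MARC exports):

```python
import re

def base_heading(heading):
    """Return the most general part of a subdivided LCSH heading."""
    # Subdivisions are joined with "--" or an em dash, depending on the export.
    return re.split(r'--|—', heading)[0].strip()

base_heading("African American quiltmakers—Michigan—Biography")
# → "African American quiltmakers"
```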
Ruler of the Night, Guarantor of the day ...
This day — a gift from you.
This day — like none other you have ever given, or we have ever received.
This Wednesday dazzles us with gift and newness and possibility.
This Wednesday burdens us with the tasks of the day, for we are already halfway home
halfway back to committees and memos,
halfway back to calls and appointments,
halfway on to next Sunday,