I hereby claim:
- I am tomsaleeba on github.
- I am tomsaleeba (https://keybase.io/tomsaleeba) on keybase.
- I have a public key ASCQLgwf5nOen0sIm5tyL5yq_ZO2slYSfrfH69tm02Y91go
To claim this, I am signing this object:
#!/usr/bin/env bash
# short demo on how to wait for multiple background jobs
echo 'start'
sleep 2 &
job1=$!
echo "do things that don't require job 1 output"
sleep 4 &
job2=$!
echo "do things that don't require job 2 output"
wait $job1
echo 'job 1 has finished'
wait $job2
echo 'job 2 has finished'
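The same pattern can also capture each job's exit status, because `wait <pid>` returns the exit code of the job it waited on. A small sketch (the short sleeps and exit codes are just for demonstration):

```shell
#!/usr/bin/env bash
# wait for each background job individually and record its exit code
(sleep 0.1; exit 0) &
job1=$!
(sleep 0.2; exit 3) &
job2=$!
wait "$job1"; job1_rc=$?
wait "$job2"; job2_rc=$?
echo "job1 rc=$job1_rc job2 rc=$job2_rc"
```

This is handy when the "jobs" are real commands and you need to fail the script if any of them failed.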
When you have a series of *.ttl files in a directory and you want to cat them all together, you need to strip out the @prefix lines and prepend a single copy of them to the output.
Use the following commands:
# run *in* the directory with the TTL files
head -n 50 -q *.ttl | grep '^@prefix' | sort -u > header
time cat *.ttl | grep -v '^@prefix' | cat header - | gzip > $(basename $(pwd)).ttl.gz
rm header
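A quick self-contained way to sanity-check the approach, using throwaway files in a temp directory (the `ex:` prefix and triples are made up for the demo):

```shell
#!/usr/bin/env bash
set -e
workdir=$(mktemp -d)
cd "$workdir"
# two tiny TTL files sharing the same @prefix line
printf '@prefix ex: <http://example.org/> .\nex:a ex:p ex:b .\n' > one.ttl
printf '@prefix ex: <http://example.org/> .\nex:c ex:p ex:d .\n' > two.ttl
# the same merge recipe as above
head -n 50 -q *.ttl | grep '^@prefix' | sort -u > header
cat *.ttl | grep -v '^@prefix' | cat header - | gzip > merged.ttl.gz
rm header
# the merged file should contain exactly one copy of the shared prefix
prefix_count=$(gzip -dc merged.ttl.gz | grep -c '^@prefix')
echo "prefix lines in merged output: $prefix_count"
```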
I learned this when trying to clear our records in AWS Neptune. I was hitting the query timeout when trying to drop an entire graph. If you don't want to/can't raise the timeout, you can drop smaller parts of the graph in each transaction.
curl -sX POST http://<cluster-prefix>.rds.amazonaws.com:8182/sparql --data-urlencode 'update=
DELETE {
  GRAPH <http://aws.amazon.com/neptune/vocab/v01/DefaultNamedGraph> { ?s ?p ?o }
}
WHERE {
  GRAPH <http://aws.amazon.com/neptune/vocab/v01/DefaultNamedGraph> {
    # the subselect bounds how much work one transaction does;
    # 10000 is an example batch size, tune it to fit your timeout
    { SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10000 }
  }
}'
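To empty the graph completely you repeat that bounded delete until a count query reports zero triples. A sketch: the endpoint and graph URI are placeholders, and 10000 is only an example batch size.

```shell
#!/usr/bin/env bash
# repeat a bounded DELETE until the named graph is empty
ENDPOINT='http://<cluster-prefix>.rds.amazonaws.com:8182/sparql'
GRAPH='http://aws.amazon.com/neptune/vocab/v01/DefaultNamedGraph'
while true; do
  # CSV output is 'c' on the first line, the count on the second
  count=$(curl -s "$ENDPOINT" -H 'Accept: text/csv' \
    --data-urlencode "query=SELECT (COUNT(*) AS ?c) WHERE { GRAPH <$GRAPH> { ?s ?p ?o } }" \
    | tail -n 1 | tr -d '\r')
  echo "triples remaining: $count"
  [ "$count" -eq 0 ] && break
  curl -s "$ENDPOINT" --data-urlencode "update=
    DELETE { GRAPH <$GRAPH> { ?s ?p ?o } }
    WHERE { GRAPH <$GRAPH> { { SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10000 } } }" \
    > /dev/null
done
```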
Assume you have a directory of files in an Angular.io project. We need to clone them all, rename them, and do a find+replace in them to work with another model name. The old model is plot and the new model is photo.

cp -r plot photo
cd photo
# rename files: plot.component.ts -> photo.component.ts, etc.
# (the find commands below are a reconstruction of the truncated original)
find . -name '*plot*' -exec bash -c 'mv "$0" "${0//plot/photo}"' {} \;
# replace occurrences inside the files
find . -type f -exec sed -i 's/plot/photo/g' {} +
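One gotcha: Angular class names are capitalised (e.g. a hypothetical PlotComponent), so the in-file replace needs to cover both cases. A throwaway demo of the sed step:

```shell
#!/usr/bin/env bash
set -e
workdir=$(mktemp -d)
cd "$workdir"
# a fake component file containing both capitalisations of the model name
printf 'export class PlotComponent {}\n// talks to the plot service\n' > photo.component.ts
# replace lowercase and capitalised forms in one pass
sed -i 's/plot/photo/g; s/Plot/Photo/g' photo.component.ts
cat photo.component.ts
```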
// direct as promise
;(function () {
  const prefix = '[direct-promise level]'
  async function direct () {
    throw new Error(`${prefix} explosion`)
  }
  direct().then(() => {
    console.log(`${prefix} success`)
  }).catch(err => {
    // the rejection from the async function lands here
    console.error(`${prefix} caught:`, err.message)
  })
})()
# this query extracts a subgraph from the selected subject (level 1) and two child levels
PREFIX aekos: <http://www.aekos.org.au/ontology/1.0.0#>
PREFIX some_dataset: <http://www.aekos.org.au/ontology/1.0.0/some_dataset#>
CONSTRUCT {
  ?s1 ?p1 ?o1 .
  ?s1 ?pv1 ?v1 .
  ?s2 ?p2 ?o2 .
  ?s2 ?pv2 ?v2 .
  ?s3 ?p3 ?o3 .
}
WHERE {
  # reconstructed WHERE clause; some_dataset:subject1 is a placeholder
  BIND (some_dataset:subject1 AS ?s1)
  ?s1 ?p1 ?o1 .
  OPTIONAL { ?s1 ?pv1 ?v1 }
  OPTIONAL { ?o1 ?p2 ?o2 . BIND (?o1 AS ?s2)
    OPTIONAL { ?s2 ?pv2 ?v2 }
    OPTIONAL { ?o2 ?p3 ?o3 . BIND (?o2 AS ?s3) } }
}
This will let you see the request and response headers for traffic going through.
We're going to run this as a reverse proxy, rather than a usual proxy, so you don't get completely flooded with traffic.
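The notes don't name the tool, but assuming it's mitmproxy, a reverse-proxy invocation looks like this (https://example.com is a placeholder for your upstream server):

```shell
# listen on localhost:8080 and forward everything to the upstream;
# request/response headers for each exchange show up in the interactive UI
mitmproxy --mode reverse:https://example.com -p 8080
```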
The proxy listens on port 8080; be careful about exposing that port to the public internet.

fish_vi_key_bindings
cat << EOF > ~/.config/fish/functions/fish_user_key_bindings.fish
function fish_user_key_bindings
  bind -M insert \e\[1\;5C nextd-or-forward-word # ctrl-right
  bind -M insert \e\[1\;5D prevd-or-backward-word # ctrl-left
end
EOF
If you have a bunch of photos to upload, and they have location information embedded in the photo, then you have a few options.
If you have observations that do not have a photo, or do have a photo but the photo does not have location information embedded in it, then the CSV based bulk upload is the best choice.
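A quick way to sort photos into the two camps is to check for embedded GPS tags first; a sketch assuming exiftool is installed:

```shell
# list photos with no embedded GPS position; observations using those
# photos will need the CSV bulk upload route instead
exiftool -q -if 'not defined $GPSLatitude' -p '$FileName' *.jpg
```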