https://www.cookdandbombd.co.uk/forums/
[
  {
    "biomarkers": [
      "1022191000000100",
      "994351000000103",
      "1006191000000106",
      "1005691000000109",
      "1001371000000100",
      "1003671000000109",
      "1010611000000107",
It is quite difficult to reliably identify:
- NHS GP surgeries
- Accident & Emergency Departments
- Minor Injury Units
in UK OpenStreetMap data.
It would be useful to be able to identify these things in order to a) expose the purpose of a site more clearly for general mapping or search purposes, and b) produce travel time/accessibility analyses.
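One practical starting point is to pull candidate features straight from Overpass and then filter them afterwards. This is a minimal sketch using the osmdata package; the Cardiff-ish bounding box and the tag choices (amenity=doctors, and amenity=hospital combined with emergency=yes) are illustrative assumptions, not a reliable classification.

# A minimal sketch, assuming the osmdata package; the bounding box and the tag
# choices are illustrative and will not classify NHS sites reliably on their own.
library(osmdata)
library(magrittr)
library(sf)

# Rough Cardiff-area bounding box (xmin, ymin, xmax, ymax) - illustrative only
bbox <- c(-3.35, 51.40, -3.05, 51.55)

# Candidate GP surgeries: commonly tagged amenity=doctors (healthcare=doctor also occurs)
gp_candidates <- opq(bbox) %>%
  add_osm_feature(key = "amenity", value = "doctors") %>%
  osmdata_sf()

# Candidate A&E sites: hospitals additionally tagged emergency=yes
ae_candidates <- opq(bbox) %>%
  add_osm_feature(key = "amenity", value = "hospital") %>%
  add_osm_feature(key = "emergency", value = "yes") %>%
  osmdata_sf()

gp_candidates$osm_points
ae_candidates$osm_polygons

Even then, the results need checking against NHS reference data (names, operators, ODS-style registries) before they can be trusted for either purpose above.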
# Cover the graph extent with H3 hexagons at resolution 8, keeping both the
# cell centroid and the cell polygon for each H3 id.
library(tidyverse)

my_h3s <- tibble(id = unlist(h3jsr::polyfill(WalesIshOTPGraph:::bounds(), 8))) %>%
  rowwise() %>%
  mutate(
    pnt = h3jsr::h3_to_point(id),
    plygn = h3jsr::h3_to_polygon(id)
  )

# Use the cell centroids as the active geometry column.
sf::st_geometry(my_h3s) <- my_h3s$pnt
library(magrittr)
library(tibble)
library(dplyr)

# OS Terrain 50 (ASCII Grid) download:
# https://api.os.uk/downloads/v1/products/Terrain50/downloads?area=GB&format=ASCII+Grid+and+GML+%28Grid%29&redirect

# Collect VRT source information for a set of raster file paths.
vrt_sources_from_paths <- function(source_paths){
  sources_tibble <- tibble()
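One lightweight way to combine the downloaded Terrain 50 tiles is to let terra build a GDAL VRT directly from the file paths. In this sketch the terrain50_dir location and the recursive .asc search are assumptions about how the download has been unzipped.

# A minimal sketch, assuming the Terrain 50 ASCII grid tiles have been unzipped
# under a hypothetical data/terrain50 directory.
library(terra)

terrain50_dir <- "data/terrain50"
asc_paths <- list.files(terrain50_dir, pattern = "\\.asc$",
                        recursive = TRUE, full.names = TRUE)

# Combine every tile into a single virtual raster that can be read as one layer.
terrain50_vrt <- terra::vrt(asc_paths)
terrain50_vrt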
# Make sure that Java 21 is installed and available
# https://adoptium.net/?variant=openjdk21
# On a Mac, try `export JAVA_HOME=$(/usr/libexec/java_home -v 21)` before running R
library(tidyverse)

# Give the JVM plenty of heap before r5r (and rJava) is loaded.
options(java.parameters = "-Xmx4G")

install.packages("r5r")
library(r5r)
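Before any routing, r5r needs a folder holding the network inputs (an OSM .pbf extract plus any GTFS feeds); setup_r5() then builds, or reuses, the routing graph there. The data/r5 path below is a hypothetical placeholder.

# A minimal sketch; "data/r5" is a hypothetical directory containing an OSM .pbf
# extract and GTFS zip files for the study area.
r5r_core <- r5r::setup_r5(data_path = "data/r5")

# ... routing calls (e.g. travel time matrices) go here ...

# Release the JVM resources once routing is finished.
r5r::stop_r5(r5r_core)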
Data has been collected from publicly available webpages that form part of the $WEBSITE. These pages contain timestamped posts made by $WEBSITE users, associated with their usernames.
I recognise that individuals' posts constitute personal data. The individual that a post relates to is often likely to be identifiable in combination with registration data held by $WEBSITE, for example.
I note that I generally do not have the means to identify individuals from the information being processed. However, it is possible that some individuals will have chosen to post information that is readily identifiable.
The purposes of processing are to better understand the use of the $WEBSITE, the nature of its content over time, and how content is produced and engaged with by $WEBSITE users.
My plan is to generate a set of LSOA-to-LSOA matrices covering:
a. A drive-time matrix (this won't take account of variation in congestion or anything like that, so nominal departure/arrival times are largely irrelevant).
b. Travel-time matrices based on a small (3-5?) set of depart-after times (sketched in the code after this list).
c. Travel-plus-waiting-time matrices based on a small set (again 3-5?) of arrive-by times.
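As a rough sketch of (b), assuming the r5r_core built above and a hypothetical lsoa_points data frame with id, lon and lat columns for the LSOA centroids, the depart-after matrices could be produced by iterating r5r::travel_time_matrix() over a small vector of departure times (the dates and times here are placeholders):

# A minimal sketch; lsoa_points (id, lon, lat per LSOA centroid) and the
# departure times are placeholders, and r5r_core comes from setup_r5().
depart_after <- as.POSIXct(
  c("2024-03-06 07:30:00", "2024-03-06 12:00:00", "2024-03-06 17:30:00"),
  tz = "Europe/London"
)

pt_matrices <- lapply(depart_after, function(dt) {
  r5r::travel_time_matrix(
    r5r_core,
    origins = lsoa_points,
    destinations = lsoa_points,
    mode = c("WALK", "TRANSIT"),
    departure_datetime = dt,
    max_trip_duration = 180
  )
})
names(pt_matrices) <- format(depart_after, "%H:%M")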
For the sake of easier explanation, I'm going to use the following LSOAs as examples:
#!/bin/sh
# Send a single command to the local server over RCON, reading the port and
# password from the server.properties file next to the server install.
server_properties_path=$(dirname "$0")"/../server/server.properties"
rcon_port=$(awk -F "=" '/rcon.port/ {print $2}' "$server_properties_path")
rcon_password=$(awk -F "=" '/rcon.password/ {print $2}' "$server_properties_path")
"$(dirname "$0")/mcrcon" -H 127.0.0.1 -P "$rcon_port" -p "$rcon_password" "$1"
{
  "description": "Find out all you need know about chickenpox, including what the symptoms are, how to treat it and when to get medical advice.",
  "author": {
    "logo": "https://www.nhs.uk/nhscwebservices/documents/logo1.jpg",
    "email": "[email protected]",
    "url": "https://www.nhs.uk",
    "name": "NHS website",
    "@type": "Organization"
  },
  "lastReviewed": ["2017-05-26T00:00:00+00:00", "2020-05-26T00:00:00+00:00"],