library(tidyverse)
library(sf)
library(tictoc)
library(future)
library(units)
# library(remotes)
# install_github("yonghah/esri2sf")
library("esri2sf")
###################################################################
library(tidyverse)
library(sf)
# quick aside... testing out sql querying in R - never done this before
# works well
# library(sqldf)
# VARB <- read_csv("C:\\Users\\...\\VictimisationsAgeROVBoundary.csv")
# slice <- sqldf("select * from VARB limit 10")
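The same trick - running SQL over an in-memory table - works in Python with the standard library's sqlite3 module. A minimal sketch, using made-up sample rows in place of the victimisations CSV:

```python
import sqlite3

# Hypothetical sample rows standing in for the victimisations CSV
rows = [
    {"year": 2023, "station": "Wellington Central", "victimisations": 120},
    {"year": 2023, "station": "Auckland Central", "victimisations": 340},
]

# Load the rows into an in-memory SQLite table, then query it with SQL -
# the same idea as sqldf("select * from VARB limit 10") in R
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE varb (year INTEGER, station TEXT, victimisations INTEGER)")
con.executemany("INSERT INTO varb VALUES (:year, :station, :victimisations)", rows)
sample = con.execute("SELECT * FROM varb LIMIT 10").fetchall()
print(sample)
```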
################################################################### |
import arcpy

# Input and output feature classes
in_fc = r"C:\Users\BhodanSzymanilk\wd\SimplifiedVictimisationByStationAtTAGrain\SimplifiedVictimisationByStationAtTAGrain.gdb\census2023TALBpopulations"
output_fc = r"C:\Users\BhodanSzymanilk\wd\SimplifiedVictimisationByStationAtTAGrain\SimplifiedVictimisationByStationAtTAGrain.gdb\Census2023TApopulations"

# Create a new field that uniquely identifies each TA - we grab the first 3 digits from TALB2023_V - check if it exists first.
# All the Auckland local boards are identified with an initial '076' followed by digits specific to the local board.
TAID = "TAID"
for fieldname in [field.name for field in arcpy.ListFields(in_fc)]:
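The TA-id derivation described above - take the first three characters of the TALB2023_V code, with '076' marking Auckland local boards - can be sketched in plain Python (the sample codes below are made up for illustration; real values come from the census2023TALBpopulations layer):

```python
# Derive the TA id from a TALB2023_V code by taking the first three characters;
# all Auckland local-board codes start with '076'
def ta_id(talb_code):
    return str(talb_code)[:3]

# Hypothetical sample codes
codes = ["07601", "07612", "04400"]
ta_ids = [ta_id(c) for c in codes]
print(ta_ids)  # ['076', '076', '044']

# Which rows are Auckland local boards?
is_auckland = [t == "076" for t in ta_ids]
```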
###################################################################
When trying to find the locations of CCTV cameras from public maps on ArcGIS Online, use this approach: https://gis.stackexchange.com/questions/394291/how-to-scrape-extract-data-from-esri-arcgis-from-website
e.g. for https://wcc.maps.arcgis.com/apps/webappviewer/index.html?id=17900f3db77548c689911c8180de1eb6 you can see the request:
GET https://services1.arcgis.com/CPYspmTk3abe6d7i/arcgis/rest/services/CCTV_City_Safety_Camera_Locations_(View_layer)/FeatureServer/0/query?f=pbf&where=1%3D1&returnGeometry=true&spatialRel=esriSpatialRelIntersects&outFields=*&maxRecordCountFactor=4&outSR=102100&resultOffset=0&resultRecordCount=8000&cacheHint=true&quantizationParameters=%7B%22mode%22%3A%22edit%22%7D
The ESRI default is to return protobuf (f=pbf).
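That query can be rebuilt in Python; asking for f=geojson sidesteps the protobuf default so no pbf decoder is needed. The endpoint URL is the one from the GET above; the parameter set below is a simplified sketch, not the full set the web app sends:

```python
from urllib.parse import urlencode

# Feature-server endpoint from the GET request above
base = ("https://services1.arcgis.com/CPYspmTk3abe6d7i/arcgis/rest/services/"
        "CCTV_City_Safety_Camera_Locations_(View_layer)/FeatureServer/0/query")

# Simplified parameter set: ask for GeoJSON instead of the protobuf default
params = {
    "f": "geojson",           # response format - avoids needing a pbf decoder
    "where": "1=1",           # no attribute filter
    "outFields": "*",         # all attributes
    "returnGeometry": "true",
    "outSR": 4326,            # lat/lon rather than web mercator (102100)
}
url = base + "?" + urlencode(params)
print(url)

# To actually fetch (requires network access):
# import urllib.request, json
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
```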
###################################################################
Before I forget...
Starting with NZ Police Station boundaries: clipping to the NZ polygon coastline, and using an extract from policedata.nz to get victimisation data.
The victimisation data is year, police station name, and summed victimisations.
To get the one-to-many join to work, for some reason I had to copy the imported CSV victimisation data over to the geodatabase, and the docs said I also needed to create a unique row id - so I did that first using monotonically_increasing_id() AS vid.
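The note above uses Spark's monotonically_increasing_id() for the vid column. The same idea - one unique integer id per row - can be sketched with just the standard library, using made-up column names mirroring the victimisation extract:

```python
import csv
import io

# Hypothetical victimisation rows mirroring the year / station / sum columns
src = io.StringIO(
    "year,station,sumvictimisation\n"
    "2022,Wellington Central,120\n"
    "2023,Wellington Central,131\n"
)

# Add a unique row id (vid) - the same idea as
# SELECT monotonically_increasing_id() AS vid, * in Spark SQL
reader = csv.DictReader(src)
rows = [{"vid": i, **row} for i, row in enumerate(reader)]
print(rows[0])
```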
###################################################################
Load in the shapefile with the boundaries.
Load in the CSV file with victimisations (or some other measure) plus a boundary key, e.g. meshblock id. A little issue here: by default ArcGIS Pro will strip leading zeros on a number field and make it an int/long. You can create and apply a function in ArcGIS Pro to convert back to string ids with leading zeros, or you can use a schema.ini file to enforce a schema on the CSV import (that's what I did).
Use the Join tool (not the Join Features tool, because in this instance we're joining a feature layer to a data table) to join the data to the feature layer containing the boundaries, keyed on e.g. meshblock id.
Now here's the trick: a simple share to ArcGIS Online will fail because in-memory joins aren't supported (00226: In-memory joins are not supported - ArcGIS Pro | Documentation), so I did the option in the help doc of exporting the features out to a new feature layer, removing the join, and sharing the new feature layer to ArcGIS Online. This worked.
Click the Analyze button first.
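For the schema.ini approach: the file sits in the same folder as the CSV and declares the id column as Text so the leading zeros survive the import. A sketch - the filename, column names, and column order below are hypothetical and must match your actual CSV:

```ini
; schema.ini next to the CSV being imported (hypothetical columns)
[victimisations.csv]
ColNameHeader=True
Format=CSVDelimited
Col1=MeshblockId Text
Col2=Year Long
Col3=SumVictimisation Long
```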
###################################################################
# Create an environment to ensure the correct Python packages are loaded
# Not sure exactly what's needed - I went with the arcgis package
# Create a notebook to hold the Python code to filter/aggregate data and push it up via the ArcGIS API
# Create a pipeline to execute the notebook on a schedule with parameters, e.g. credentials
# Set the necessary security / access control on who can see the parameters
# Create a parameters cell in the notebook
pwd = "og_value"
# Set up logging to the Lakehouse filesystem so we can track what's happening, e.g.
import logging
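A minimal sketch of that logging setup. In a Fabric notebook the Lakehouse filesystem is mounted under /lakehouse/default/Files; the sketch below writes to a local logs/ directory instead so it runs anywhere, and the log messages are illustrative:

```python
import logging
import os

# e.g. "/lakehouse/default/Files/logs" when running inside a Fabric notebook;
# a local directory here so the sketch is runnable anywhere
log_dir = "logs"
os.makedirs(log_dir, exist_ok=True)
log_path = os.path.join(log_dir, "pipeline_run.log")

# Route the root logger to a file on the (Lakehouse) filesystem
logging.basicConfig(
    filename=log_path,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
    force=True,  # replace any handlers the notebook runtime already attached
)
logging.info("notebook started")
logging.info("pushed %d features to ArcGIS Online", 42)
```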
###################################################################
library(sf)
library(dplyr)

# you need the polygon dataset, not the polyline one!
nz_coastline <- st_read("/lakehouse/default/Files/lds-nz-coastlines-and-islands-polygons-topo-150k-SHP/nz-coastlines-and-islands-polygons-topo-150k.shp")
mb2013_akld <- st_read("/lakehouse/default/Files/statsnz-meshblock-2013-SHP-akld/meshblock-2013.shp")

# different CRS for the two datasets:
# nz_coastline is NZGD2000 while mb2013_akld is WGS 84
# let's reconcile both to WGS 84
###################################################################
library(tidyverse)

1:1000 |>
  as_tibble_col(column_name = "some_column") |>
  select(some_column) |>
  mutate(some_other_column = map(some_column, function(x) {
    Sys.sleep(1 / 100)
    1 / x
  }, .progress = list(
    type = "iterator",
    format = "Calculating {cli::pb_bar} {cli::pb_percent}",
    clear = TRUE
  )))