I hereby claim:
- I am matt-dray on github.
- I am mattdray (https://keybase.io/mattdray) on keybase.
- I have a public key ASD-cOiAVRNCc2XIcTt3U-nwNwA32uzNhPZPmHDiBDH8pAo
To claim this, I am signing this object:
library(ggplot2)   # for plotting
library(gridExtra) # for arranging plots

# Create example dataframe
df <- data.frame(
  x  = letters[1:10],
  y1 = sample(1:100, 10),
  y2 = sample(1:100, 10),
  y3 = sample(1:100, 10),
  y4 = sample(1:100, 10),
  y5 = sample(1:100, 10)
)
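A minimal sketch of how the truncated snippet might continue, assuming the aim is one plot per y column arranged in a grid with gridExtra (the geom and layout below are illustrative):

# Build one plot per y column (assumed continuation; geom choice is illustrative)
plot_list <- lapply(paste0("y", 1:5), function(col) {
  ggplot(df, aes(x = x, y = .data[[col]])) +
    geom_col()
})

# Arrange the plots in a two-column grid
gridExtra::grid.arrange(grobs = plot_list, ncol = 2)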
# Purpose: loop through plots of specified column pairs of a dataframe
# (we don't want to compare all possible permutations;
# this also avoids a nested loop over x and y)

library(tidyverse) # for tibble and ggplot2

# Generate example dataframe
test_data <- tibble::tibble(
  x1 = rnorm(100),
  x2 = rnorm(100)
)
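A sketch of the loop itself, assuming the pairs are named explicitly up front (the pair list and geom below are illustrative):

# Specify the column pairs to plot, then loop over just those pairs
pairs_to_plot <- list(c("x1", "x2"))  # add further pairs as needed

for (pair in pairs_to_plot) {
  p <- ggplot(test_data, aes(x = .data[[pair[1]]], y = .data[[pair[2]]])) +
    geom_point()
  print(p)
}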
library(tidyverse)
library(stringr)

# List files in a directory (file names in the format "x_y_z.csv")
list_files <- list.files("file/path")

for (i in seq_along(list_files)) {
  # read the data
}
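A sketch of what the loop body could look like, assuming each CSV is read in and its file name split into the "x", "y" and "z" parts with stringr (the directory path is the placeholder from the snippet above):

for (i in seq_along(list_files)) {
  # read the data
  dat <- readr::read_csv(file.path("file/path", list_files[i]))

  # split the file name (minus its extension) into its "x", "y" and "z" parts
  parts <- stringr::str_split(
    stringr::str_remove(list_files[i], "\\.csv$"),
    pattern = "_"
  )[[1]]
}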
# Basic googlesheets functions
# Jan 2018
# https://github.com/jennybc/googlesheets
# Browser sign-in required on first function execution

# 1. Load package ----
# install.packages("googlesheets")
library(googlesheets)
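A few of the package's basic functions that a snippet like this would presumably go on to use (the sheet title is a placeholder):

# 2. Basic usage ----
gs_ls()                               # list the sheets your Google account can see
sheet <- gs_title("My example sheet") # register a sheet by its title (placeholder)
dat <- gs_read(sheet)                 # read the sheet's contents into a data frame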
---
title: "Title"
subtitle: "Subtitle"
author: "Name"
date: "`r format(Sys.time(), '%d %B, %Y')`"
output:
  html_document:
    theme: cerulean
    highlight: tango
    number_sections: yes
---
# Function to automate generation of an RDS, a simple CSV and a plot using rtweet
# Matt Dray
# March 2018

# Purpose: create an RDS, a simplified CSV and a plot of tweets containing a
# search term from the rtweet::get_tweets function and save them to a folder
# with a unique, descriptive name related to the search term. Assumes you have
# an 'output' folder in your home directory to store these files. Assumes
# you've already sorted out a Twitter token as per
# http://rtweet.info/articles/auth.html
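A minimal sketch of that kind of function, using rtweet::search_tweets() for the search step (the function name, output paths and plot are illustrative, not the original gist's code):

library(rtweet)
library(readr)
library(ggplot2)

# Illustrative helper: search for a term, then save an RDS, a simplified CSV
# and a tweet-volume plot to an 'output' folder in the home directory
save_tweet_outputs <- function(search_term, n = 1000) {
  tweets <- rtweet::search_tweets(q = search_term, n = n, include_rts = FALSE)

  # build a file-name stem from the search term, e.g. "~/output/my_term"
  stem <- file.path("~", "output", gsub("\\W+", "_", search_term))

  saveRDS(tweets, paste0(stem, ".RDS"))                                      # full data
  readr::write_csv(tweets[, c("created_at", "text")], paste0(stem, ".csv"))  # simplified
  ggsave(paste0(stem, ".png"),
         ggplot(tweets, aes(x = created_at)) + geom_histogram())             # tweet volume

  invisible(tweets)
}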
# 1. Fake dataset
df <- data.frame(id = 1:1000, value = sample(10000:50000, 1000))

# 2. Histogram object for accessing binwidths
hist_df <- hist(
  df$value,                       # column of data to be binned
  breaks = (50000 - 10000) / 500  # suggests 80 bins of width 500 from 10k to 50k
)
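Because hist() returns a list, the bins it actually used can be read straight from the object:

hist_df$breaks        # the break points hist() actually used
diff(hist_df$breaks)  # the resulting binwidths
hist_df$counts        # number of values falling in each bin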
# Approved colour palette for DfE
# Convert to hex the RGB values provided by the dept
# Matt Dray
# May 2018

# The main need is for plotting stats in publications
# Inspiration from https://github.com/ukgovdatascience/govstyle
# This may eventually be added to the dfeR package

# Function for converting from RGB to hex
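A sketch of such a conversion helper, built on grDevices::rgb() with maxColorValue = 255 (the RGB values in the example are placeholders, not the department's palette):

rgb_to_hex <- function(r, g, b) {
  grDevices::rgb(r, g, b, maxColorValue = 255)  # returns a "#RRGGBB" string
}

rgb_to_hex(16, 79, 117)  # "#104F75" (illustrative values)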
# The Rcrawler package for website crawling
# Matt Dray
# May 2018

# The need: extract hyperlinks containing a certain string
# NB this code hasn't actually been tested

# install.packages("Rcrawler")
library(Rcrawler)
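An untested sketch of one way to do this with Rcrawler's LinkExtractor(), assuming it returns a list with an InternalLinks element (the URL and search string are placeholders):

# Extract the links found on a single page
page <- LinkExtractor(url = "https://www.example.com")

# Keep only the hyperlinks that contain the string of interest
links <- unlist(page$InternalLinks)
links[grepl("search-term", links, fixed = TRUE)]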