This guide shows some of the nice features of Sublime Text 3 (ST3):
Get it from here:
library(lmtest)
data(ChickEgg)
chickegg <- as.data.frame(ChickEgg)
chickegg$Year <- 1:nrow(chickegg) + 1929
attach(chickegg)

library(forecast)
ndiffs(chicken, alpha=0.05, test=c("kpss"))
ndiffs(egg, alpha=0.05, test=c("kpss"))
qqplotpval <- function(P, filename=NULL)
{
	require(GenABEL)
	l <- estlambda(P, method="median")
	nom <- paste("lambda = ", round(l$estimate, 3), sep="")
	if(!is.null(filename))
	{
		png(filename)
	}
	estlambda(P, method="median", plot=TRUE, main=nom)
	# Close the png device if we opened one
	if(!is.null(filename))
	{
		dev.off()
	}
}
#!/bin/bash

all=${1}
subset=${2}
out=${3}

temp1=temp${RANDOM}
temp2=temp${RANDOM}

awk '{ print $1, $2 }' ${all} > ${temp1}
# Use this code to extract HM3 SNPs https://github.com/explodecomputer/data_manipulation
plink1.90 --bfile alspac_hm3_kids --keep alspac_hm3_kids_unrelated_all.grm.id --make-bed --out ~/testdata --thread-num 16
shuf -n 1000 ~/testdata.bim | cut -f 2 > ~/snplist.txt
gcta64 --bfile ~/testdata --simu-qt --simu-causal-loci ~/snplist.txt --simu-hsq 0.8 --out trait1 --thread-num 16
gcta64 --bfile ~/testdata --simu-qt --simu-causal-loci ~/snplist.txt --simu-hsq 0.3 --out trait2
#!/bin/bash

# Objective: Take data/mouse.csv and convert it to binary plink format
# Method: The data are currently in SNPs=rows and IDs=columns format, so first
# convert to tped format and then convert to binary plink format using the
# plink programme.
# See the tped format here:
# http://pngu.mgh.harvard.edu/~purcell/plink/data.shtml#tr

# 1. Export excel format to text file
#    - Just export sheet 2 to a Windows-format csv file
#    - i.e. a 'comma separated values' file, where the columns are separated by commas
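The conversion described above (SNPs-as-rows records reformatted into tped lines) can be sketched in Python. This is a sketch only: the exact column layout of mouse.csv is an assumption, illustrated with a hypothetical miniature file, and `csv_rows_to_tped` is an illustrative helper, not part of the original script.

```python
import csv
import io

def csv_rows_to_tped(rows):
    # Assumed record layout: snp id, chromosome, bp position, then one
    # genotype string per individual ("AG" etc.).
    # tped format: chr, snp id, genetic distance (0 here), bp position,
    # then two allele columns per individual ("AG" -> "A G").
    lines = []
    for snp, chrom, pos, *genos in rows:
        alleles = " ".join(" ".join(g) for g in genos)
        lines.append("%s %s 0 %s %s" % (chrom, snp, pos, alleles))
    return lines

# Hypothetical miniature of the assumed mouse.csv layout: header row,
# then one SNP per row with IDs as columns.
raw = "snp,chr,pos,id1,id2\nrs1,1,100,AA,AG\nrs2,1,200,GG,AG\n"
rows = list(csv.reader(io.StringIO(raw)))[1:]  # drop header
print("\n".join(csv_rows_to_tped(rows)))
```

The resulting tped (plus a matching tfam) could then be passed to plink with --tped/--tfam and --make-bed, as the comments describe.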
The ALSPAC imputation to the UK10K + 1000 Genomes reference panels is underway. Each chromosome is split into approximately 5Mb chunks, and each 5Mb chunk is split into subsets of 700 individuals. For example, chromosome 22 is split into 8 chunks, each with 26 subsets, so there are 8 x 26 = 208 jobs to run to complete chromosome 22.
On Bluecrystal3 you are only allowed to submit 100 jobs per batch, so to complete chromosome 22 you need to submit three batches of 100, 100 and 8 jobs.
All the data and scripts are ready and waiting to be submitted. They are in a shared location to which everyone should have write access, so all that is needed is to choose a chromosome and submit the submission scripts.
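The batching arithmetic above can be sketched as a small helper. This is just an illustration of the rule stated in the text (`batches` is a hypothetical name, and the 100-job limit is the Bluecrystal3 constraint mentioned above):

```python
def batches(n_jobs, max_per_batch=100):
    # Split n_jobs into the batch sizes needed when at most
    # max_per_batch jobs can be submitted in one batch.
    full, rem = divmod(n_jobs, max_per_batch)
    sizes = [max_per_batch] * full
    if rem:
        sizes.append(rem)
    return sizes

# Chromosome 22: 8 chunks x 26 subsets = 208 jobs
print(batches(8 * 26))  # -> [100, 100, 8]
```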
test_association(cpg_fixed3, batch)
test_association(cpg_fixed3, slide)
test_association(cpg_fixed3, real_y)
test_association(cpg_fixed3, confounder)
#!/bin/bash

cd /panfs/panasas01/shared/alspac/studies/latest/alspac/genetic/variants/arrays/gwas/imputed/1000genomes/released/2015-06-25/data/quality
rm -f ~/out.txt
for i in {1..23}
do
	echo ${i}
	awk -v chr=$i '{if($1 == chr) print $2}' 1000GP_16-06-14_flipped_SNPs.txt > ~/temp
	ii=`printf %02d $i`
	zfgrep -wf ~/temp data_chr${ii}.txt.gz | awk '{ print $2 }' >> ~/out.txt
done
import sys
import urllib2
from bs4 import BeautifulSoup, NavigableString, Tag

def get_headfirstbristol():
	web_page = urllib2.urlopen("http://www.headfirstbristol.co.uk/gig-listings").read()
	# Specify a parser explicitly to avoid bs4's "no parser specified" warning
	soup = BeautifulSoup(web_page, "html.parser")
	band = []
	place = []