Avishek Kumar (avishekrk)

  • The University of Chicago
  • Chicago, IL
#!/usr/bin/env python
# coding: utf-8
"""
Summary Stats for Water Work Orders

Description
-----------
Python script for producing figures and tables
of summary statistics for Water Work Orders.
Figures are stored in the **./figure** directory
and tables are stored in the **./tables** directory.
"""
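The docstring above implies a simple output convention; here is a minimal sketch of how the script might persist figures and tables under that convention, assuming matplotlib figures and pandas DataFrames (the helper name save_outputs is illustrative, not from the original):

import os

def save_outputs(fig, table, name):
    # Illustrative helper (assumed, not the author's code):
    # write a figure to ./figure and a table to ./tables.
    os.makedirs('figure', exist_ok=True)
    os.makedirs('tables', exist_ok=True)
    fig.savefig(os.path.join('figure', name + '.png'))
    table.to_csv(os.path.join('tables', name + '.csv'))  # assumes a pandas DataFrame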
# Parse modeled interaction sites, keyed by upper-cased PDB ID.
newmapping_dict = {}
with open('modeled_interactions.dat', 'r') as infile:
    for line in infile:
        line = line.strip('\n')
        print(line)
        pdbid, sites = line.split(':')
        print(pdbid)
        print(sites.split())
        newmapping_dict[pdbid.upper()] = sites.split()
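For reference, a hedged example of the line format this loop expects (pdbid: site1 site2 ...), with made-up values:

sample = '1abc: A235 A340'              # hypothetical line from modeled_interactions.dat
pdbid, sites = sample.split(':')
print(pdbid.upper(), sites.split())     # -> 1ABC ['A235', 'A340']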
@avishekrk
avishekrk / ColorBar.py
Created April 25, 2016 22:58
Make a vertical colorbar
import pylab as pl
import numpy as np

pl.rcParams.update({'font.size': 22})

# Dummy image spanning the desired data range; only its colorbar is kept.
a = np.array([[-0.5, 1.75]])
pl.figure(figsize=(4.0, 16))
img = pl.imshow(a, cmap='seismic')
pl.gca().set_visible(False)            # hide the dummy image, keep the colorbar
# cax = pl.axes([0.1, 0.2, 0.8, 0.6])
cax = pl.axes([0.35, 0.1, 0.25, 0.95])
pl.colorbar(cax=cax, orientation='vertical')
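A plausible final step, not in the original snippet, would be to write the standalone colorbar to disk (the filename is an assumption):

pl.savefig('colorbar.png', bbox_inches='tight')  # assumed output path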
def get_seq(ATOMS):
    """Return the one-letter amino-acid sequence for a list of ATOM records."""
    from dfi.fafsaseq import mapres
    return ''.join([mapres[i.res_name] for i in ATOMS])
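A sketch of how get_seq might be used, assuming mapres maps three-letter residue names to one-letter codes and each ATOM record exposes res_name (both inferred from the call above, not checked against the dfi package):

from collections import namedtuple

Atom = namedtuple('Atom', 'res_name')              # stand-in for a dfi ATOM record
atoms = [Atom('ALA'), Atom('GLY'), Atom('SER')]    # hypothetical CA atoms
# With mapres = {'ALA': 'A', 'GLY': 'G', 'SER': 'S'}, get_seq(atoms) returns 'AGS'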
@avishekrk
avishekrk / calculate_dfi_values.py
Created March 24, 2016 21:15
Calculate bulk DFI
import glob
import dfi

def calculate_dfi_windows(pdbid):
    """Run DFI analysis on every windowed covariance matrix for a PDB ID."""
    covarmat = glob.glob(pdbid + '_[3-5]?00_????_mwcovarmat.dat')
    pdbfile = pdbid + '.pdb'
    fdfi_residues = ['A235', 'A340']
    prefix = [mat.replace('_mwcovarmat.dat', '-dfianalysis.csv') for mat in covarmat]
    for mat, pre in zip(covarmat, prefix):
        print(mat, pre)
        dfi.calc_dfi(pdbfile, mdhess=mat, ls_reschain=fdfi_residues,
                     writetofile=True, dfianalfile=pre)
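Assuming the covariance matrices follow the pdbid_start_end_mwcovarmat.dat naming implied by the glob pattern, usage is a single call (the PDB ID is illustrative):

calculate_dfi_windows('2vep')   # matches e.g. 2vep_3500_4000_mwcovarmat.dat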
@avishekrk
avishekrk / gist:15fc419a9a9edc59b7ac
Created March 8, 2016 20:53
Python code for pulling out time series
def get_ranges(ranges, delta):
    """
    Get a range of numerical values.

    ranges: list or numpy array of column names
    delta: interval width used to filter the ranges
    """
    for col in ranges:
        if '_' not in col:
            continue
        start, end = col.split('_')
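The preview cuts off here; one plausible completion, assuming column names look like '3500_4000' and delta is the window width to keep (my reading of the docstring, not the original code):

def get_ranges_sketch(ranges, delta):
    # Hypothetical completion: yield (start, end) pairs whose width equals delta.
    for col in ranges:
        if '_' not in col:
            continue
        start, end = map(int, col.split('_'))
        if end - start == delta:
            yield start, end

print(list(get_ranges_sketch(['3500_4000', 'md', '3500_5000'], 500)))  # -> [(3500, 4000)]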
#!/bin/bash
## Run cpptraj for cases where there are
## no covariance matrices yet.
module load amber/14
for f in *.cpptraj;
do
    echo ${f}
    prefix=$(echo $f | cut -d. -f1);
def fivens_cols(parm):
    sort = {}
    fivens_cols = []
    # Keep only the windowed-range columns, skipping MD and ENM summaries.
    for col in fivedfi_dict['pctdfi'].columns:
        if 'md' in col:
            continue
        if 'ENM' in col:
            continue
        start, end = col.split('_')
        start = int(start)
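The preview ends mid-function; given the sort dict and the integer start parsed above, a plausible intent is ordering window columns numerically, sketched here under that assumption:

cols = ['3500_4000', '3000_3500', 'md_avg', '4000_4500']   # hypothetical column names
windows = [c for c in cols if '_' in c and c.split('_')[0].isdigit()]
print(sorted(windows, key=lambda c: int(c.split('_')[0])))
# -> ['3000_3500', '3500_4000', '4000_4500']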
@avishekrk
avishekrk / gist:bde1020854a5875b9a54
Created March 1, 2016 07:39
Script for 2VEP DFI Analysis
#!/bin/bash
for d in $(ls -F | grep "^[a-z].*/$");
do
    echo $d;
    printf "=========================\n";
    printf "Find covariance matrices\n";
    nm=$(echo $d | cut -d/ -f1)
    fdfi=$(grep -E "^${nm}:" catalytic_residues.txt | awk -F: '{print $2}');
    cd $d;
@avishekrk
avishekrk / .1.miniconda.md
Created February 21, 2016 22:57 — forked from dan-blanchard/.1.miniconda.md
Quicker Travis builds that rely on numpy and scipy using Miniconda

For ETS's SKLL project, we found out the hard way that Travis-CI's support for numpy and scipy is pretty abysmal. There are pre-installed versions of numpy for some versions of Python, but those are seriously out of date, and scipy is not there at all. The two most popular approaches for working around this are to (1) build everything from scratch, or (2) use apt-get to install more recent (but still out of date) versions of numpy and scipy. Both of these approaches lead to longer build times, and with the second approach, you still don't have the most recent versions of anything. To circumvent these issues, we've switched to using Miniconda (Anaconda's lightweight cousin) to install everything.

A template for installing a simple Python package that relies on numpy and scipy using Miniconda is provided below. Since it's a common s…