- repo -> repository
- clone -> bring a repo down from the internet (a remote repository, like GitHub) to your local machine
- add -> track your files and changes with Git
- commit -> save your changes into Git
- push -> push your changes to your remote repo on GitHub (or another website)
- pull -> pull changes down from the remote repo to your local machine
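The glossary above maps onto a typical first workflow; the repo URL, file name, and branch name below are placeholders:

```shell
git clone https://github.com/<user>/<repo>.git   # bring the repo down to your machine
cd <repo>
git add notes.txt                 # track the file / stage your changes
git commit -m "Add notes"         # save the staged changes into Git
git push origin main              # upload local commits to the remote repo
git pull origin main              # bring remote changes down to your machine
```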
plt.figure(figsize=(12, 8))
plt.scatter(vol_arr, ret_arr, c=sharpe_arr, cmap='viridis')
plt.colorbar(label='Sharpe Ratio')
plt.xlabel('Volatility')
plt.ylabel('Return')
plt.scatter(max_sr_vol, max_sr_ret, c='red', s=50)  # red dot marks the max-Sharpe portfolio
plt.show()
np.random.seed(42)
num_ports = 6000
all_weights = np.zeros((num_ports, len(stocks.columns)))
ret_arr = np.zeros(num_ports)
vol_arr = np.zeros(num_ports)
sharpe_arr = np.zeros(num_ports)

for x in range(num_ports):
    # Weights: one random allocation per asset (sized to match all_weights,
    # rather than the hard-coded 4 in the original)
    weights = np.random.random(len(stocks.columns))
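The loop above is cut off mid-body. A self-contained sketch of the full Monte Carlo portfolio simulation, using synthetic prices in place of the gist's `stocks` DataFrame (the tickers, drift, and volatility are made up; the annualization by 252 trading days is the usual convention):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
# Hypothetical price data for 4 tickers, standing in for `stocks`
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, size=(250, 4)), axis=0)),
    columns=["AAA", "BBB", "CCC", "DDD"],
)
log_ret = np.log(prices / prices.shift(1)).dropna()

num_ports = 1000
n_assets = len(prices.columns)
all_weights = np.zeros((num_ports, n_assets))
ret_arr = np.zeros(num_ports)
vol_arr = np.zeros(num_ports)
sharpe_arr = np.zeros(num_ports)

for x in range(num_ports):
    weights = rng.random(n_assets)
    weights /= weights.sum()                      # normalize so the allocation sums to 1
    all_weights[x, :] = weights
    ret_arr[x] = np.sum(log_ret.mean() * weights) * 252            # annualized return
    vol_arr[x] = np.sqrt(weights @ (log_ret.cov() * 252) @ weights)  # annualized volatility
    sharpe_arr[x] = ret_arr[x] / vol_arr[x]

max_sr_idx = sharpe_arr.argmax()
max_sr_ret, max_sr_vol = ret_arr[max_sr_idx], vol_arr[max_sr_idx]
```

`max_sr_vol` and `max_sr_ret` are exactly the values the plotting snippet above marks with the red dot.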
# coding=UTF-8
import nltk
from nltk.corpus import brown

# This is a fast and simple noun phrase extractor (based on NLTK)
# Feel free to use it, just keep a link back to this post
# http://thetokenizer.com/2013/05/09/efficient-way-to-extract-the-main-topics-of-a-sentence/
# Created by Shlomi Babluki
# May, 2013
# Evaluating 4 Indian English newspapers for 10th May 2020 on:
## Vocabulary, i.e. number of unique words per paragraph
## Factual presentation
## Sentiment analysis
## Graphic content/images: needs preprocessing
## Visualising
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
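The first metric listed, unique words per paragraph, can be computed with a few lines of pandas; the paragraphs below are made-up stand-ins for the scraped newspaper text:

```python
import re
import pandas as pd

# Hypothetical paragraphs standing in for scraped newspaper text
paragraphs = [
    "The state reported a sharp rise in recoveries on Sunday.",
    "Officials said the lockdown rules would be eased in phases, officials added.",
]

# Lowercase, tokenize on letters, count distinct tokens per paragraph
unique_words = pd.Series(
    [len(set(re.findall(r"[a-z']+", p.lower()))) for p in paragraphs],
    name="unique_words_per_paragraph",
)
```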
Source Info: https://linuxize.com/post/how-to-install-visual-studio-code-on-debian-10/
Install the dependencies:
sudo apt update
sudo apt install software-properties-common apt-transport-https curl
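The linked guide then continues by importing Microsoft's GPG signing key, enabling the VS Code repository, and installing the package; roughly:

```shell
# Import Microsoft's signing key
curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
# Enable the VS Code repository
sudo add-apt-repository "deb [arch=amd64] https://packages.microsoft.com/repos/vscode stable main"
# Install the editor
sudo apt update
sudo apt install code
```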
This gist contains out.tex, a TeX file that adds a PDF outline ("bookmarks") to the freely available PDF of the book
An Introduction to Statistical Learning with Applications in R, by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani
http://www-bcf.usc.edu/~gareth/ISL/index.html
The bookmarks let you navigate the contents of the book while reading it on a screen.
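A minimal sketch of what such an out.tex can look like, using the pdfpages and bookmark packages; the filename, page numbers, and outline titles below are placeholders, not the book's actual ones:

```latex
\documentclass{article}
\usepackage{pdfpages}
\usepackage{bookmark}
\begin{document}
% Include every page of the downloaded PDF (filename is a placeholder)
\includepdf[pages=-]{ISLR.pdf}
% Attach outline entries at absolute page numbers (placeholders)
\bookmark[page=1,level=0]{Preface}
\bookmark[page=15,level=0]{1 Introduction}
\bookmark[page=29,level=0]{2 Statistical Learning}
\bookmark[page=33,level=1]{2.1 What Is Statistical Learning?}
\end{document}
```

Compiling this with pdflatex produces a copy of the PDF whose outline panel shows the bookmark entries.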
import numpy as np
import pandas as pd
from collections import defaultdict
from scipy.stats import hmean
from scipy.spatial.distance import cdist
from scipy import stats
import numbers

def weighted_hamming(data):
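The body of `weighted_hamming` is cut off here. A self-contained sketch of one plausible reading of the idea — a Hamming-style distance over categorical columns where each mismatch is weighted by how frequent the differing values are; the weighting scheme is my assumption, not necessarily the gist's:

```python
import numpy as np
import pandas as pd

def weighted_hamming_sketch(data: pd.DataFrame) -> np.ndarray:
    """Pairwise distance for categorical data (hypothetical weighting):
    matching values contribute 0; a mismatch contributes the mean
    relative frequency of the two differing values."""
    n = len(data)
    dist = np.zeros((n, n))
    for col in data.columns:
        freq = data[col].value_counts(normalize=True)
        w = data[col].map(freq).to_numpy()          # frequency of each row's value
        vals = data[col].to_numpy()
        mismatch = vals[:, None] != vals[None, :]   # boolean mismatch matrix
        dist += mismatch * (w[:, None] + w[None, :]) / 2
    return dist / len(data.columns)                 # average over columns
```

The result is a symmetric matrix with zeros on the diagonal, usable as a precomputed distance for nearest-neighbour methods.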
# Cleaned up the original code to work with Python 3.6
# Results match the output from the Python 2.7 version
# (in Python 3, filter and map return iterators rather than lists)
import numpy as np
import pandas as pd
import os
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.metrics.pairwise import cosine_similarity
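The imported pair of `CountVectorizer` and `cosine_similarity` is the usual bag-of-words document-similarity pipeline; a minimal sketch with placeholder documents:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",
        "the cat sat",
        "dogs chase birds"]
X = CountVectorizer().fit_transform(docs)   # sparse document-term count matrix
sim = cosine_similarity(X)                  # (3, 3) matrix of pairwise similarities
```

Each entry `sim[i, j]` is the cosine of the angle between the count vectors of documents `i` and `j`: 1 for identical term distributions, 0 when the documents share no terms.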