Praveen Palanisamy praveen-palanisamy

@praveen-palanisamy
praveen-palanisamy / git-ssh-command-ado-pull.md
Created June 23, 2022 20:19
Git SSH command for clone/pull from an Azure DevOps remote using SSH-RSA keys

Sometimes git clone/pull using SSH keys fails with the following message even when the SSH keypair is set up on the server:

Unable to negotiate with <GIT_SERVER_IP> port 22: no matching host key type found. Their offer: ssh-rsa
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The resolution is to ask Git to use the following SSH options:
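The options themselves are cut off in this preview. A commonly used fix for this error (stated as an assumption, not quoted from the gist) is to re-enable the ssh-rsa algorithms that newer OpenSSH clients disable by default, either per command or persistently; this is a config fragment rather than a runnable script:

```shell
# One-off: pass the options through GIT_SSH_COMMAND (remote URL omitted here)
GIT_SSH_COMMAND="ssh -o HostKeyAlgorithms=+ssh-rsa -o PubkeyAcceptedKeyTypes=+ssh-rsa" git pull

# Persistent alternative: a Host block in ~/.ssh/config
# Host <GIT_SERVER_IP>
#     HostKeyAlgorithms +ssh-rsa
#     PubkeyAcceptedKeyTypes +ssh-rsa
```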

@praveen-palanisamy
praveen-palanisamy / setup-latest-nvidia-drivers410-cuda10-cudnn7.4.md
Last active November 12, 2018 18:47
Set up the latest NVIDIA drivers (410.73), CUDA (10.0), cuDNN (7.4) on Ubuntu

NOTE: These are the latest driver, CUDA, and cuDNN versions available as of November 2018. Change the version numbers and URLs for future releases.

0. Prerequisites
  1. Drop to a TTY shell and stop the display manager (sudo service lightdm stop). Make sure no X server is running.
  2. Uninstall & purge any installed/previous NVIDIA drivers:
    • sudo apt purge nvidia*
    • sudo apt autoremove
    • sudo dpkg -P cuda-*<TAB>
  3. Blacklist the nouveau drivers if not already done
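Step 3 is usually done with a modprobe blacklist file; a sketch of the common approach follows (the file path and name are the usual convention, not taken from this gist — this is a system config fragment, not a standalone script):

```shell
# Prevent the open-source nouveau driver from claiming the GPU
# (conventional file name; any *.conf under /etc/modprobe.d/ works)
sudo tee /etc/modprobe.d/blacklist-nouveau.conf > /dev/null <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF
# Rebuild the initramfs so the blacklist takes effect on the next boot
sudo update-initramfs -u
```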
@praveen-palanisamy
praveen-palanisamy / build_boost_for_python2_and_python3.sh
Created November 10, 2018 00:28
Script to build Boost C++ libraries for Python 2 and Python 3
BOOST_VERSION=1.67.0
BOOST_TOOLSET="clang-5.0"
BOOST_CFLAGS="-fPIC -std=c++14 -DBOOST_ERROR_CODE_HEADER_ONLY"
BOOST_BASENAME="boost-${BOOST_VERSION}"
BOOST_INCLUDE=${PWD}/${BOOST_BASENAME}-install/include
BOOST_LIBPATH=${PWD}/${BOOST_BASENAME}-install/lib
echo "Downloading boost."
wget "https://dl.bintray.com/boostorg/release/${BOOST_VERSION}/source/boost_${BOOST_VERSION//./_}.tar.gz"
echo "Extracting boost."
@praveen-palanisamy
praveen-palanisamy / wp-backup-script.sh
Created August 19, 2018 16:25
WordPress backup script: A Bash script to compress and back up a complete WordPress site, including the database
#!/usr/bin/env bash
# wp-backup-script.sh - Creates a complete, compressed backup of your WordPress database and files. You can then transfer it to your preferred location (local disk, cloud backup storage, etc.)
# Author: Praveen Palanisamy | Twitter: @PraveenPsamy | GitHub: https://github.com/praveen-palanisamy| Website: https://praveenp.com
# Dependencies: mailutils
# 0. Change the variables below to suit your environment
WP_FOLDER="$HOME/public_html/" # Folder containing your WordPress root installation
BACKUP_FOLDER="$HOME/backups" # Folder where you want to store the backups
@praveen-palanisamy
praveen-palanisamy / tree_to_github_markdown.sh
Last active February 10, 2024 06:14
Convert the output of the tree utility to pretty GitHub-flavoured Markdown. Useful for displaying a code/directory structure
#!/usr/bin/env bash
# File: tree2githubmd
# Description: Convert the output of the Unix tree utility to GitHub-flavoured Markdown
tree=$(tree -f --noreport --charset ascii "$1" |
sed -e 's/| \+/ /g' -e 's/[|`]-\+/ */g' -e 's:\(* \)\(\(.*/\)\([^/]\+\)\):\1[\4](\2):g')
printf '# Code/Directory Structure:\n\n%s\n' "${tree}"
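To see what the sed pipeline produces without running tree, you can feed it a canned listing in the same `tree -f --charset ascii` shape (the sample paths below are made up for illustration):

```shell
# Canned sample of `tree -f --charset ascii` output (hypothetical paths)
sample='src
|-- src/main.c
`-- src/util.c'
# Run the same sed pipeline the script uses: strip the ASCII tree connectors,
# turn each entry into a bullet, and link the basename to its full path
echo "$sample" |
  sed -e 's/| \+/ /g' -e 's/[|`]-\+/ */g' \
      -e 's:\(* \)\(\(.*/\)\([^/]\+\)\):\1[\4](\2):g'
# Emits Markdown bullets like: * [main.c](src/main.c)
```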
@praveen-palanisamy
praveen-palanisamy / rl_model.py
Last active December 13, 2015 03:19
Cost update under bandit setting
import numpy
from theano import function, tensor as T  # legacy Theano API (assumed imports)

def single_action_cost(self, y):
    # y is non-zero only at the chosen action (bandit feedback)
    index = numpy.nonzero(y)
    true_cost = y[index]
    # Copy the network output and overwrite the chosen action's entry with
    # its observed cost; all other entries keep the network's predictions
    y = self.output.copy()
    cost = T.scalar()
    y_update = (y, T.set_subtensor(y[index], cost))
    f = function([cost], updates=[y_update])
    f([true_cost])
    return (y - self.output) ** 2
@praveen-palanisamy
praveen-palanisamy / gist:3802cec2b8ad67fd667f
Created December 12, 2015 22:49
Weight update step for reward/loss based learning under bandit settings
lossScalar = 1 - reward; % This is the loss of the chosen action
lossVector = zeros(1, self.numActions);
lossVector(astAction) = lossScalar;
self.timeStep = self.timeStep + 1;
% The weight update step below depends on the learning policy. This will probably be handled by the NN/RL-net
self.weights = self.weights .* (exp(-sqrt(log(self.numActions)/self.timeStep) * lossVector))';