Dell XPS for Ubuntu 18.04

BIOS:

Press F12 on the Dell startup screen to enter the BIOS setup, then:

  • Disable Secure Boot
  • Change SATA Operation from "RAID On" to "AHCI"
  • Enable Legacy Boot as well as UEFI

Install Ubuntu from USB drive

Instructions are here: https://www.ubuntu.com/download/desktop. Follow the install procedure, then reboot.

1st Restart: nomodeset, disable the nouveau NVIDIA drivers

On first boot, press Escape during the Ubuntu startup to get the GRUB menu. Edit the boot entry by adding 'nomodeset' to the linux command line. The nomodeset parameter instructs the kernel not to load video drivers and to use BIOS modes instead until X is loaded.
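For example, after pressing 'e' on the boot entry, the linux line should look something like this (the kernel version and root device will differ on your machine):

linux /boot/vmlinuz-4.15.0-20-generic root=UUID=... ro quiet splash nomodeset

Press F10 or Ctrl+X to boot the edited entry.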

#build essentials
sudo dpkg --add-architecture i386
sudo apt update
sudo apt install build-essential libc6:i386

#disable nouveau
sudo bash -c "echo blacklist nouveau > /etc/modprobe.d/blacklist-nvidia-nouveau.conf"
sudo bash -c "echo options nouveau modeset=0 >> /etc/modprobe.d/blacklist-nvidia-nouveau.con
sudo update-initramfs -u

#reboot
sudo reboot

2nd Restart: Install the video card driver

As before, press Escape during the Ubuntu startup and add 'nomodeset' to the linux command line in the GRUB boot entry.

Download the video card driver, the CUDA toolkit and the cuDNN library from the NVIDIA website. Note: the video card driver version must match the one bundled with the CUDA installer.

Download the runfile (local) installers. At the time of my last install, this was my list of files:

-rwxr-xr-x  1 natbusa natbusa   72871665 May 29 21:09  cuda_9.2.88.1_linux.run
-rwxr-xr-x  1 natbusa natbusa 1758421686 May 29 21:13  cuda_9.2.88_396.26_linux.run
-rw-r--r--  1 natbusa natbusa  421083972 May 29 23:23  cudnn-9.2-linux-x64-v7.1.tgz
-rwxr-xr-x  1 natbusa natbusa   86759359 May 29 23:29  NVIDIA-Linux-x86_64-396.26.run

Now switch off the graphical display server with the command sudo telinit 3, and switch to a text console by hitting CTRL+ALT+F1. Log in on the terminal, then install the video driver:

cd $HOME/Downloads
chmod +x cuda* NVIDIA*
sudo ./NVIDIA-Linux-x86_64-396.26.run
  1. Accept License
  2. The distribution-provided pre-install script failed! Are you sure you want to continue? -> CONTINUE INSTALLATION
  3. Would you like to run the nvidia-xconfig utility? -> YES

The Nvidia driver is now installed. Reboot your system: sudo reboot

3rd Restart: Install CUDA and cuDNN

cd $HOME/Downloads
sudo ./cuda_9.2.88_396.26_linux.run

Accept the terms and conditions, say yes to installing with an unsupported configuration, and no to "Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 396.26?". Make sure you don't agree to install the bundled driver, since the driver is already installed. Say yes to the samples, the CUDA symlink, etc. Then install the CUDA patch installer if needed:

cd $HOME/Downloads
sudo ./cuda_9.2.88.1_linux.run

CUDA post-install:

echo '#CUDA path and library path' >> $HOME/.bashrc
echo 'export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}' >> $HOME/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> $HOME/.bashrc
source ~/.bashrc 
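To verify that the paths are set correctly, check the compiler version (it should report release 9.2):

nvcc --version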

Now install the cuDNN library. This is simple: just unzip and copy the files to the right CUDA location.

cd $HOME/Downloads

# Unpack the archive
tar -zxvf cudnn-9.2-linux-x64-v7.1.tgz

# Move the unpacked contents to your CUDA directory
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda-9.2/lib64/
sudo cp  cuda/include/cudnn.h /usr/local/cuda-9.2/include/

# Give read access to all users
sudo chmod a+r /usr/local/cuda-9.2/include/cudnn.h /usr/local/cuda-9.2/lib64/libcudnn*
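As a quick sanity check, you can confirm the installed cuDNN version by grepping the header:

cat /usr/local/cuda-9.2/include/cudnn.h | grep CUDNN_MAJOR -A 2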

Install the libcupti library (https://developer.nvidia.com/cuda-profiling-tools-interface):

sudo apt-get install libcupti-dev

Check GPU and CUDA install

nvidia-settings -q NvidiaDriverVersion
cat /proc/driver/nvidia/version
nvidia-smi
lspci | grep -i nvidia
lsmod | grep nvidia
lsmod | grep nouveau

This should produce the following; in particular, notice that lsmod | grep nouveau should not produce any output:

natbusa@xino:~/Downloads$ nvidia-settings -q NvidiaDriverVersion

  Attribute 'NvidiaDriverVersion' (xino:1.0): 396.26
  Attribute 'NvidiaDriverVersion' (xino:1[gpu:0]): 396.26

natbusa@xino:~/Downloads$ cat /proc/driver/nvidia/version
NVRM version: NVIDIA UNIX x86_64 Kernel Module  396.26  Mon Apr 30 18:01:39 PDT 2018
GCC version:  gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3) 
natbusa@xino:~/Downloads$ nvidia-smi
Wed May 30 10:10:38 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26                 Driver Version: 396.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1050    Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   44C    P3    N/A /  N/A |   1539MiB /  4042MiB |     13%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       959      G   /usr/lib/xorg/Xorg                            71MiB |
|    0      1019      G   /usr/bin/gnome-shell                          50MiB |
|    0      1235      G   /usr/lib/xorg/Xorg                           438MiB |
|    0      1373      G   /usr/bin/gnome-shell                         293MiB |
|    0      2292      G   ...-token=4C71E15C4269DFF1299B450ED68DCF95   683MiB |
+-----------------------------------------------------------------------------+
natbusa@xino:~/Downloads$ lspci | grep -i nvidia
01:00.0 3D controller: NVIDIA Corporation GP107M [GeForce GTX 1050 Mobile] (rev a1)
natbusa@xino:~/Downloads$ lsmod | grep nvidia
nvidia_drm             40960  12
nvidia_modeset       1085440  6 nvidia_drm
nvidia              14016512  557 nvidia_modeset
ipmi_msghandler        53248  2 nvidia,ipmi_devintf
drm_kms_helper        167936  2 i915,nvidia_drm
drm                   401408  17 i915,nvidia_drm,drm_kms_helper
natbusa@xino:~/Downloads$ lsmod | grep nouveau
natbusa@xino:~/Downloads$ 

Check the CUDA dev tools

cd $HOME/NVIDIA_CUDA-9.2_Samples/
make clean && make
1_Utilities/deviceQuery/deviceQuery

This should produce something like this, and the result of the test should be PASS.

1_Utilities/deviceQuery/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1050"
  CUDA Driver Version / Runtime Version          9.2 / 9.2
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 4042 MBytes (4238737408 bytes)
  ( 5) Multiprocessors, (128) CUDA Cores/MP:     640 CUDA Cores
  GPU Max Clock rate:                            1493 MHz (1.49 GHz)
  Memory Clock rate:                             3504 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.2, CUDA Runtime Version = 9.2, NumDevs = 1
Result = PASS

Bash history fix for multitabs

Add the following to your $HOME/.bashrc:

# Avoid duplicates
export HISTCONTROL=ignoredups:erasedups  
# When the shell exits, append to the history file instead of overwriting it
shopt -s histappend

# After each command, append to the history file and reread it
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"

Download Chrome

Google for Chrome and follow the download instructions.
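Alternatively, from the terminal (the stable-channel .deb URL below is an assumption; verify it on the Chrome download page):

wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt install -y ./google-chrome-stable_current_amd64.deb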

Apt management and Software commons

sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    software-properties-common

Networking utils

sudo apt-get install -y socat

Install squid proxy (optional)

Procedure adapted from http://www.rushiagr.com/blog/2015/06/05/cache-apt-packages-with-squid-proxy/

sudo apt -y install squid
sudo tee /etc/squid/squid.conf > /dev/null << 'EOF'
# allow this service to be accessible from any internal network ip
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

acl SSL_ports port 443
acl Safe_ports port 80		# http
acl Safe_ports port 21		# ftp
acl Safe_ports port 443		# https
acl Safe_ports port 70		# gopher
acl Safe_ports port 210		# wais
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280		# http-mgmt
acl Safe_ports port 488		# gss-http
acl Safe_ports port 591		# filemaker
acl Safe_ports port 777		# multiling http
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access deny to_localhost
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128

cache_mem 512 MB
maximum_object_size 1024 MB
cache_dir aufs /var/spool/squid 5000 24 256
coredump_dir /var/spool/squid

refresh_pattern ^ftp:		1440	20%	10080
refresh_pattern ^gopher:	1440	0%	1440
refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
refresh_pattern (Release|Packages(.gz)*)$      0       20%     2880
refresh_pattern .		0	20%	4320
EOF

sudo squid -z
sudo systemctl restart squid
sudo systemctl status squid

sudo cat /var/log/squid/cache.log

Check that it works fine: set up the proxy environment variables and try to download some files...

export LOCAL_IP=192.168.81.94
export HTTP_PROXY=${LOCAL_IP}:3128
export http_proxy=${HTTP_PROXY}
export HTTPS_PROXY=${HTTP_PROXY}
export https_proxy=${HTTP_PROXY}
export no_proxy=localhost

# first request: fetched from the origin (cache miss)
curl http://www.apache.org/dist/META/ROOT.asc
# second request: should be served from the squid cache (cache hit)
curl http://www.apache.org/dist/META/ROOT.asc

sudo cat /var/log/squid/access.log
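Since the linked article is about caching apt packages, you can also point apt itself at the proxy. A minimal sketch, assuming the same LOCAL_IP as above:

echo 'Acquire::http::Proxy "http://192.168.81.94:3128";' | sudo tee /etc/apt/apt.conf.d/01proxy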

Productivity

# Keyboard -> Additional Layout Options -> Ctrl Position -> Swap left ctrl with left alt
sudo apt-get install -y gnome-tweaks
gnome-tweaks

# better terminal
sudo apt install -y terminator

#sublime editor
wget -qO - https://download.sublimetext.com/sublimehq-pub.gpg | sudo apt-key add -
echo "deb https://download.sublimetext.com/ apt/stable/" | sudo tee /etc/apt/sources.list.d/sublime-text.list
sudo apt-get update; sudo apt-get install -y sublime-text

# atom editor 
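# the packagecloud repository below is an assumption based on Atom's published apt instructions; verify before use
wget -qO - https://packagecloud.io/AtomEditor/atom/gpgkey | sudo apt-key add -
sudo sh -c 'echo "deb [arch=amd64] https://packagecloud.io/AtomEditor/atom/any/ any main" > /etc/apt/sources.list.d/atom.list'
sudo apt-get update; sudo apt-get install -y atom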

Multimedia, Docs

sudo apt-get install -y gimp

#Install pandoc, texstudio
wget --output-document=/home/$USER/Downloads/pandoc-2.1.1-1-amd64.deb \
     https://github.com/jgm/pandoc/releases/download/2.1.1/pandoc-2.1.1-1-amd64.deb
sudo dpkg --install /home/$USER/Downloads/pandoc-2.1.1-1-amd64.deb
sudo apt-get install -y texlive texstudio 

Dev tools

sudo apt-get install -y git jq curl

Java

#default
sudo apt-get install -y default-jre
sudo apt-get install -y default-jdk
#openjdk
sudo apt install -y openjdk-8-jdk

#oracle java 8 (default)
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install -y oracle-java8-installer
sudo apt install -y oracle-java8-set-default
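Check which Java is active, and switch between the installed versions if needed:

java -version
sudo update-alternatives --config java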

Install conda

wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh --directory-prefix=$HOME/Downloads
bash $HOME/Downloads/Miniconda3-latest-Linux-x86_64.sh
# restart the shell (or source ~/.bashrc) first, so that conda is on the PATH
conda upgrade conda

Creating a tensorflow / anaconda python3 environment

conda create  -n tensorflow-gpu python=3
conda install -n tensorflow-gpu tensorflow-gpu
conda install -n tensorflow-gpu keras-gpu
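A quick way to confirm that TensorFlow sees the GPU (this uses the TF 1.x device_lib API):

source activate tensorflow-gpu
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"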

Testing the conda environment

source activate tensorflow-gpu
mkdir $HOME/keras_gpu_examples
cd $HOME/keras_gpu_examples
wget https://raw.githubusercontent.com/keras-team/keras/master/examples/mnist_cnn.py

#gpu masked away
CUDA_VISIBLE_DEVICES='' time python mnist_cnn.py

#gpu visible for cuda processing
CUDA_VISIBLE_DEVICES=0 time python mnist_cnn.py

Installing Docker

Note: Elementary OS Loki is based on Ubuntu xenial. On Elementary, $(lsb_release -cs) returns loki, but Docker does not provide a repository for that codename, so use xenial instead in the add-apt-repository command. On stock Ubuntu 18.04, $(lsb_release -cs) returns bionic and works as-is.

sudo apt-get remove docker docker-engine docker.io
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
apt-cache policy docker-ce
sudo apt-get install -y docker-ce

Adding users to the docker group

sudo groupadd docker
sudo usermod -aG docker $USER
sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "/home/$USER/.docker" -R
sudo reboot

Testing docker

docker run hello-world

Installing Virtualbox

sudo apt-get remove --purge virtualbox virtualbox-dkms
sudo rmmod vboxpci vboxnetadp vboxnetflt vboxdrv
sudo add-apt-repository "deb https://download.virtualbox.org/virtualbox/debian xenial contrib"
wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
sudo apt update
sudo apt-get install virtualbox-5.2
sudo /sbin/vboxconfig
dpkg -l | grep virtualbox

Installing Vagrant

Download the Debian 64-bit version of Vagrant from the Vagrant download page: https://www.vagrantup.com/downloads.html

sudo apt install ~/Downloads/vagrant*

Testing vagrant

mkdir -p $HOME/vagrant_example && cd $HOME/vagrant_example
vagrant init
vagrant box add hashicorp/precise64
echo '
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
end
' > Vagrantfile
vagrant up
vagrant ssh
# try uname -a on the vm
vagrant destroy

Installing Kubernetes

Setup MiniKube

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube 
sudo mv minikube /usr/local/bin/

Install the cli tool kubectl

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

Check the install of the CLI tools:

ls /usr/local/bin/*kube*

should return /usr/local/bin/kubectl /usr/local/bin/minikube
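Optionally check the versions as well:

minikube version
kubectl version --client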

Config dir for kubernetes (optional)

rm -rf $HOME/.minikube/ $HOME/.kube/

mkdir -p $HOME/.kube
touch $HOME/.kube/config

Running Minikube

export MINIKUBE_HOME=$HOME
export KUBECONFIG=$HOME/.kube/config
minikube start --vm-driver=virtualbox --bootstrapper kubeadm --disk-size 64G --memory 12288 --cpus 4
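Once the start command completes, verify that the cluster is up:

minikube status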

Patch Minikube

# from https://gist.github.com/minrk/22abe39fbc270c3f3f1d4771a287c0b5

minikube ssh "
  sudo ip link set docker0 promisc on
  # make hostpath volumes world-writable by default
  sudo chmod -R a+rwX /tmp/hostpath-provisioner/
  sudo setfacl -d -m u::rwX /tmp/hostpath-provisioner/
"

Testing Minikube


kubectl get nodes

# starting pod
kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get pod
minikube service hello-minikube --url

# accessing the service 
curl $(minikube service hello-minikube --url)

# stopping pod
kubectl delete service hello-minikube
kubectl delete deployment hello-minikube

Stop/Delete Minikube

minikube stop
minikube delete

Install Helm

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash

# access to the incubator charts
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm repo update

Init Helm and Tiller

kubectl create clusterrolebinding permissive-binding \
 --clusterrole=cluster-admin \
 --user=admin \
 --user=kubelet \
 --group=system:serviceaccounts

# make sure the kubernetes cluster is running
kubectl --namespace kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller

#secure tiller-deploy
kubectl --namespace=kube-system patch deployment tiller-deploy --type=json --patch='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["/tiller", "--listen=localhost:44134"]}]'
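Verify that the Tiller pod is running and that the client can reach it:

helm version
kubectl -n kube-system get pods | grep tiller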

Run a container registry

local registry

 docker run -d \
  -p 5000:5000 \
  --restart=always \
  --name registry \
  -v $HOME/registry:/var/lib/registry \
  registry:2

helm service

helm install stable/docker-registry --set persistence.size=1Gi,persistence.enabled=true --name registry --namespace dsw

Test the registry

export DOCKER_PODNAME=$(kubectl get pods --namespace dsw -l "app=docker-registry,release=registry" -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward -n dsw $DOCKER_PODNAME 5000:5000 &

docker pull hello-world
docker images
docker tag hello-world 127.0.0.1:5000/natbusa/hello-world
docker images
docker push 127.0.0.1:5000/natbusa/hello-world
docker rmi 127.0.0.1:5000/natbusa/hello-world
docker images
docker pull 127.0.0.1:5000/natbusa/hello-world

Setup Gogs

helm install --name gogs --namespace dsw incubator/gogs --set service.gogs.databaseType=sqlite3,postgresql.install=false

access service

export NODE_PORT=$(kubectl get --namespace dsw -o jsonpath="{.spec.ports[0].nodePort}" services gogs-gogs)
export NODE_IP=$(kubectl get nodes --namespace dsw -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT/

Setup JupyterHub

Configuration

cat << EOF > jupyterhub.yaml
proxy:
  secretToken: "$(openssl rand -hex 32)"
hub:
  cookieSecret: "$(openssl rand -hex 32)"
singleuser:
  storage:
    capacity: 1Gi
  cpu:
    limit: 0.5
    guarantee: 0.5
  memory:
    limit: 1G
    guarantee: 0.5G
EOF

Installing

helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
helm install jupyterhub/jupyterhub --version=v0.6 --name=jh --namespace=dac -f jupyterhub.yaml
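To reach the hub on Minikube, use the proxy-public service that the chart creates (service name from the JupyterHub chart, namespace as chosen above):

kubectl get pods -n dac
minikube service proxy-public -n dac --url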

Concourse CI

helm install --name concourse --namespace dsw stable/concourse

Getting started with concourse
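The fly CLI is not installed by the chart; one way to get it is to port-forward the Concourse web deployment and download the binary from its API (the deployment name concourse-web is an assumption based on the stable/concourse chart defaults):

kubectl port-forward -n dsw deployment/concourse-web 8080:8080 &
curl -Lo fly "http://127.0.0.1:8080/api/v1/cli?arch=amd64&platform=linux"
chmod +x fly

Then log in: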

./fly -t lite login -c http://127.0.0.1:8080

Monocular

#Install the nginx ingress controller with Helm:
helm install stable/nginx-ingress

#On Minikube/Kubeadm, use host networking instead:
helm install stable/nginx-ingress --set controller.hostNetwork=true

helm repo add monocular https://kubernetes-helm.github.io/monocular
helm install monocular/monocular


Binderhub

#check https://jupyterhub.github.io/helm-chart/ for the latest version
helm install jupyterhub/binderhub --version=0.1.0-3f81760 --name=binder --namespace=dsw -f binderhub.minikube.yaml 

Testing

kubectl --namespace=dsw get pods
minikube service proxy-public -n dsw --url

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment