Caffe + Ubuntu 12.04 / 14.04 64-bit + CUDA 6.5 / 7.0 Setup Guide

Caffe + Ubuntu 14.04 64-bit + CUDA 6.5 Setup Guide

These steps let the Intel integrated GPU drive the display while the NVIDIA GPU is used for computation.

1. Install development dependencies

Install the basic packages needed for development:

sudo apt-get install build-essential  # basic requirement
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler #required by caffe

2. Install CUDA and the driver

2.1 Preparation

Installing the driver with the display manager lightdm stopped seems to make it possible to use the Intel integrated GPU for display and the NVIDIA GPU for computation. The steps:

  1. First, in the BIOS settings, select the Intel GPU as the display / primary display device

  2. Boot into Ubuntu, press Ctrl+Alt+F1 to switch to a tty, log in, and run:

    sudo service lightdm stop

This command stops lightdm. If you use gdm or another display manager, stop it before installing the NVIDIA driver.
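Before stopping anything, you can confirm which display manager is active; a small sketch reading the stock Ubuntu config file (the path is the standard Ubuntu location):

```shell
# Report the configured display manager, if the standard file exists.
DM_FILE=/etc/X11/default-display-manager
if [ -f "$DM_FILE" ]; then
    DM="$(cat "$DM_FILE")"
else
    DM="unknown (no $DM_FILE)"
fi
echo "display manager: $DM"
```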

2.2 Download the deb package and install CUDA

Installing CUDA and the driver from the deb package saves a lot of trouble (see the CUDA Getting Started Guide). Download the CUDA deb package for your system (for my Hasee laptop: linux -> x86_64 -> ubuntu -> 14.04 -> deb(network)), then add the repository with:

 sudo dpkg -i cuda-repo-<distro>_<version>_<architecture>.deb
 sudo apt-get update

Then install CUDA with:

 sudo apt-get install cuda

After the installation finishes, reboot:

sudo reboot
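After rebooting, a quick sanity check is to look for nvcc; it will only be found once CUDA is installed and /usr/local/cuda/bin is on your PATH (a sketch, not part of the install itself):

```shell
# Report whether the CUDA compiler is visible, printing its version if so.
if command -v nvcc >/dev/null 2>&1; then
    NVCC_STATUS="$(nvcc --version | tail -n 1)"
else
    NVCC_STATUS="nvcc not found -- check the installation and PATH"
fi
echo "$NVCC_STATUS"
```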

2.3 Install cuDNN

(2015-5-25: Caffe currently supports only cuDNN v4, so make sure you download cuDNN v4.) cuDNN accelerates the conv and pooling computations in Caffe. First download cuDNN, then extract and install it with:

tar -zxvf cudnn-7.0-linux-x64-v4.0-prod.tgz
cd cuda
sudo cp lib/* /usr/local/cuda/lib64/
sudo cp cudnn.h /usr/local/cuda/include/

Update the symlinks:

cd /usr/local/cuda/lib64/
sudo rm -rf libcudnn.so libcudnn.so.4
sudo ln -s libcudnn.so.4.0.7 libcudnn.so.4
sudo ln -s libcudnn.so.4 libcudnn.so
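A quick check that the copied header and symlinks are all in place (same paths as above):

```shell
# Verify each expected cuDNN artifact exists under the CUDA tree.
CUDNN_OK=yes
for f in /usr/local/cuda/include/cudnn.h \
         /usr/local/cuda/lib64/libcudnn.so \
         /usr/local/cuda/lib64/libcudnn.so.4; do
    [ -e "$f" ] || { echo "missing: $f"; CUDNN_OK=no; }
done
echo "cudnn files present: $CUDNN_OK"
```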

2.4 Set environment variables

After installation, add the environment variable to /etc/profile by appending at the end of the file:

PATH=/usr/local/cuda/bin:$PATH
export PATH

After saving, run the following command so the variable takes effect immediately:

source /etc/profile

You also need to add the library path: create the file cuda.conf in /etc/ld.so.conf.d/ with the content:

/usr/local/cuda/lib64

After saving, run the following to apply it immediately:

sudo ldconfig
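To verify both settings, check that the cuda bin directory is on PATH and that the linker cache knows about the CUDA libraries; a sketch (the grep prints nothing until CUDA is actually installed):

```shell
# Confirm /usr/local/cuda/bin is on PATH after sourcing /etc/profile.
export PATH=/usr/local/cuda/bin:$PATH
case ":$PATH:" in
    *:/usr/local/cuda/bin:*) PATH_OK=yes ;;
    *)                       PATH_OK=no ;;
esac
echo "cuda bin on PATH: $PATH_OK"
# List CUDA entries in the dynamic linker cache (empty if none).
ldconfig -p 2>/dev/null | grep -i cuda || true
```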

3. Build the CUDA samples

Enter /usr/local/cuda/samples and build the samples with:

sudo make all -j8

The whole build takes roughly 10 minutes. Once it finishes, enter samples/bin/x86_64/linux/release and run deviceQuery:

./deviceQuery

If the GPU information is printed, the driver and GPU were installed successfully:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 670"
  CUDA Driver Version / Runtime Version          6.5 / 6.5
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 4095 MBytes (4294246400 bytes)
  ( 7) Multiprocessors, (192) CUDA Cores/MP:     1344 CUDA Cores
  GPU Clock rate:                                1098 MHz (1.10 GHz)
  Memory Clock rate:                             3105 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GeForce GTX 670
Result = PASS

4. Install Intel MKL or ATLAS

If you do not have Intel MKL, install the free ATLAS with:

sudo apt-get install libatlas-base-dev

If you have an MKL installer package, first extract it; inside is an install_GUI.sh script. Run it to launch the graphical installer and follow the prompts step by step.

Note: after installation you must add the library paths. Create the file /etc/ld.so.conf.d/intel_mkl.conf with the content:

/opt/intel/lib
/opt/intel/mkl/lib/intel64

Replace the paths with your own installation paths. After editing, run:

sudo ldconfig

5. Install OpenCV (optional; if OpenCV errors occur when running Caffe, reinstall it following these steps)

Although we already installed libopencv-dev, that package seems to cause libtiff-related problems, so we build our own version from source. Avoid doing this step by hand; use the scripts below.

Install 2.4.10 (recommended)

  1. Download the install script
  2. Enter the directory Install-OpenCV/Ubuntu/2.4
  3. Run the script
    sudo ./opencv2_4_10.sh

Install 2.4.9 (deprecated)

Someone on GitHub has written a complete install script that installs all the dependencies automatically. Download the script, enter the Ubuntu/2.4 directory, and make all the shell scripts executable:

chmod +x *.sh

Edit the script opencv2_4_X.sh and add the following cmake flag:

-D BUILD_TIFF=ON

Then install (2.4.9 in this case):

sudo ./opencv2_4_9.sh

The script installs the dependencies, downloads the source package, builds, and installs OpenCV. The whole process takes about half an hour.

Note: building 2.4.9 may fail partway through with:

opencv-2.4.9/modules/gpu/src/nvidia/core/NCVPixelOperations.hpp(51): error: a storage class is not allowed in an explicit specialization

The fix: download the patched NCVPixelOperations.hpp from the link, replace the corresponding file inside opencv-2.4.9, comment out the code in opencv2_4_9.sh that downloads the OpenCV package, and rerun sudo ./opencv2_4_9.sh.
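Once a script finishes, you can ask pkg-config which OpenCV version got registered (a quick check; the module name for the 2.4 series is opencv):

```shell
# Report the OpenCV version known to pkg-config, if any.
OPENCV_VER="$(pkg-config --modversion opencv 2>/dev/null || echo unknown)"
echo "opencv version: $OPENCV_VER"
```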

6. Install the Python environment Caffe needs

6.1 Install the Anaconda package

Download the latest installer here and install it under your home directory with the default settings.

6.2 Install the Python dependency libraries

Open a new terminal (important!), use which python and which pip to confirm you are using the Python environment provided by Anaconda, then enter caffe_root/python and run:

for req in $(cat requirements.txt); do pip install $req; done
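Note that the loop above keeps going even when one requirement fails. A variant that stops at the first failure, so the offending package is easy to spot (a sketch; run it from caffe_root/python where requirements.txt lives):

```shell
# Install requirements one by one, aborting on the first failure.
if [ -f requirements.txt ]; then
    while IFS= read -r req; do
        [ -z "$req" ] && continue            # skip blank lines
        pip install "$req" || { echo "failed on: $req"; break; }
    done < requirements.txt
else
    echo "requirements.txt not found -- run this from caffe_root/python"
fi
PIP_LOOP_DONE=yes    # marker confirming the loop itself completed
```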

6.3 Fix Anaconda bugs

If you hit an error like the following while building or running Caffe:

/usr/lib/x86_64-linux-gnu/libx264.so.142:undefined reference to ' 

then delete libm.* from anaconda/lib. See this issue.

When actually building Caffe I also hit a conflict between Anaconda's libreadline and the system's, which required conda remove readline (thanks @jastarex).

6.4 Add the Anaconda library path

Note: when running Caffe you may see errors about libxxx.so not being found, even though locate libxxx.so shows the library is installed under Anaconda. The first instinct is to add your_anaconda_path/lib to the library path via a file under /etc/ld.so.conf.d/. But doing so may leave you unable to reach the desktop after logging out! The (suspected) reason is that some libraries in anaconda/lib conflict with the system's own.

The correct approach: to keep the system from putting anaconda/lib on the system library path at boot, add the library path in your own ~/.bashrc instead. For example, I appended these two lines at the end:

# add library path
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:your_anaconda_path/lib"

It takes effect in any newly opened terminal, and after a reboot lightdm loads normally and the desktop comes up.

7. Install MATLAB

Caffe provides a MATLAB interface; install MATLAB if you need it. Search online for an installation tutorial.

After installing, add a launcher icon:

sudo vi /usr/share/applications/Matlab.desktop

with the following content:

[Desktop Entry]
Type=Application
Name=Matlab
GenericName=Matlab R2013b
Comment=Matlab:The Language of Technical Computing
Exec=sh /usr/local/MATLAB/R2013b/bin/matlab -desktop
Icon=/usr/local/MATLAB/Matlab.png
Terminal=false
Categories=Development;Matlab;

(I used the patched R2013b package. First uncompress the .iso file, then use sudo cp to copy the patch file.)

8. Build Caffe

8.1 Build the main program

With the whole environment finally set up, we can happily build Caffe! Enter the Caffe root directory, make a copy of Makefile.config, and edit it. The main settings to change:

  • CPU_ONLY: whether to build in CPU-only mode; enable this if you have no GPU or have not installed CUDA
  • BLAS: whether to use Intel MKL or ATLAS
  • MATLAB_DIR: if you need the MATLAB wrapper, set the MATLAB install path; mine is /usr/local/MATLAB/R2013b (the directory must contain a bin folder, and bin must contain the mex binary)
  • DEBUG: whether to build in debug mode, which lets you debug the program in Eclipse or NSight
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!

# cuDNN acceleration switch (uncomment to build with cuDNN).
 USE_CUDNN := 1

# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1

# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0

# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
#	You should not set this flag if you will be reading LMDBs with any
#	possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1

# Uncomment if you're using OpenCV 3
# OPENCV_VERSION := 3

# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++

# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr

# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 lines for compatibility.
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
		-gencode arch=compute_20,code=sm_21 \
		-gencode arch=compute_30,code=sm_30 \
		-gencode arch=compute_35,code=sm_35 \
		-gencode arch=compute_50,code=sm_50 \
		-gencode arch=compute_50,code=compute_50

# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas

# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib

# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app

# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
ANACONDA_HOME := $(HOME)/anaconda
PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
		 $(ANACONDA_HOME)/include/python2.7 \
		 $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
#                 /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
PYTHON_LIB := $(ANACONDA_HOME)/lib

# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib

# Uncomment to support layers written in Python (will link against Python libs)
#WITH_PYTHON_LAYER := 1

# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib

# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1

BUILD_DIR := build
DISTRIBUTE_DIR := distribute

# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1

# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0

# enable pretty build (comment to see full commands)
Q ?= @

After configuring, start the build:

sudo make all -j4
sudo make test
sudo make runtest

Note: -j4 sets the number of parallel build jobs, which speeds things up; pick the number after j based on your CPU core count. My CPU has 4 cores, hence -j4.
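Rather than hard-coding the job count, it can be derived from the machine with nproc (part of GNU coreutils on Ubuntu); a small convenience sketch:

```shell
# Use one make job per CPU core.
JOBS="$(nproc)"
echo "building with -j${JOBS}"
# sudo make all -j"${JOBS}"    # run inside the caffe root
```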

Running make as above may fail with errors like:

.build_release/lib/libcaffe.so: undefined reference to 'cv::imencode(cv::String const&, cv::_InputArray const&, std::vector<unsigned char, std::allocator<unsigned char> >&, std::vector<int, std::allocator<int> > const&)'
.build_release/lib/libcaffe.so: undefined reference to 'cv::imdecode(cv::_InputArray const&, int)'
.build_release/lib/libcaffe.so: undefined reference to 'cv::imread(cv::String const&, int)'

In that case it is better to build with cmake instead, as described at http://caffe.berkeleyvision.org/installation.html#compilation:

mkdir build
cd build
cmake ..
make all
make install
make runtest

8.2 Build the MATLAB wrapper

Run:

sudo make matcaffe

Then you can run the official MATLAB demo.

8.3 Build the Python wrapper

 sudo make pycaffe

That completes the installation.

Now go run the demos!
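As a final smoke test, check that the Python wrapper is importable (a sketch; $HOME/caffe is a placeholder for your actual caffe_root, and the import only succeeds after make pycaffe and the dependency steps above):

```shell
# Point PYTHONPATH at the caffe bindings, then try the import.
export PYTHONPATH="$HOME/caffe/python:$PYTHONPATH"
if python -c "import caffe" 2>/dev/null; then
    CAFFE_IMPORT=ok
else
    CAFFE_IMPORT="failed -- check PYTHONPATH and that make pycaffe ran"
fi
echo "caffe import: $CAFFE_IMPORT"
```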

Ubuntu 12.04 / 14.04 + CUDA 7.0.md

I have verified that on Ubuntu 12.04 / 14.04, installing CUDA 7.0 from the deb package currently fails; you must install from the .run file. After installation you also need to disable the integrated GPU and use the NVIDIA card for display.

Installing from the .run file

First download the .run installer from the NVIDIA website (pick the version matching your system).

Disable the nouveau driver

  1. First check whether nouveau is currently in use:

    lsmod | grep nouveau
  2. Create the file /etc/modprobe.d/blacklist-nouveau.conf with:

blacklist nouveau
options nouveau modeset=0
  3. Regenerate the kernel initramfs:
sudo update-initramfs -u
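After rebooting, you can confirm the blacklist took effect; nouveau should no longer appear in lsmod (a check to run post-reboot):

```shell
# Report whether the nouveau kernel module is still loaded.
if lsmod 2>/dev/null | grep -q '^nouveau'; then
    NOUVEAU=loaded
else
    NOUVEAU=absent
fi
echo "nouveau: $NOUVEAU"
```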

Install the .run file

  1. Reboot, press Ctrl+Alt+F1 to enter tty1, and log in with your username and password.

  2. Stop lightdm with:

    sudo service lightdm stop
  3. Install the .run file with the default settings, choosing accept / yes throughout:

    sudo sh cuda_<version>_linux.run
  4. Reboot when the installation finishes.

Follow-up

  1. After rebooting, disable the integrated GPU in the BIOS and select the PCI-e slot, i.e. your NVIDIA card, for display; otherwise only the desktop wallpaper appears after login.

  2. Export the environment variables: after installation, add them to /etc/profile by appending at the end of the file:

    PATH=/usr/local/cuda/bin:$PATH
    export PATH

After saving, run the following command so the variables take effect immediately:

source /etc/profile

You also need to add the library path: create the file cuda.conf in /etc/ld.so.conf.d/ with the content:

/usr/local/cuda/lib64

After saving, run the following to apply it immediately:

sudo ldconfig

Ubuntu backup and restore

Ubuntu can be backed up into a single tar archive, and the system can easily be restored from that file.

Backup

The goal is to back up the / directory, but not /home, /proc, /sys, /mnt, /media, /run, or /dev. To achieve this, run:

cd / 
tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz --one-file-system / 
  1. tar: pack the files into a compressed archive.
  2. --exclude=/example/path: a file or directory path that should not be backed up.
  3. --one-file-system: automatically excludes /home, /proc, /sys, /mnt, /media, /run, and /dev, as long as they are separate mounts from /.
  4. /: the partition to back up.
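--one-file-system only skips those directories because they sit on separate mounts; if /home shares the root partition, exclude it explicitly with --exclude. The behavior can be tried safely on a scratch directory first (all paths below are illustrative):

```shell
# Build a tiny tree, archive it while excluding its "home", and list the
# archive to confirm the exclusion worked.
mkdir -p /tmp/backup-demo/etc /tmp/backup-demo/home
echo config > /tmp/backup-demo/etc/file
echo secret > /tmp/backup-demo/home/file
tar -cpzf /tmp/demo-backup.tar.gz \
    --exclude='backup-demo/home' -C /tmp backup-demo
tar -tzf /tmp/demo-backup.tar.gz    # home/file is absent from the listing
```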

Restore

Boot a live CD and use gparted to partition and format the disk. Then mount the partition you want to restore to, typically under /mnt, and restore with:

sudo mount /dev/sda2 /mnt
sudo tar -xvpzf /path/to/backup.tar.gz -C /mnt --numeric-owner

--numeric-owner: tells tar to restore the numeric owners of the files in the archive rather than matching user names in the environment you are restoring from, because the user IDs on the system being restored do not necessarily match those on the system doing the restoring (e.g. a live CD).

Repair grub

sudo su
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts 
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
grub-install --recheck /dev/sda
update-grub

Unmount:

exit
sudo umount /mnt/sys
sudo umount /mnt/proc
sudo umount /mnt/dev/pts
sudo umount /mnt/dev
sudo umount /mnt