I have recently been working on a posture/behavior recognition project and bought a Jetson TX2. The system and software that shipped on the board were a bit old, so I decided to update them. ARM boards are great, but they are full of pitfalls, so I am writing this down for reference.
Step 1: Flashing the board. This is where most of the pitfalls are.
Here is a list of what went wrong.
1. Flashing from a virtual machine
Failure case one: I used a Mac Pro with Parallels Desktop as the VM software, running Ubuntu 16.04. I could not find any material covering this setup online. The key requirement is bridged (physical) networking, and most guides cover VMware on Windows 7/10. I tried both Ubuntu and Windows 7 guests under Parallels Desktop, and both ended in failure.
Issue 1: Parallels Desktop could not be set to bridged (physical) network mode.
Issue 2: Switching to Wi-Fi instead, the flash got stuck at "Determining the IP address of target..." and never found an IP.
Issue 3: Because of this, I strongly recommend using a switch and building a small wired LAN.
Issue 4: Even when flashing from the VM partially succeeded, the mouse and keyboard attached to the TX2 did not work afterwards.
Note: the TX2 power adapter has a US-style plug, so you will need a US-style power strip or an adapter.
My setup
Environment: a desktop PC running Ubuntu 16.04, a network switch, the Jetson TX2, at least three Ethernet cables, and an NVIDIA account.
Software: JetPack 3.3
Steps
1. Download JetPack 3.3 onto the Ubuntu host.
Make the installer executable:
$ chmod +x ./JetPack-L4T-3.3-linux-x64.run
Run the installer:
$ sudo ./JetPack-L4T-3.3-linux-x64.run
2. Run JetPack and follow the wizard until the last step, where a terminal window appears with instructions for putting the TX2 into forced recovery mode.
3. Open another terminal and check that the TX2 is connected to the desktop over the USB cable (see the sketch after step 6 for what to expect):
lsusb
4. Back in the first terminal, press Enter to start flashing.
5. Flashing takes roughly half an hour to an hour; be patient.
6. When it finishes successfully, the TX2 reboots automatically.
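A rough way to sanity-check both ends of the process (the exact output and version strings depend on your JetPack/L4T release; the post-flash commands assume the default nvidia user):
# On the host, with the TX2 in recovery mode (step 3): expect an NVidia Corp. entry (vendor ID 0955)
lsusb | grep -i nvidia
# On the TX2, after the automatic reboot (step 6):
head -n 1 /etc/nv_tegra_release
cat /usr/local/cuda-9.0/version.txt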
Step 2: Installing Caffe
There is very little material on installing Caffe on the Jetson TX2. It is an ARM (aarch64-linux) platform, and a lot of software does not work on it out of the box.
After several days of trial and error, I finally got it installed.
Steps
1. Install the dependency packages
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential cmake git pkg-config
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install libatlas-base-dev
sudo apt-get install --no-install-recommends libboost-all-dev
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
2. Switch the default Python. The TX2 ships with both Python 2.7.12 and Python 3.5.2.
Check what is installed:
nvidia@tegra-ubuntu:~$ python2 --version
Python 2.7.12
nvidia@tegra-ubuntu:~$ python --version
Python 3.5.2
nvidia@tegra-ubuntu:~$
To register both with update-alternatives and make Python 3 the default (the higher priority wins), run these two commands:
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 100
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 150
If you later want to switch back to Python 2, run the following and pick the python2 entry:
sudo update-alternatives --config python
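The interactive prompt looks roughly like this (paths and numbering may differ on your system); type the number of the /usr/bin/python2 entry:
There are 2 choices for the alternative python (providing /usr/bin/python).
  Selection    Path                Priority   Status
* 0            /usr/bin/python3     150        auto mode
  1            /usr/bin/python2     100        manual mode
  2            /usr/bin/python3     150        manual mode
Press <enter> to keep the current choice[*], or type selection number: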
3. Install the Python dependency packages
sudo apt-get install python-numpy python-scipy python-matplotlib ipython ipython-notebook python-pandas python-sympy python-nose
4. Install OpenCV 3.4.0
Check whether an OpenCV version is already installed:
pkg-config --modversion opencv
If no OpenCV version is printed, you need to install OpenCV yourself.
Download opencv and opencv_contrib:
https://github.com/opencv/opencv/archive/3.4.0.zip
https://github.com/opencv/opencv_contrib/archive/3.4.0.zip
Unpack them so that you end up with opencv-3.4.0 and, inside it, opencv_contrib-3.4.0 (the contrib archive is unpacked into the opencv-3.4.0 directory), for example as sketched below.
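A rough sketch of fetching and laying things out this way, assuming you work under /home/nvidia and have wget and unzip available:
cd /home/nvidia
wget -O opencv-3.4.0.zip https://github.com/opencv/opencv/archive/3.4.0.zip
wget -O opencv_contrib-3.4.0.zip https://github.com/opencv/opencv_contrib/archive/3.4.0.zip
unzip opencv-3.4.0.zip                            # creates opencv-3.4.0/
unzip opencv_contrib-3.4.0.zip -d opencv-3.4.0    # creates opencv-3.4.0/opencv_contrib-3.4.0/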
Change into the source directory:
cd /home/nvidia/opencv-3.4.0
Create a build directory and work inside it (the cmake script below uses paths relative to build):
mkdir build
cd build
Create a script file my_cmake.sh there:
touch my_cmake.sh
gedit my_cmake.sh
Put the following cmake command into it:
#!/bin/bash
cmake \
-D CMAKE_BUILD_TYPE=Release \
-D CMAKE_INSTALL_PREFIX=/usr \
-D BUILD_PNG=OFF \
-D BUILD_TIFF=OFF \
-D BUILD_TBB=OFF \
-D BUILD_JPEG=OFF \
-D BUILD_JASPER=OFF \
-D BUILD_ZLIB=OFF \
-D BUILD_EXAMPLES=OFF \
-D BUILD_opencv_java=OFF \
-D BUILD_opencv_python2=ON \
-D BUILD_opencv_python3=ON \
-D ENABLE_PRECOMPILED_HEADERS=OFF \
-D WITH_OPENCL=OFF \
-D WITH_OPENMP=OFF \
-D WITH_FFMPEG=ON \
-D WITH_GSTREAMER=ON \
-D WITH_GSTREAMER_0_10=OFF \
-D WITH_LIBV4L=ON \
-D WITH_CUDA=ON \
-D WITH_GTK=ON \
-D WITH_VTK=OFF \
-D WITH_TBB=ON \
-D WITH_1394=OFF \
-D WITH_OPENEXR=OFF \
-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-9.0 \
-D CUDA_ARCH_BIN=6.2 \
-D CUDA_ARCH_PTX="" \
-D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib-3.4.0/modules/ \
../
5. Build
cd /home/nvidia/opencv-3.4.0
cd build
Make the script executable:
chmod +x my_cmake.sh
Run it to generate the Makefiles:
./my_cmake.sh
Then compile, and install (sudo is needed because the install prefix is /usr):
make -j4
sudo make install
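A quick sanity check that the build and install worked (the Python check assumes the cv2 module was installed for whichever Python you run):
pkg-config --modversion opencv
python -c "import cv2; print(cv2.__version__)"
Both should report 3.4.0.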
6. CUDA
Flashing with JetPack 3.3 already installs CUDA 9.0, so nothing extra is needed here.
Check the CUDA version:
cat /usr/local/cuda-9.0/version.txt
nvidia@tegra-ubuntu:~$ cat /usr/local/cuda-9.0/version.txt
CUDA Version 9.0.252
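If you also want to confirm that the GPU itself is usable (not just that the toolkit files are present), one option is to build and run the bundled deviceQuery sample; this assumes JetPack installed the CUDA samples under /usr/local/cuda-9.0/samples:
cd /usr/local/cuda-9.0/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery
It should list the Tegra X2 GPU and end with Result = PASS.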
7. Download and configure Caffe
cd /home/nvidia/
git clone https://github.com/BVLC/caffe.git
cd caffe
cp Makefile.config.example Makefile.config
sudo gedit Makefile.config
(this opens Makefile.config for editing)
Changes to make:
If you are using cuDNN, change the line
# USE_CUDNN := 1
to:
USE_CUDNN := 1
Since we are using OpenCV 3, change the line
# OPENCV_VERSION := 3
to:
OPENCV_VERSION := 3
The important one: under the comment "# Whatever else you find you need goes here.", change
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
to:
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu/hdf5/serial
Changes to the Makefile
In the Makefile, find the line that adds the HDF5 libraries. In the current Caffe it is LIBRARIES += hdf5_hl hdf5 inside the USE_HDF5 block; in older revisions it is the longer line:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5
Either way, change hdf5_hl hdf5 to the serial variants, e.g.:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
Fix for the compute_20 error: comment out (or simply delete) the compute_20/compute_21 gencode lines in Makefile.config; CUDA 9.0 no longer supports the sm_2x architectures.
Original:
CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
-gencode arch=compute_20,code=sm_21 \
After the change:
CUDA_ARCH := #-gencode arch=compute_20,code=sm_20 \
#-gencode arch=compute_20,code=sm_21 \
(A scripted way to apply the edits in this step is sketched below.)
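If you prefer to apply these edits from the command line instead of gedit, a sed sketch like the following does the same thing (run from /home/nvidia/caffe; back up the files first and double-check the result, since the exact line wording can differ between Caffe revisions; the CUDA_ARCH compute_20/21 lines are easiest to fix by hand):
cd /home/nvidia/caffe
cp Makefile.config Makefile.config.bak
cp Makefile Makefile.bak
sed -i 's/^# USE_CUDNN := 1/USE_CUDNN := 1/' Makefile.config
sed -i 's/^# OPENCV_VERSION := 3/OPENCV_VERSION := 3/' Makefile.config
sed -i 's#^INCLUDE_DIRS := .*#INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial#' Makefile.config
sed -i 's#^LIBRARY_DIRS := .*#LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu/hdf5/serial#' Makefile.config
sed -i 's/hdf5_hl hdf5$/hdf5_serial_hl hdf5_serial/' Makefile
diff Makefile.config.bak Makefile.config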
For reference, here is my complete Makefile.config:
## Refer to http://caffe.berkeleyvision.org/installation.html
# Contributions simplifying and improving our build system are welcome!
# cuDNN acceleration switch (uncomment to build with cuDNN).
USE_CUDNN := 1
# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1
# uncomment to disable IO dependencies and corresponding data layers
# USE_OPENCV := 0
# USE_LEVELDB := 0
# USE_LMDB := 0
# This code is taken from https://github.com/sh1r0/caffe-android-lib
# USE_HDF5 := 0
# uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
# You should not set this flag if you will be reading LMDBs with any
# possibility of simultaneous read and write
# ALLOW_LMDB_NOLOCK := 1
# Uncomment if you're using OpenCV 3
OPENCV_VERSION := 3
# To customize your choice of compiler, uncomment and set the following.
# N.B. the default for Linux is g++ and the default for OSX is clang++
# CUSTOM_CXX := g++
# CUDA directory contains bin/ and lib/ directories that we need.
CUDA_DIR := /usr/local/cuda-9.0
# On Ubuntu 14.04, if cuda tools are installed via
# "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
# CUDA_DIR := /usr
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
# For CUDA >= 9.0, comment the *_20 and *_21 lines for compatibility.
CUDA_ARCH := #-gencode arch=compute_20,code=sm_20 \
#-gencode arch=compute_20,code=sm_21 \
-gencode arch=compute_30,code=sm_30 \
-gencode arch=compute_35,code=sm_35 \
-gencode arch=compute_50,code=sm_50 \
-gencode arch=compute_52,code=sm_52 \
-gencode arch=compute_60,code=sm_60 \
-gencode arch=compute_61,code=sm_61 \
-gencode arch=compute_61,code=compute_61
# BLAS choice:
# atlas for ATLAS (default)
# mkl for MKL
# open for OpenBlas
BLAS := atlas
# Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
# Leave commented to accept the defaults for your choice of BLAS
# (which should work)!
# BLAS_INCLUDE := /path/to/your/blas
# BLAS_LIB := /path/to/your/blas
# Homebrew puts openblas in a directory that is not on the standard search path
# BLAS_INCLUDE := $(shell brew --prefix openblas)/include
# BLAS_LIB := $(shell brew --prefix openblas)/lib
# This is required only if you will compile the matlab interface.
# MATLAB directory should contain the mex binary in /bin.
# MATLAB_DIR := /usr/local
# MATLAB_DIR := /Applications/MATLAB_R2012b.app
# NOTE: this is required only if you will compile the python interface.
# We need to be able to find Python.h and numpy/arrayobject.h.
PYTHON_INCLUDE := /usr/include/python2.7 \
/usr/lib/python2.7/dist-packages/numpy/core/include
# Anaconda Python distribution is quite popular. Include path:
# Verify anaconda location, sometimes it's in root.
# ANACONDA_HOME := $(HOME)/anaconda
# PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
# $(ANACONDA_HOME)/include/python2.7 \
# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include
# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
# /usr/lib/python3.5/dist-packages/numpy/core/include
# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib
# Homebrew installs numpy in a non standard path (keg only)
# PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
# PYTHON_LIB += $(shell brew --prefix numpy)/lib
# Uncomment to support layers written in Python (will link against Python libs)
# WITH_PYTHON_LAYER := 1
# Whatever else you find you need goes here.
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu/hdf5/serial
# If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
# INCLUDE_DIRS += $(shell brew --prefix)/include
# LIBRARY_DIRS += $(shell brew --prefix)/lib
# NCCL acceleration switch (uncomment to build with NCCL)
# https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
# USE_NCCL := 1
# Uncomment to use `pkg-config` to specify OpenCV library paths.
# (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
# USE_PKG_CONFIG := 1
# N.B. both build and distribute dirs are cleared on `make clean`
BUILD_DIR := build
DISTRIBUTE_DIR := distribute
# Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
# DEBUG := 1
# The ID of the GPU that 'make runtest' will use to run unit tests.
TEST_GPUID := 0
# enable pretty build (comment to see full commands)
Q ?= @
And for reference, the Makefile as it comes from the repository (the LIBRARIES += hdf5_hl hdf5 line inside the USE_HDF5 block is the one to change as described above):
PROJECT := caffe
CONFIG_FILE := Makefile.config
# Explicitly check for the config file, otherwise make -k will proceed anyway.
ifeq ($(wildcard $(CONFIG_FILE)),)
$(error $(CONFIG_FILE) not found. See $(CONFIG_FILE).example.)
endif
include $(CONFIG_FILE)
BUILD_DIR_LINK := $(BUILD_DIR)
ifeq ($(RELEASE_BUILD_DIR),)
RELEASE_BUILD_DIR := .$(BUILD_DIR)_release
endif
ifeq ($(DEBUG_BUILD_DIR),)
DEBUG_BUILD_DIR := .$(BUILD_DIR)_debug
endif
DEBUG ?= 0
ifeq ($(DEBUG), 1)
BUILD_DIR := $(DEBUG_BUILD_DIR)
OTHER_BUILD_DIR := $(RELEASE_BUILD_DIR)
else
BUILD_DIR := $(RELEASE_BUILD_DIR)
OTHER_BUILD_DIR := $(DEBUG_BUILD_DIR)
endif
# All of the directories containing code.
SRC_DIRS := $(shell find * -type d -exec bash -c "find {} -maxdepth 1 \
\( -name '*.cpp' -o -name '*.proto' \) | grep -q ." \; -print)
# The target shared library name
LIBRARY_NAME := $(PROJECT)
LIB_BUILD_DIR := $(BUILD_DIR)/lib
STATIC_NAME := $(LIB_BUILD_DIR)/lib$(LIBRARY_NAME).a
DYNAMIC_VERSION_MAJOR := 1
DYNAMIC_VERSION_MINOR := 0
DYNAMIC_VERSION_REVISION := 0
DYNAMIC_NAME_SHORT := lib$(LIBRARY_NAME).so
#DYNAMIC_SONAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR)
DYNAMIC_VERSIONED_NAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)
DYNAMIC_NAME := $(LIB_BUILD_DIR)/$(DYNAMIC_VERSIONED_NAME_SHORT)
COMMON_FLAGS += -DCAFFE_VERSION=$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)
##############################
# Get all source files
##############################
# CXX_SRCS are the source files excluding the test ones.
CXX_SRCS := $(shell find src/$(PROJECT) ! -name "test_*.cpp" -name "*.cpp")
# CU_SRCS are the cuda source files
CU_SRCS := $(shell find src/$(PROJECT) ! -name "test_*.cu" -name "*.cu")
# TEST_SRCS are the test source files
TEST_MAIN_SRC := src/$(PROJECT)/test/test_caffe_main.cpp
TEST_SRCS := $(shell find src/$(PROJECT) -name "test_*.cpp")
TEST_SRCS := $(filter-out $(TEST_MAIN_SRC), $(TEST_SRCS))
TEST_CU_SRCS := $(shell find src/$(PROJECT) -name "test_*.cu")
GTEST_SRC := src/gtest/gtest-all.cpp
# TOOL_SRCS are the source files for the tool binaries
TOOL_SRCS := $(shell find tools -name "*.cpp")
# EXAMPLE_SRCS are the source files for the example binaries
EXAMPLE_SRCS := $(shell find examples -name "*.cpp")
# BUILD_INCLUDE_DIR contains any generated header files we want to include.
BUILD_INCLUDE_DIR := $(BUILD_DIR)/src
# PROTO_SRCS are the protocol buffer definitions
PROTO_SRC_DIR := src/$(PROJECT)/proto
PROTO_SRCS := $(wildcard $(PROTO_SRC_DIR)/*.proto)
# PROTO_BUILD_DIR will contain the .cc and obj files generated from
# PROTO_SRCS; PROTO_BUILD_INCLUDE_DIR will contain the .h header files
PROTO_BUILD_DIR := $(BUILD_DIR)/$(PROTO_SRC_DIR)
PROTO_BUILD_INCLUDE_DIR := $(BUILD_INCLUDE_DIR)/$(PROJECT)/proto
# NONGEN_CXX_SRCS includes all source/header files except those generated
# automatically (e.g., by proto).
NONGEN_CXX_SRCS := $(shell find \
src/$(PROJECT) \
include/$(PROJECT) \
python/$(PROJECT) \
matlab/+$(PROJECT)/private \
examples \
tools \
-name "*.cpp" -or -name "*.hpp" -or -name "*.cu" -or -name "*.cuh")
LINT_SCRIPT := scripts/cpp_lint.py
LINT_OUTPUT_DIR := $(BUILD_DIR)/.lint
LINT_EXT := lint.txt
LINT_OUTPUTS := $(addsuffix .$(LINT_EXT), $(addprefix $(LINT_OUTPUT_DIR)/, $(NONGEN_CXX_SRCS)))
EMPTY_LINT_REPORT := $(BUILD_DIR)/.$(LINT_EXT)
NONEMPTY_LINT_REPORT := $(BUILD_DIR)/$(LINT_EXT)
# PY$(PROJECT)_SRC is the python wrapper for $(PROJECT)
PY$(PROJECT)_SRC := python/$(PROJECT)/_$(PROJECT).cpp
PY$(PROJECT)_SO := python/$(PROJECT)/_$(PROJECT).so
PY$(PROJECT)_HXX := include/$(PROJECT)/layers/python_layer.hpp
# MAT$(PROJECT)_SRC is the mex entrance point of matlab package for $(PROJECT)
MAT$(PROJECT)_SRC := matlab/+$(PROJECT)/private/$(PROJECT)_.cpp
ifneq ($(MATLAB_DIR),)
MAT_SO_EXT := $(shell $(MATLAB_DIR)/bin/mexext)
endif
MAT$(PROJECT)_SO := matlab/+$(PROJECT)/private/$(PROJECT)_.$(MAT_SO_EXT)
##############################
# Derive generated files
##############################
# The generated files for protocol buffers
PROTO_GEN_HEADER_SRCS := $(addprefix $(PROTO_BUILD_DIR)/, \
$(notdir ${PROTO_SRCS:.proto=.pb.h}))
PROTO_GEN_HEADER := $(addprefix $(PROTO_BUILD_INCLUDE_DIR)/, \
$(notdir ${PROTO_SRCS:.proto=.pb.h}))
PROTO_GEN_CC := $(addprefix $(BUILD_DIR)/, ${PROTO_SRCS:.proto=.pb.cc})
PY_PROTO_BUILD_DIR := python/$(PROJECT)/proto
PY_PROTO_INIT := python/$(PROJECT)/proto/__init__.py
PROTO_GEN_PY := $(foreach file,${PROTO_SRCS:.proto=_pb2.py}, \
$(PY_PROTO_BUILD_DIR)/$(notdir $(file)))
# The objects corresponding to the source files
# These objects will be linked into the final shared library, so we
# exclude the tool, example, and test objects.
CXX_OBJS := $(addprefix $(BUILD_DIR)/, ${CXX_SRCS:.cpp=.o})
CU_OBJS := $(addprefix $(BUILD_DIR)/cuda/, ${CU_SRCS:.cu=.o})
PROTO_OBJS := ${PROTO_GEN_CC:.cc=.o}
OBJS := $(PROTO_OBJS) $(CXX_OBJS) $(CU_OBJS)
# tool, example, and test objects
TOOL_OBJS := $(addprefix $(BUILD_DIR)/, ${TOOL_SRCS:.cpp=.o})
TOOL_BUILD_DIR := $(BUILD_DIR)/tools
TEST_CXX_BUILD_DIR := $(BUILD_DIR)/src/$(PROJECT)/test
TEST_CU_BUILD_DIR := $(BUILD_DIR)/cuda/src/$(PROJECT)/test
TEST_CXX_OBJS := $(addprefix $(BUILD_DIR)/, ${TEST_SRCS:.cpp=.o})
TEST_CU_OBJS := $(addprefix $(BUILD_DIR)/cuda/, ${TEST_CU_SRCS:.cu=.o})
TEST_OBJS := $(TEST_CXX_OBJS) $(TEST_CU_OBJS)
GTEST_OBJ := $(addprefix $(BUILD_DIR)/, ${GTEST_SRC:.cpp=.o})
EXAMPLE_OBJS := $(addprefix $(BUILD_DIR)/, ${EXAMPLE_SRCS:.cpp=.o})
# Output files for automatic dependency generation
DEPS := ${CXX_OBJS:.o=.d} ${CU_OBJS:.o=.d} ${TEST_CXX_OBJS:.o=.d} \
${TEST_CU_OBJS:.o=.d} $(BUILD_DIR)/${MAT$(PROJECT)_SO:.$(MAT_SO_EXT)=.d}
# tool, example, and test bins
TOOL_BINS := ${TOOL_OBJS:.o=.bin}
EXAMPLE_BINS := ${EXAMPLE_OBJS:.o=.bin}
# symlinks to tool bins without the ".bin" extension
TOOL_BIN_LINKS := ${TOOL_BINS:.bin=}
# Put the test binaries in build/test for convenience.
TEST_BIN_DIR := $(BUILD_DIR)/test
TEST_CU_BINS := $(addsuffix .testbin,$(addprefix $(TEST_BIN_DIR)/, \
$(foreach obj,$(TEST_CU_OBJS),$(basename $(notdir $(obj))))))
TEST_CXX_BINS := $(addsuffix .testbin,$(addprefix $(TEST_BIN_DIR)/, \
$(foreach obj,$(TEST_CXX_OBJS),$(basename $(notdir $(obj))))))
TEST_BINS := $(TEST_CXX_BINS) $(TEST_CU_BINS)
# TEST_ALL_BIN is the test binary that links caffe dynamically.
TEST_ALL_BIN := $(TEST_BIN_DIR)/test_all.testbin
##############################
# Derive compiler warning dump locations
##############################
WARNS_EXT := warnings.txt
CXX_WARNS := $(addprefix $(BUILD_DIR)/, ${CXX_SRCS:.cpp=.o.$(WARNS_EXT)})
CU_WARNS := $(addprefix $(BUILD_DIR)/cuda/, ${CU_SRCS:.cu=.o.$(WARNS_EXT)})
TOOL_WARNS := $(addprefix $(BUILD_DIR)/, ${TOOL_SRCS:.cpp=.o.$(WARNS_EXT)})
EXAMPLE_WARNS := $(addprefix $(BUILD_DIR)/, ${EXAMPLE_SRCS:.cpp=.o.$(WARNS_EXT)})
TEST_WARNS := $(addprefix $(BUILD_DIR)/, ${TEST_SRCS:.cpp=.o.$(WARNS_EXT)})
TEST_CU_WARNS := $(addprefix $(BUILD_DIR)/cuda/, ${TEST_CU_SRCS:.cu=.o.$(WARNS_EXT)})
ALL_CXX_WARNS := $(CXX_WARNS) $(TOOL_WARNS) $(EXAMPLE_WARNS) $(TEST_WARNS)
ALL_CU_WARNS := $(CU_WARNS) $(TEST_CU_WARNS)
ALL_WARNS := $(ALL_CXX_WARNS) $(ALL_CU_WARNS)
EMPTY_WARN_REPORT := $(BUILD_DIR)/.$(WARNS_EXT)
NONEMPTY_WARN_REPORT := $(BUILD_DIR)/$(WARNS_EXT)
##############################
# Derive include and lib directories
##############################
CUDA_INCLUDE_DIR := $(CUDA_DIR)/include
CUDA_LIB_DIR :=
# add <cuda>/lib64 only if it exists
ifneq ("$(wildcard $(CUDA_DIR)/lib64)","")
CUDA_LIB_DIR += $(CUDA_DIR)/lib64
endif
CUDA_LIB_DIR += $(CUDA_DIR)/lib
INCLUDE_DIRS += $(BUILD_INCLUDE_DIR) ./src ./include
ifneq ($(CPU_ONLY), 1)
INCLUDE_DIRS += $(CUDA_INCLUDE_DIR)
LIBRARY_DIRS += $(CUDA_LIB_DIR)
LIBRARIES := cudart cublas curand
endif
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m
# handle IO dependencies
USE_LEVELDB ?= 1
USE_LMDB ?= 1
# This code is taken from https://github.com/sh1r0/caffe-android-lib
USE_HDF5 ?= 1
USE_OPENCV ?= 1
ifeq ($(USE_LEVELDB), 1)
LIBRARIES += leveldb snappy
endif
ifeq ($(USE_LMDB), 1)
LIBRARIES += lmdb
endif
# This code is taken from https://github.com/sh1r0/caffe-android-lib
ifeq ($(USE_HDF5), 1)
LIBRARIES += hdf5_hl hdf5
endif
ifeq ($(USE_OPENCV), 1)
LIBRARIES += opencv_core opencv_highgui opencv_imgproc
ifeq ($(OPENCV_VERSION), 3)
LIBRARIES += opencv_imgcodecs
endif
endif
PYTHON_LIBRARIES ?= boost_python python2.7
WARNINGS := -Wall -Wno-sign-compare
##############################
# Set build directories
##############################
DISTRIBUTE_DIR ?= distribute
DISTRIBUTE_SUBDIRS := $(DISTRIBUTE_DIR)/bin $(DISTRIBUTE_DIR)/lib
DIST_ALIASES := dist
ifneq ($(strip $(DISTRIBUTE_DIR)),distribute)
DIST_ALIASES += distribute
endif
ALL_BUILD_DIRS := $(sort $(BUILD_DIR) $(addprefix $(BUILD_DIR)/, $(SRC_DIRS)) \
$(addprefix $(BUILD_DIR)/cuda/, $(SRC_DIRS)) \
$(LIB_BUILD_DIR) $(TEST_BIN_DIR) $(PY_PROTO_BUILD_DIR) $(LINT_OUTPUT_DIR) \
$(DISTRIBUTE_SUBDIRS) $(PROTO_BUILD_INCLUDE_DIR))
##############################
# Set directory for Doxygen-generated documentation
##############################
DOXYGEN_CONFIG_FILE ?= ./.Doxyfile
# should be the same as OUTPUT_DIRECTORY in the .Doxyfile
DOXYGEN_OUTPUT_DIR ?= ./doxygen
DOXYGEN_COMMAND ?= doxygen
# All the files that might have Doxygen documentation.
DOXYGEN_SOURCES := $(shell find \
src/$(PROJECT) \
include/$(PROJECT) \
python/ \
matlab/ \
examples \
tools \
-name "*.cpp" -or -name "*.hpp" -or -name "*.cu" -or -name "*.cuh" -or \
-name "*.py" -or -name "*.m")
DOXYGEN_SOURCES += $(DOXYGEN_CONFIG_FILE)
##############################
# Configure build
##############################
# Determine platform
UNAME := $(shell uname -s)
ifeq ($(UNAME), Linux)
LINUX := 1
else ifeq ($(UNAME), Darwin)
OSX := 1
OSX_MAJOR_VERSION := $(shell sw_vers -productVersion | cut -f 1 -d .)
OSX_MINOR_VERSION := $(shell sw_vers -productVersion | cut -f 2 -d .)
endif
# Linux
ifeq ($(LINUX), 1)
CXX ?= /usr/bin/g++
GCCVERSION := $(shell $(CXX) -dumpversion | cut -f1,2 -d.)
# older versions of gcc are too dumb to build boost with -Wuninitalized
ifeq ($(shell echo | awk '{exit $(GCCVERSION) < 4.6;}'), 1)
WARNINGS += -Wno-uninitialized
endif
# boost::thread is reasonably called boost_thread (compare OS X)
# We will also explicitly add stdc++ to the link target.
LIBRARIES += boost_thread stdc++
VERSIONFLAGS += -Wl,-soname,$(DYNAMIC_VERSIONED_NAME_SHORT) -Wl,-rpath,$(ORIGIN)/../lib
endif
# OS X:
# clang++ instead of g++
# libstdc++ for NVCC compatibility on OS X >= 10.9 with CUDA < 7.0
ifeq ($(OSX), 1)
CXX := /usr/bin/clang++
ifneq ($(CPU_ONLY), 1)
CUDA_VERSION := $(shell $(CUDA_DIR)/bin/nvcc -V | grep -o 'release [0-9.]*' | tr -d '[a-z ]')
ifeq ($(shell echo | awk '{exit $(CUDA_VERSION) < 7.0;}'), 1)
CXXFLAGS += -stdlib=libstdc++
LINKFLAGS += -stdlib=libstdc++
endif
# clang throws this warning for cuda headers
WARNINGS += -Wno-unneeded-internal-declaration
# 10.11 strips DYLD_* env vars so link CUDA (rpath is available on 10.5+)
OSX_10_OR_LATER := $(shell [ $(OSX_MAJOR_VERSION) -ge 10 ] && echo true)
OSX_10_5_OR_LATER := $(shell [ $(OSX_MINOR_VERSION) -ge 5 ] && echo true)
ifeq ($(OSX_10_OR_LATER),true)
ifeq ($(OSX_10_5_OR_LATER),true)
LDFLAGS += -Wl,-rpath,$(CUDA_LIB_DIR)
endif
endif
endif
# gtest needs to use its own tuple to not conflict with clang
COMMON_FLAGS += -DGTEST_USE_OWN_TR1_TUPLE=1
# boost::thread is called boost_thread-mt to mark multithreading on OS X
LIBRARIES += boost_thread-mt
# we need to explicitly ask for the rpath to be obeyed
ORIGIN := @loader_path
VERSIONFLAGS += -Wl,-install_name,@rpath/$(DYNAMIC_VERSIONED_NAME_SHORT) -Wl,-rpath,$(ORIGIN)/../../build/lib
else
ORIGIN := \$$ORIGIN
endif
# Custom compiler
ifdef CUSTOM_CXX
CXX := $(CUSTOM_CXX)
endif
# Static linking
ifneq (,$(findstring clang++,$(CXX)))
STATIC_LINK_COMMAND := -Wl,-force_load $(STATIC_NAME)
else ifneq (,$(findstring g++,$(CXX)))
STATIC_LINK_COMMAND := -Wl,--whole-archive $(STATIC_NAME) -Wl,--no-whole-archive
else
# The following line must not be indented with a tab, since we are not inside a target
$(error Cannot static link with the $(CXX) compiler)
endif
# Debugging
ifeq ($(DEBUG), 1)
COMMON_FLAGS += -DDEBUG -g -O0
NVCCFLAGS += -G
else
COMMON_FLAGS += -DNDEBUG -O2
endif
# cuDNN acceleration configuration.
ifeq ($(USE_CUDNN), 1)
LIBRARIES += cudnn
COMMON_FLAGS += -DUSE_CUDNN
endif
# NCCL acceleration configuration
ifeq ($(USE_NCCL), 1)
LIBRARIES += nccl
COMMON_FLAGS += -DUSE_NCCL
endif
# configure IO libraries
ifeq ($(USE_OPENCV), 1)
COMMON_FLAGS += -DUSE_OPENCV
endif
ifeq ($(USE_LEVELDB), 1)
COMMON_FLAGS += -DUSE_LEVELDB
endif
ifeq ($(USE_LMDB), 1)
COMMON_FLAGS += -DUSE_LMDB
ifeq ($(ALLOW_LMDB_NOLOCK), 1)
COMMON_FLAGS += -DALLOW_LMDB_NOLOCK
endif
endif
# This code is taken from https://github.com/sh1r0/caffe-android-lib
ifeq ($(USE_HDF5), 1)
COMMON_FLAGS += -DUSE_HDF5
endif
# CPU-only configuration
ifeq ($(CPU_ONLY), 1)
OBJS := $(PROTO_OBJS) $(CXX_OBJS)
TEST_OBJS := $(TEST_CXX_OBJS)
TEST_BINS := $(TEST_CXX_BINS)
ALL_WARNS := $(ALL_CXX_WARNS)
TEST_FILTER := --gtest_filter="-*GPU*"
COMMON_FLAGS += -DCPU_ONLY
endif
# Python layer support
ifeq ($(WITH_PYTHON_LAYER), 1)
COMMON_FLAGS += -DWITH_PYTHON_LAYER
LIBRARIES += $(PYTHON_LIBRARIES)
endif
# BLAS configuration (default = ATLAS)
BLAS ?= atlas
ifeq ($(BLAS), mkl)
# MKL
LIBRARIES += mkl_rt
COMMON_FLAGS += -DUSE_MKL
MKLROOT ?= /opt/intel/mkl
BLAS_INCLUDE ?= $(MKLROOT)/include
BLAS_LIB ?= $(MKLROOT)/lib $(MKLROOT)/lib/intel64
else ifeq ($(BLAS), open)
# OpenBLAS
LIBRARIES += openblas
else
# ATLAS
ifeq ($(LINUX), 1)
ifeq ($(BLAS), atlas)
# Linux simply has cblas and atlas
LIBRARIES += cblas atlas
endif
else ifeq ($(OSX), 1)
# OS X packages atlas as the vecLib framework
LIBRARIES += cblas
# 10.10 has accelerate while 10.9 has veclib
XCODE_CLT_VER := $(shell pkgutil --pkg-info=com.apple.pkg.CLTools_Executables | grep 'version' | sed 's/[^0-9]*\([0-9]\).*/\1/')
XCODE_CLT_GEQ_7 := $(shell [ $(XCODE_CLT_VER) -gt 6 ] && echo 1)
XCODE_CLT_GEQ_6 := $(shell [ $(XCODE_CLT_VER) -gt 5 ] && echo 1)
ifeq ($(XCODE_CLT_GEQ_7), 1)
BLAS_INCLUDE ?= /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/$(shell ls /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/ | sort | tail -1)/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/Headers
else ifeq ($(XCODE_CLT_GEQ_6), 1)
BLAS_INCLUDE ?= /System/Library/Frameworks/Accelerate.framework/Versions/Current/Frameworks/vecLib.framework/Headers/
LDFLAGS += -framework Accelerate
else
BLAS_INCLUDE ?= /System/Library/Frameworks/vecLib.framework/Versions/Current/Headers/
LDFLAGS += -framework vecLib
endif
endif
endif
INCLUDE_DIRS += $(BLAS_INCLUDE)
LIBRARY_DIRS += $(BLAS_LIB)
LIBRARY_DIRS += $(LIB_BUILD_DIR)
# Automatic dependency generation (nvcc is handled separately)
CXXFLAGS += -MMD -MP
# Complete build flags.
COMMON_FLAGS += $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))
CXXFLAGS += -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS)
#NVCCFLAGS += -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
NVCCFLAGS += -D_FORCE_INLINES -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
# mex may invoke an older gcc that is too liberal with -Wuninitalized
MATLAB_CXXFLAGS := $(CXXFLAGS) -Wno-uninitialized
LINKFLAGS += -pthread -fPIC $(COMMON_FLAGS) $(WARNINGS)
USE_PKG_CONFIG ?= 0
ifeq ($(USE_PKG_CONFIG), 1)
PKG_CONFIG := $(shell pkg-config opencv --libs)
else
PKG_CONFIG :=
endif
LDFLAGS += $(foreach librarydir,$(LIBRARY_DIRS),-L$(librarydir)) $(PKG_CONFIG) \
$(foreach library,$(LIBRARIES),-l$(library))
PYTHON_LDFLAGS := $(LDFLAGS) $(foreach library,$(PYTHON_LIBRARIES),-l$(library))
# 'superclean' target recursively* deletes all files ending with an extension
# in $(SUPERCLEAN_EXTS) below. This may be useful if you've built older
# versions of Caffe that do not place all generated files in a location known
# to the 'clean' target.
#
# 'supercleanlist' will list the files to be deleted by make superclean.
#
# * Recursive with the exception that symbolic links are never followed, per the
# default behavior of 'find'.
SUPERCLEAN_EXTS := .so .a .o .bin .testbin .pb.cc .pb.h _pb2.py .cuo
# Set the sub-targets of the 'everything' target.
EVERYTHING_TARGETS := all py$(PROJECT) test warn lint
# Only build matcaffe as part of "everything" if MATLAB_DIR is specified.
ifneq ($(MATLAB_DIR),)
EVERYTHING_TARGETS += mat$(PROJECT)
endif
##############################
# Define build targets
##############################
.PHONY: all lib test clean docs linecount lint lintclean tools examples $(DIST_ALIASES) \
py mat py$(PROJECT) mat$(PROJECT) proto runtest \
superclean supercleanlist supercleanfiles warn everything
all: lib tools examples
lib: $(STATIC_NAME) $(DYNAMIC_NAME)
everything: $(EVERYTHING_TARGETS)
linecount:
cloc --read-lang-def=$(PROJECT).cloc \
src/$(PROJECT) include/$(PROJECT) tools examples \
python matlab
lint: $(EMPTY_LINT_REPORT)
lintclean:
@ $(RM) -r $(LINT_OUTPUT_DIR) $(EMPTY_LINT_REPORT) $(NONEMPTY_LINT_REPORT)
docs: $(DOXYGEN_OUTPUT_DIR)
@ cd ./docs ; ln -sfn ../$(DOXYGEN_OUTPUT_DIR)/html doxygen
$(DOXYGEN_OUTPUT_DIR): $(DOXYGEN_CONFIG_FILE) $(DOXYGEN_SOURCES)
$(DOXYGEN_COMMAND) $(DOXYGEN_CONFIG_FILE)
$(EMPTY_LINT_REPORT): $(LINT_OUTPUTS) | $(BUILD_DIR)
@ cat $(LINT_OUTPUTS) > $@
@ if [ -s "$@" ]; then \
cat $@; \
mv $@ $(NONEMPTY_LINT_REPORT); \
echo "Found one or more lint errors."; \
exit 1; \
fi; \
$(RM) $(NONEMPTY_LINT_REPORT); \
echo "No lint errors!";
$(LINT_OUTPUTS): $(LINT_OUTPUT_DIR)/%.lint.txt : % $(LINT_SCRIPT) | $(LINT_OUTPUT_DIR)
@ mkdir -p $(dir $@)
@ python $(LINT_SCRIPT) $< 2>&1 \
| grep -v "^Done processing " \
| grep -v "^Total errors found: 0" \
> $@ \
|| true
test: $(TEST_ALL_BIN) $(TEST_ALL_DYNLINK_BIN) $(TEST_BINS)
tools: $(TOOL_BINS) $(TOOL_BIN_LINKS)
examples: $(EXAMPLE_BINS)
py$(PROJECT): py
py: $(PY$(PROJECT)_SO) $(PROTO_GEN_PY)
$(PY$(PROJECT)_SO): $(PY$(PROJECT)_SRC) $(PY$(PROJECT)_HXX) | $(DYNAMIC_NAME)
@ echo CXX/LD -o $@ $<
$(Q)$(CXX) -shared -o $@ $(PY$(PROJECT)_SRC) \
-o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(PYTHON_LDFLAGS) \
-Wl,-rpath,$(ORIGIN)/../../build/lib
mat$(PROJECT): mat
mat: $(MAT$(PROJECT)_SO)
$(MAT$(PROJECT)_SO): $(MAT$(PROJECT)_SRC) $(STATIC_NAME)
@ if [ -z "$(MATLAB_DIR)" ]; then \
echo "MATLAB_DIR must be specified in $(CONFIG_FILE)" \
"to build mat$(PROJECT)."; \
exit 1; \
fi
@ echo MEX $<
$(Q)$(MATLAB_DIR)/bin/mex $(MAT$(PROJECT)_SRC) \
CXX="$(CXX)" \
CXXFLAGS="\$$CXXFLAGS $(MATLAB_CXXFLAGS)" \
CXXLIBS="\$$CXXLIBS $(STATIC_LINK_COMMAND) $(LDFLAGS)" -output $@
@ if [ -f "$(PROJECT)_.d" ]; then \
mv -f $(PROJECT)_.d $(BUILD_DIR)/${MAT$(PROJECT)_SO:.$(MAT_SO_EXT)=.d}; \
fi
runtest: $(TEST_ALL_BIN)
$(TOOL_BUILD_DIR)/caffe
$(TEST_ALL_BIN) $(TEST_GPUID) --gtest_shuffle $(TEST_FILTER)
pytest: py
cd python; python -m unittest discover -s caffe/test
mattest: mat
cd matlab; $(MATLAB_DIR)/bin/matlab -nodisplay -r 'caffe.run_tests(), exit()'
warn: $(EMPTY_WARN_REPORT)
$(EMPTY_WARN_REPORT): $(ALL_WARNS) | $(BUILD_DIR)
@ cat $(ALL_WARNS) > $@
@ if [ -s "$@" ]; then \
cat $@; \
mv $@ $(NONEMPTY_WARN_REPORT); \
echo "Compiler produced one or more warnings."; \
exit 1; \
fi; \
$(RM) $(NONEMPTY_WARN_REPORT); \
echo "No compiler warnings!";
$(ALL_WARNS): %.o.$(WARNS_EXT) : %.o
$(BUILD_DIR_LINK): $(BUILD_DIR)/.linked
# Create a target ".linked" in this BUILD_DIR to tell Make that the "build" link
# is currently correct, then delete the one in the OTHER_BUILD_DIR in case it
# exists and $(DEBUG) is toggled later.
$(BUILD_DIR)/.linked:
@ mkdir -p $(BUILD_DIR)
@ $(RM) $(OTHER_BUILD_DIR)/.linked
@ $(RM) -r $(BUILD_DIR_LINK)
@ ln -s $(BUILD_DIR) $(BUILD_DIR_LINK)
@ touch $@
$(ALL_BUILD_DIRS): | $(BUILD_DIR_LINK)
@ mkdir -p $@
$(DYNAMIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
@ echo LD -o $@
$(Q)$(CXX) -shared -o $@ $(OBJS) $(VERSIONFLAGS) $(LINKFLAGS) $(LDFLAGS)
@ cd $(BUILD_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT); ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)
$(STATIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
@ echo AR -o $@
$(Q)ar rcs $@ $(OBJS)
$(BUILD_DIR)/%.o: %.cpp $(PROTO_GEN_HEADER) | $(ALL_BUILD_DIRS)
@ echo CXX $<
$(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \
|| (cat $@.$(WARNS_EXT); exit 1)
@ cat $@.$(WARNS_EXT)
$(PROTO_BUILD_DIR)/%.pb.o: $(PROTO_BUILD_DIR)/%.pb.cc $(PROTO_GEN_HEADER) \
| $(PROTO_BUILD_DIR)
@ echo CXX $<
$(Q)$(CXX) $< $(CXXFLAGS) -c -o $@ 2> $@.$(WARNS_EXT) \
|| (cat $@.$(WARNS_EXT); exit 1)
@ cat $@.$(WARNS_EXT)
$(BUILD_DIR)/cuda/%.o: %.cu | $(ALL_BUILD_DIRS)
@ echo NVCC $<
$(Q)$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -M $< -o ${@:.o=.d} \
-odir $(@D)
$(Q)$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -c $< -o $@ 2> $@.$(WARNS_EXT) \
|| (cat $@.$(WARNS_EXT); exit 1)
@ cat $@.$(WARNS_EXT)
$(TEST_ALL_BIN): $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \
| $(DYNAMIC_NAME) $(TEST_BIN_DIR)
@ echo CXX/LD -o $@ $<
$(Q)$(CXX) $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \
-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib
$(TEST_CU_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CU_BUILD_DIR)/%.o \
$(GTEST_OBJ) | $(DYNAMIC_NAME) $(TEST_BIN_DIR)
@ echo LD $<
$(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) \
-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib
$(TEST_CXX_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CXX_BUILD_DIR)/%.o \
$(GTEST_OBJ) | $(DYNAMIC_NAME) $(TEST_BIN_DIR)
@ echo LD $<
$(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) \
-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib
# Target for extension-less symlinks to tool binaries with extension '*.bin'.
$(TOOL_BUILD_DIR)/%: $(TOOL_BUILD_DIR)/%.bin | $(TOOL_BUILD_DIR)
@ $(RM) $@
@ ln -s $(notdir $<) $@
$(TOOL_BINS): %.bin : %.o | $(DYNAMIC_NAME)
@ echo CXX/LD -o $@
$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(LDFLAGS) \
-Wl,-rpath,$(ORIGIN)/../lib
$(EXAMPLE_BINS): %.bin : %.o | $(DYNAMIC_NAME)
@ echo CXX/LD -o $@
$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(LDFLAGS) \
-Wl,-rpath,$(ORIGIN)/../../lib
proto: $(PROTO_GEN_CC) $(PROTO_GEN_HEADER)
$(PROTO_BUILD_DIR)/%.pb.cc $(PROTO_BUILD_DIR)/%.pb.h : \
$(PROTO_SRC_DIR)/%.proto | $(PROTO_BUILD_DIR)
@ echo PROTOC $<
$(Q)protoc --proto_path=$(PROTO_SRC_DIR) --cpp_out=$(PROTO_BUILD_DIR) $<
$(PY_PROTO_BUILD_DIR)/%_pb2.py : $(PROTO_SRC_DIR)/%.proto \
$(PY_PROTO_INIT) | $(PY_PROTO_BUILD_DIR)
@ echo PROTOC \(python\) $<
$(Q)protoc --proto_path=src --python_out=python $<
$(PY_PROTO_INIT): | $(PY_PROTO_BUILD_DIR)
touch $(PY_PROTO_INIT)
clean:
@- $(RM) -rf $(ALL_BUILD_DIRS)
@- $(RM) -rf $(OTHER_BUILD_DIR)
@- $(RM) -rf $(BUILD_DIR_LINK)
@- $(RM) -rf $(DISTRIBUTE_DIR)
@- $(RM) $(PY$(PROJECT)_SO)
@- $(RM) $(MAT$(PROJECT)_SO)
supercleanfiles:
$(eval SUPERCLEAN_FILES := $(strip \
$(foreach ext,$(SUPERCLEAN_EXTS), $(shell find . -name '*$(ext)' \
-not -path './data/*'))))
supercleanlist: supercleanfiles
@ \
if [ -z "$(SUPERCLEAN_FILES)" ]; then \
echo "No generated files found."; \
else \
echo $(SUPERCLEAN_FILES) | tr ' ' '\n'; \
fi
superclean: clean supercleanfiles
@ \
if [ -z "$(SUPERCLEAN_FILES)" ]; then \
echo "No generated files found."; \
else \
echo "Deleting the following generated files:"; \
echo $(SUPERCLEAN_FILES) | tr ' ' '\n'; \
$(RM) $(SUPERCLEAN_FILES); \
fi
$(DIST_ALIASES): $(DISTRIBUTE_DIR)
$(DISTRIBUTE_DIR): all py | $(DISTRIBUTE_SUBDIRS)
# add proto
cp -r src/caffe/proto $(DISTRIBUTE_DIR)/
# add include
cp -r include $(DISTRIBUTE_DIR)/
mkdir -p $(DISTRIBUTE_DIR)/include/caffe/proto
cp $(PROTO_GEN_HEADER_SRCS) $(DISTRIBUTE_DIR)/include/caffe/proto
# add tool and example binaries
cp $(TOOL_BINS) $(DISTRIBUTE_DIR)/bin
cp $(EXAMPLE_BINS) $(DISTRIBUTE_DIR)/bin
# add libraries
cp $(STATIC_NAME) $(DISTRIBUTE_DIR)/lib
install -m 644 $(DYNAMIC_NAME) $(DISTRIBUTE_DIR)/lib
cd $(DISTRIBUTE_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT); ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)
# add python - it's not the standard way, indeed...
cp -r python $(DISTRIBUTE_DIR)/
-include $(DEPS)
8. Point Caffe at the serial HDF5 headers and libraries
Inside the Caffe source tree, rewrite the hdf5 includes to their serial variants:
cd /home/nvidia/caffe
find . -type f -exec sed -i -e 's^"hdf5.h"^"hdf5/serial/hdf5.h"^g' -e 's^"hdf5_hl.h"^"hdf5/serial/hdf5_hl.h"^g' '{}' \;
Then switch to the library directory:
cd /usr/lib/aarch64-linux-gnu
Run the following two commands to create the links:
sudo ln -s libhdf5_serial.so.10.1.0 libhdf5.so
sudo ln -s libhdf5_serial_hl.so.10.0.2 libhdf5_hl.so
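A quick check that the links point where you expect (the exact .so version numbers depend on the Ubuntu packages on your image):
ls -l /usr/lib/aarch64-linux-gnu/libhdf5.so /usr/lib/aarch64-linux-gnu/libhdf5_hl.so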
9. Build Caffe with make
cd /home/nvidia/caffe
make all -j4
make test -j4
make runtest -j4
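The build takes a while on the TX2. Optionally, before running the make commands above, you can put the board into its maximum-performance mode; this is a sketch that assumes the stock L4T 28.x tools (nvpmodel, and the jetson_clocks.sh script that JetPack places in the nvidia home directory):
sudo nvpmodel -m 0                    # "MAXN" mode: all CPU cores enabled at full clocks
sudo /home/nvidia/jetson_clocks.sh    # pin the clocks at their maximums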
10. Build the Caffe Python bindings
cd /home/nvidia/caffe
cd python
Create a helper script gen.sh that installs the Python requirements:
touch gen.sh
sudo gedit gen.sh
Put the following line into gen.sh:
for req in $(cat requirements.txt); do pip install $req; done
Make it executable:
chmod +x gen.sh
Run the script:
./gen.sh
Installing the requirements is slow; be patient.
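Note that gen.sh assumes pip is already available for the default Python and that you are allowed to install packages; if not, install pip first and/or run the script with sudo, e.g.:
sudo apt-get install python-pip
sudo ./gen.sh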
Add the environment variables:
sudo gedit /etc/profile
Append the following lines (the Caffe source was cloned to ~/caffe, so PYTHONPATH points there):
export PYTHONPATH=${HOME}/caffe/python:$PYTHONPATH
export LD_LIBRARY_PATH=${HOME}/caffe/build/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH
Apply them with source:
source /etc/profile
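You can confirm the variables took effect:
echo $PYTHONPATH
echo $LD_LIBRARY_PATH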
Build pycaffe and the distribute tree:
cd /home/nvidia/caffe
make pycaffe
make distribute
Verify that the Python module works:
nvidia@tegra-ubuntu:~$ python
Python 3.5.2 (default, Nov 12 2018, 13:43:14)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import caffe
>>> caffe.__
caffe.__class__( caffe.__hash__( caffe.__reduce__(
caffe.__delattr__( caffe.__init__( caffe.__reduce_ex__(
caffe.__dict__ caffe.__le__( caffe.__repr__(
caffe.__dir__( caffe.__loader__ caffe.__setattr__(
caffe.__doc__ caffe.__lt__( caffe.__sizeof__(
caffe.__eq__( caffe.__name__ caffe.__spec__
caffe.__format__( caffe.__ne__( caffe.__str__(
caffe.__ge__( caffe.__new__( caffe.__subclasshook__(
caffe.__getattribute__( caffe.__package__
caffe.__gt__( caffe.__path__
>>> caffe.__
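If you want a slightly stronger check than tab completion, a one-line smoke test (a sketch; set_mode_gpu() only succeeds if the CUDA/cuDNN build went through):
python -c "import caffe; caffe.set_mode_gpu(); print('caffe GPU mode OK')"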
If you have made it this far, the installation should have succeeded. Good luck!