sh Raspberry Pi: install Intel Movidius NCSDKv2 + optimized OpenCV
# Set Time zone
#timedatectl set-timezone Asia/Taipei
# Make sure all operations were under home directory
cd ~
# Clean and remove softwares
sudo apt-get -y purge wolfram-engine
sudo apt-get -y purge libreoffice*
sudo apt-get -y clean
sudo apt-get -y autoremove
# Install dependencies
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install -y git mc python-pip python3-pip cython3
sudo apt-get install -y build-essential cmake pkg-config
sudo apt-get install -y libjpeg-dev libtiff5-dev libjasper1 libjasper-dev libpng12-dev
sudo apt-get install -y libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install -y libxvidcore-dev libx264-dev
sudo apt-get install -y libgtk2.0-dev libgtk-3-dev
sudo apt-get install -y libcanberra-gtk*
sudo apt-get install -y libatlas-base-dev gfortran
sudo apt-get install -y python2.7-dev python3-dev
# Create your Python virtual environment
wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
sudo python3 get-pip.py
sudo pip install virtualenv virtualenvwrapper numpy
sudo rm -rf ~/.cache/pip
# virtualenv and virtualenvwrapper
echo "# virtualenv and virtualenvwrapper" >> ~/.profile
cat ~/.profile
echo "export WORKON_HOME=\$HOME/.virtualenvs" >> ~/.profile
cat ~/.profile
echo "export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3" >> ~/.profile
cat ~/.profile
echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.profile
cat ~/.profile
# Reload profile
source ~/.profile
# Create a virtualenv named 'cv'
mkvirtualenv cv -p python3
# Activate the 'cv' virtualenv
workon cv
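# Optional check: the prompt should now show "(cv)", and python should
# resolve to the virtualenv's own Python 3 interpreter:
which python
python --version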
# Return to the home directory
cd ~
# Increase swapfile size
sudo nano /etc/dphys-swapfile
#Change: CONF_SWAPSIZE=100 => CONF_SWAPSIZE=2048
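# Non-interactive alternative to the nano edit above (assumes the stock
# CONF_SWAPSIZE=100 line is present in /etc/dphys-swapfile):
#sudo sed -i 's/^CONF_SWAPSIZE=100$/CONF_SWAPSIZE=2048/' /etc/dphys-swapfile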
# Restart swapfile service
sudo /etc/init.d/dphys-swapfile restart
swapon -s
# Install required packages
sudo -H pip3 install cython numpy pillow
# Install tensorflow 1.9.0
sudo pip3 uninstall tensorflow
wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.9.0/tensorflow-1.9.0-cp35-none-linux_armv7l.whl
sudo pip3 install tensorflow-1.9.0-cp35-none-linux_armv7l.whl
rm tensorflow-1.9.0-cp35-none-linux_armv7l.whl
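# Quick check that the wheel installed into the system python3 (should print
# 1.9.0); run it outside the 'cv' virtualenv, e.g. after 'deactivate':
#python3 -c "import tensorflow as tf; print(tf.__version__)"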
git clone -b ncsdk2 https://github.com/Movidius/ncsdk
cd ncsdk
nano ncsdk.conf
# MAKE_NJOBS=1
# USE_VIRTUALENV=yes
#sudo nano install-opencv.sh
# The default OpenCV version there is 3.3.0;
# change it to 3.4.2 if you want a newer build.
sudo make install
# Optional: if the SDK also installed Caffe, add it to PYTHONPATH
#export PYTHONPATH="${PYTHONPATH}:/opt/movidius/caffe/python"
git clone -b ncsdk2 https://github.com/movidius/ncappzoo
cd ncappzoo/apps/hello_ncs_py
sudo ./install-opencv.sh
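# Rough sanity check that the NCSDK v2 Python API is importable and can see a
# plugged-in stick (assumes 'make install' above put the mvnc module on the
# python used here; an empty list means no device was detected):
python3 -c "from mvnc import mvncapi; print(mvncapi.enumerate_devices())"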
# Restore swapfile size to default
sudo nano /etc/dphys-swapfile
#Change: CONF_SWAPSIZE=2048 => CONF_SWAPSIZE=100
# Restart swapfile service
sudo /etc/init.d/dphys-swapfile restart
swapon -s
# Download opencv version 3.3.0
cd ~
wget -O opencv.zip https://github.com/opencv/opencv/archive/3.3.0.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/3.3.0.zip
unzip opencv.zip
unzip opencv_contrib.zip
# Get into virtualenv
source ~/.profile
workon cv
# Configure and Build opencv
cd ~/opencv-3.3.0/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
-D ENABLE_NEON=ON \
-D ENABLE_VFPV3=ON \
-D BUILD_TESTS=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D BUILD_EXAMPLES=OFF ..
make -j4
# Install optimized opencv
sudo make install
sudo ldconfig
# Link the OpenCV bindings into your Python 3 virtual environment
# (the exact path of the installed cv2 module may differ depending on the build)
cd ~/.virtualenvs/cv/lib/python3.5/site-packages/
ln -s /usr/local/python/cv2 cv2
cd ~
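# Verify the link from inside the 'cv' virtualenv (should print 3.3.0):
python -c "import cv2; print(cv2.__version__)"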
Thanks for this great piece of work.
You can test a training state or a trained model in two ways:
Directly with this line:
./flow --imgdir sample_img/ --model cfg/tiny-yolo-voc-1c.cfg --load -1
or
You can first save your trained graph and then show the result directly from the .pb file.
a.) ./flow --model cfg/tiny-yolo-voc-1c.cfg --load -1 --savepb
b.) ./flow --pbLoad built_graph/tiny-yolo-voc-1c.pb --metaLoad built_graph/tiny-yolo-voc-1c.meta --imgdir sample_img/
The second way only works if you have .pb files with fewer than 14000 stages (info from @Savash2016).
If you, for example, create a .pb file with 37000 stages, you no longer see any bounding boxes in the predicted pictures. Even lowering the threshold to 0.0000001 won't help.
This issue has already been mentioned in #297, #330 and #597.
@abagshaw, you changed the title from "I FOUND AN ERROR pleaseeeee fix it @thtrieu" to "savepb files, after training with more than 14000 stages" on 6 Jul 2017 in #330. Do you know why that issue was closed? Am I missing something, or am I doing something wrong?
I am a bit lost here, since in #170 an iOS application with a .pb file seems to work, even though I am not sure which .weights or trained models were used.
I hope we can solve this issue together!
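In case it helps with debugging: a rough way to check that a large .pb at least parses is the sketch below (it assumes TensorFlow 1.x and darkflow's default built_graph/ file names; it only counts graph nodes, it does not run detection):
python3 - <<'EOF'
# Debug helper: parse the frozen graph and count its nodes.
# A truncated or incompletely written .pb usually fails in ParseFromString.
import tensorflow as tf
graph_def = tf.GraphDef()
with tf.gfile.GFile('built_graph/tiny-yolo-voc-1c.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print(len(graph_def.node), 'nodes; last node:', graph_def.node[-1].name)
EOF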
# Install the pm2 process manager (used below to auto-start Jupyter Lab)
npm i -g pm2
## Very important!!! ##
sudo env PATH=$PATH:/usr/local/bin pm2 startup systemd -u pi --hp /home/pi
# start up jupyter lab
pm2 start "jupyter lab --ip=0.0.0.0"
pm2 save
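# Optional: confirm pm2 registered the Jupyter process and saved it so the
# systemd unit created above brings it back after a reboot:
pm2 list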