Photo by Harrison Broadbent on Unsplash

Building OpenCV 4.1.0 on Raspbian Buster and Raspberry PI4

The Raspberry PI 4 hit the streets and it is an exciting upgrade. More memory (MUCH more memory), a faster cpu, dual 4K video, and many more features. One of the updates is that Python 3.7.x is now part of the OS distribution, so you do not have to install Python 3.7 yourself.

The first thing I wanted to do with the new RaspberryPI 4 was to install OpenCV and my favorite image libraries to perform some facial recognition.

I have a Github repo with the scripts that I keep current as the things change. You can reach that repo here: youngsoul/rpi_opencv_install

A lot of my inspiration around working with computer vision and the Raspberry PI comes from Adrian Rosebrock, who runs PyImageSearch. I highly encourage you to check out his site and its amazing articles. He has written a number of books and courses that you can purchase, and the ones I have gone through have been excellent.

Raspbian Buster with desktop

I started with the June 20, 2019 version of Raspbian, and I am going to assume you know how to burn the image onto an SD card. I will mention that I use ApplePI-Baker V2. ApplePI-Baker lets me write images to the SD card, but it also lets me back up images from the SD card to my computer so I can re-burn new images with OpenCV and all of the computer vision libraries that I like to use.

Things to consider before you build

  • Have a fan and/or heatsink on your RaspberryPI cpu. A number of libraries have to be compiled, which takes hours, and your cpu will get very hot.
  • Run the commands with nohup, or use ‘screen’. I use screen to run the commands in the background because they take a very long time and I can come back later after my ssh connection has dropped and still see the output from the command.
  • Read through my instructions and make changes as you see fit. For example, I created a virtual python environment in: /home/pi/.virtualenvs/cv2_env however you might prefer another location.
  • I have never actually executed the scripts as scripts. Instead I run each command individually to make sure I see any and all errors. At some point I will kick it off as a script and walk away but for now think of the script as a recipe.

Install OpenCV 4.1.0

Below is the script/recipe I used to build/install OpenCV 4.1.0 on the Buster version of Raspbian. For the latest see my github repo using the link at the beginning of the article.

# script or instructions to install opencv 4.1.0
# on raspberry pi with buster version of raspbian

sudo apt-get -y update
sudo apt-get -y upgrade
sudo apt-get -y install screen
sudo apt-get -y install htop

# -------------------
# run the screen command to put this session in the background
# e.g.: screen -S opencv-build
# -------------------

# -i update the file
# substitute string1 for string2 globally
sudo sed -i 's/CONF_SWAPSIZE=100/CONF_SWAPSIZE=2048/g' /etc/dphys-swapfile

sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start

# -i update the file
# $ regex for end of the file
# a append
# then the text to append.
sudo sed -i '$a gpu_mem=128' /boot/config.txt

sudo apt-get -y install build-essential cmake pkg-config
# note: on Buster, libpng12-dev has been replaced by libpng-dev
sudo apt-get -y install libjpeg-dev libtiff-dev libjasper-dev libpng-dev
sudo apt-get -y install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get -y install libxvidcore-dev libx264-dev
sudo apt-get -y install libgtk2.0-dev libgtk-3-dev
sudo apt-get -y install libatlas-base-dev gfortran

sudo apt-get -y install python3-dev

cd ~

wget -O opencv.zip https://github.com/opencv/opencv/archive/4.1.0.zip

wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.1.0.zip

unzip opencv.zip
unzip opencv_contrib.zip

# yes you really need to rename the directories
mv opencv-4.1.0/ opencv
mv opencv_contrib-4.1.0/ opencv_contrib

# make a python3.7 env so the build uses the Python 3.7 that ships with Buster
mkdir -p .virtualenvs
python3 -m venv .virtualenvs/cv2_env
source .virtualenvs/cv2_env/bin/activate

# took a really long time
pip3 install numpy

cd ~/opencv
mkdir build
cd build

# configure the build; flags other than OPENCV_EXTRA_MODULES_PATH are the
# typical ones for a Raspberry PI build - check my github repo for the
# exact set I used
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
    -D ENABLE_NEON=ON \
    -D ENABLE_VFPV3=ON \
    -D BUILD_TESTS=OFF \
    -D INSTALL_PYTHON_EXAMPLES=OFF \
    -D BUILD_EXAMPLES=OFF ..

make -j4

sudo make install
sudo ldconfig
sudo apt-get update

cd ~/opencv/build/lib/python3
mkdir -p ~/lib/cv2
# copy the built cv2 python bindings (the .so file); the exact filename
# depends on the python version, e.g. cv2.cpython-37m-arm-linux-gnueabihf.so
cp *.so ~/lib/cv2
ln -s ~/lib/cv2/ ~/.virtualenvs/cv2_env/lib/python3.7/site-packages/

cd ~

# update /etc/dphys-swapfile to restore the original size
# set CONF_SWAPSIZE=100
sudo sed -i 's/CONF_SWAPSIZE=2048/CONF_SWAPSIZE=100/g' /etc/dphys-swapfile

sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start

# update /boot/config.txt to restore the gpu memory split
sudo sed -i 's/gpu_mem=128/gpu_mem=16/g' /boot/config.txt


sudo rm -r opencv
sudo rm -r opencv_contrib

# to test
# source ~/.virtualenvs/cv2_env/bin/activate
# python
# import cv2
# print(cv2.__version__)
# you should see 4.1.0

That will take hours to complete. At this point, I would use ApplePI-Baker to copy that image to your computer so you always have a fresh Buster image with OpenCV 4.1.0.

Install Image Libraries

My initial interest was working through some of the tutorials and articles on PyImageSearch. Those articles required the following libraries:

- numpy
- scipy
- scikit-image
- dlib
- face_recognition
- imutils
- picamera
- cython
- zmq

The one that might be surprising is cython. When I tried to install ‘zmq’, an error occurred that instructed me to install Cython first. If anyone knows why that would be the case, or a better way to install zmq, please leave a comment.

Below is the script/recipe that I used to install the necessary libraries.

# install image libraries to raspberry pi

sudo sed -i 's/CONF_SWAPSIZE=100/CONF_SWAPSIZE=1024/g' /etc/dphys-swapfile

sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start

free -m

sudo apt-get update
sudo apt-get install -y build-essential cmake
sudo apt-get install -y libgtk-3-dev
sudo apt-get install -y libboost-all-dev

source .virtualenvs/cv2_env/bin/activate

pip install numpy
pip install scipy
pip install scikit-image

pip install dlib
pip install face_recognition

pip install imutils
pip install picamera

# if you are building on buster python 3.7
# and you want zmq you need to install cython
# not sure why it was not already there
pip install cython
pip install zmq

sudo sed -i 's/CONF_SWAPSIZE=1024/CONF_SWAPSIZE=100/g' /etc/dphys-swapfile

sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start

Again, I would recommend you use ApplePI-Baker to copy your image to your computer so you can burn multiple copies later. If you are like me, you have multiple Raspberry PIs with cameras to work with.

Using the new Raspberry PI Image

Let’s put that hard work into action.

Live video streaming with ImageZMQ

This project is based off of a PyImageSearch article. That article covered a very handy image messaging library that is built on top of ZMQ and specifically adapted to send images. That library is called ImageZMQ, written by Jeff Bass. I really liked the library, and I wanted to add a couple of items, so I forked Jeff’s repo; my version can be found here. The primary difference is that the client can automatically reconnect to the server should the server restart. I also added a helper class called ‘AsyncImageSender’ that handles the details of sending images in the background while leveraging the auto-reconnect, and a dashboard class called ‘ImageMontage’.
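To make the background-sending idea concrete, here is a minimal sketch of how an AsyncImageSender-style helper can work: a bounded queue plus a worker thread, so the capture loop never blocks on the network. This is an illustration using only the standard library, not the actual ImageZMQ or AsyncImageSender API; the class name, the `send_fn` callback, and the `backlog` parameter are stand-ins for the real implementation in the forked repo.

```python
import queue
import threading

class AsyncSenderSketch:
    """Illustrative stand-in for an AsyncImageSender-style helper:
    frames are queued and sent by a background thread; when the queue
    is full (the 'backlog' limit), the oldest frame is dropped."""

    def __init__(self, send_fn, backlog=5):
        self._send_fn = send_fn          # e.g. a function that pushes a frame over ZMQ
        self._q = queue.Queue(maxsize=backlog)
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def send_frame(self, frame):
        # never block the capture loop: if the network cannot keep up
        # with the camera, discard the oldest queued frame
        try:
            self._q.put_nowait(frame)
        except queue.Full:
            try:
                self._q.get_nowait()     # discard oldest
            except queue.Empty:
                pass
            self._q.put_nowait(frame)

    def _run(self):
        while True:
            frame = self._q.get()
            if frame is None:            # sentinel used by stop()
                break
            self._send_fn(frame)

    def stop(self):
        self._q.put(None)
        self._worker.join()
```

In the real library, the send callback would wrap the ImageZMQ sender, and the auto-reconnect logic would live around that call.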

Caveat: I have only run the server on a Mac. If there is a Windows user out there that can try to run the server on Windows and let me know what changes are necessary I will incorporate them.

Step 1 — Setup your computer

Start by cloning my imagezmq repo and installing the requirements into your Python 3.6/3.7 virtual env.

In the tests directory there is a file named: ‘’.

This test file uses a utility class called ‘ImageMontage’, which is a helpful wrapper around the imutils ‘build_montage’ function. This is a very handy utility for viewing multiple image streams. Depending upon how many Raspberry PIs you are running, you should change the number of rows and columns. For example, 1 row and 3 columns will create a dashboard of 3 sub-windows in a row, each showing the view from a different camera. ImageMontage uses the Raspberry PI hostname to know where to put the updated images in the dashboard, so be sure to set a distinct hostname on each Raspberry PI.
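The placement logic described above boils down to pinning each sender hostname to a fixed cell of a rows × columns grid. Here is a small stdlib-only sketch of that bookkeeping; it is an illustration of the idea, not the actual ImageMontage API, and the class and method names are hypothetical.

```python
class MontageLayoutSketch:
    """Illustrative stand-in for the hostname-to-cell bookkeeping an
    ImageMontage-style dashboard needs: each distinct sender hostname
    is pinned to one cell of a rows x cols grid, in arrival order."""

    def __init__(self, rows, cols):
        self.rows = rows
        self.cols = cols
        self._cells = {}                     # hostname -> (row, col)

    def cell_for(self, hostname):
        # assign the next free cell the first time a hostname is seen,
        # then always return the same cell for that hostname
        if hostname not in self._cells:
            idx = len(self._cells)
            if idx >= self.rows * self.cols:
                raise ValueError("more senders than montage cells")
            self._cells[hostname] = divmod(idx, self.cols)
        return self._cells[hostname]
```

With 1 row and 3 columns, the first three hostnames seen land in cells (0, 0), (0, 1), and (0, 2), which is why each Raspberry PI needs a distinct hostname: two PIs with the same hostname would overwrite each other's sub-window.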

Assuming you have Python 3.6 or better, you will just need to execute in a terminal window:

source <your venv>


Step 2 — Setup the Raspberry PI

With your new Raspbian Buster image, with OpenCV and the additional image libraries, start by cloning my imagezmq repo onto each Raspberry PI that you want to stream images from.

On the Raspberry PI (assuming you are running in a terminal window on the PI or via ssh)

git clone

In the test directory there is a file named: ‘’.

You can execute this test script as:

python <test script> --backlog=5


python <test script>

When using the async version, the backlog parameter sets an upper limit on the number of messages in the queue waiting to be sent, so you don’t get too, well, backlogged.

This will start the test script which will send video frames to the server from Step 1.

Below is an image of the dashboard using 4 Raspberry PIs (2 RPI4 and 2 RPI3) showing images of Timmy the Geek Monkey.

After you have this base install, you can start to use the Raspberry PIs for facial recognition or any number of computer vision projects.

While the Raspberry PI is not very fast — it is still pretty amazing what you can do with a little $35 mini linux computer.
