Welcome to SparkFun's MicroPython port of OpenCV! This is the first known MicroPython port of OpenCV, which opens up a whole new world of vision processing abilities on embedded devices in a Python environment!
As the first port, there may be incomplete or missing features, and some rough edges. For example, we have only implemented support for the Raspberry Pi RP2350 so far, and some of the build procedures are hard-coded for that. We'd be happy to work with the community to create an official port in the future, but until then, this repo is available and fully open-source for anyone to use!
Below are example code snippets of features available in this port of OpenCV. We've done our best to make it as similar as possible to standard OpenCV, but there are some necessary API changes due to the limitations of MicroPython.
```python
# Import OpenCV, just like any other Python environment!
import cv2 as cv

# Import ulab NumPy and initialize an image, almost like any other Python
# environment!
from ulab import numpy as np
img = np.zeros((240, 320, 3), dtype=np.uint8)

# Call OpenCV functions, just like standard OpenCV!
img = cv.putText(img, "Hello OpenCV!", (50, 200), cv.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
img = cv.Canny(img, 100, 200)

# Call `cv.imshow()`, almost like standard OpenCV! Instead of passing a window
# name string, you pass a display driver that implements an `imshow()` method
# that takes a NumPy array as input.
cv.imshow(display, img)

# Call `cv.waitKey()`, just like standard OpenCV! Unlike standard OpenCV, this
# waits for a key press on the REPL instead of a window, and it is not necessary
# to call it after `cv.imshow()` because display drivers show images immediately.
key = cv.waitKey(0)

# Use a camera, similar to standard OpenCV! `cv.VideoCapture()` is not used in
# MicroPython-OpenCV; instead, you initialize a separate camera driver that
# implements the same methods as the OpenCV `VideoCapture` class.
camera.open()
success, frame = camera.read()
camera.release()

# Call `cv.imread()` and `cv.imwrite()` to read and write images to and from
# the MicroPython filesystem, just like standard OpenCV! The path can also point
# to an SD card if one is mounted for extra storage space.
img = cv.imread("path/to/image.png")
success = cv.imwrite("path/to/image.png", img)
```

For full examples, see our Red Vision repo.
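Putting the snippets above together, a minimal processing loop might look like the sketch below. Note that `camera` and `display` are placeholder driver objects as described above, not part of this port:

```python
import cv2 as cv

camera.open()
while True:
    success, frame = camera.read()
    if not success:
        continue
    edges = cv.Canny(frame, 100, 200)
    cv.imshow(display, edges)
    # Check the REPL for a 'q' key press to exit
    if cv.waitKey(1) == ord("q"):
        break
camera.release()
```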
Limit your expectations. OpenCV typically runs on full desktop systems with processors running at GHz speeds, dozens of cores optimized for computing speed, and GBs of RAM. In contrast, microcontroller processors typically run at a few hundred MHz with 1 or 2 cores optimized for low power consumption, and only a few MB of RAM. Exact performance depends on many things, including the processor, vision pipeline, image resolution, color spaces used, available RAM, etc.
For the best performance, keep in mind that MicroPython uses a garbage collector for memory management. If images are repeatedly created in a vision pipeline, RAM is consumed until the garbage collector runs. The collection process takes longer with more RAM, so this can result in noticeable delays during collection (typically a few hundred milliseconds). To mitigate this, it's best to pre-allocate arrays and use the optional `dst` argument of OpenCV functions so memory consumption is minimized. Pre-allocation also improves performance, because allocating memory takes time.
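As a minimal sketch of that advice, the destination buffers below are allocated once and reused every frame. This assumes a 320x240 single-channel pipeline and a placeholder `camera` driver object:

```python
import cv2 as cv
from ulab import numpy as np

# Pre-allocate output buffers once, outside the loop
blurred = np.zeros((240, 320), dtype=np.uint8)
edges = np.zeros((240, 320), dtype=np.uint8)

while True:
    success, frame = camera.read()  # assumed to return a 320x240 grayscale frame
    if not success:
        continue
    # Passing dst reuses the pre-allocated buffers instead of creating new
    # arrays every frame, which reduces garbage collector pressure
    blurred = cv.blur(frame, (5, 5), blurred)
    edges = cv.Canny(blurred, 100, 200, edges)
```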
Below are some typical execution times for various OpenCV functions. All were tested on a Raspberry Pi RP2350 with a 320x240 test image.
| Function | Execution Time |
|---|---|
| `dst = cv.blur(src, (5, 5))` | 115ms |
| `dst = cv.blur(src, (5, 5), dst)` | 87ms |
| `retval, dst = cv.threshold(src, 127, 255, cv.THRESH_BINARY)` | 76ms |
| `retval, dst = cv.threshold(src, 127, 255, cv.THRESH_BINARY, dst)` | 46ms |
| `dst = cv.cvtColor(src, cv.COLOR_BGR2HSV)` | 114ms |
| `dst = cv.cvtColor(src, cv.COLOR_BGR2HSV, dst)` | 84ms |
| `dst = cv.Canny(src, 100, 200)` | 504ms |
| `dst = cv.Canny(src, 100, 200, dst)` | 482ms |
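If you want to benchmark your own pipeline on your own board, a simple sketch using MicroPython's millisecond tick functions looks like this (`src` is assumed to be a pre-loaded test image):

```python
import time
import cv2 as cv

start = time.ticks_ms()
dst = cv.blur(src, (5, 5))
elapsed = time.ticks_diff(time.ticks_ms(), start)
print("blur took", elapsed, "ms")
```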
Below is a list of all OpenCV functions included in the MicroPython port of OpenCV. This section follows OpenCV's module structure.
Only the most useful OpenCV functions are included. The MicroPython environment is extremely limited, so many functions are omitted due to prohibitively high RAM and firmware size requirements. Other less useful functions have been omitted to reduce firmware size. If there are additional functions you'd like to see included, see #Contributing.
If you need help understanding how to use these functions, see the documentation link for each function. You can also check out OpenCV's Python Tutorials and other tutorials online to learn more. This repository is simply a port of OpenCV, so we do not document these functions or how to use them, except for deviations from standard OpenCV.
Note
The core module includes many functions for basic operations on arrays. Most of these can be performed by numpy operations, so they have been omitted to reduce firmware size.
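For example, a few of the omitted core operations can be approximated directly with ulab (a rough sketch; dtypes and saturation behavior differ from OpenCV's versions, and `img` is assumed to be a `uint8` BGR image):

```python
from ulab import numpy as np

mean_val = np.mean(img)                         # roughly cv.mean()
blue = img[:, :, 0]                             # roughly cv.split(), one channel
inverted = np.array(255 - img, dtype=np.uint8)  # roughly cv.bitwise_not()
```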
| Function | Notes |
|---|---|
| `cv.convertScaleAbs(src[, dst[, alpha[, beta]]]) -> dst`<br>Scales, calculates absolute values, and converts the result to 8-bit.<br>Documentation | |
| `cv.inRange(src, lowerb, upperb[, dst]) -> dst`<br>Checks if array elements lie between the elements of two other arrays.<br>Documentation | |
| `cv.minMaxLoc(src[, mask]) -> minVal, maxVal, minLoc, maxLoc`<br>Finds the global minimum and maximum in an array.<br>Documentation | |
| Function | Notes |
|---|---|
| `cv.bilateralFilter(src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) -> dst`<br>Applies the bilateral filter to an image.<br>Documentation | |
| `cv.blur(src, ksize[, dst[, anchor[, borderType]]]) -> dst`<br>Blurs an image using the normalized box filter.<br>Documentation | |
| `cv.boxFilter(src, ddepth, ksize[, dst[, anchor[, normalize[, borderType]]]]) -> dst`<br>Blurs an image using the box filter.<br>Documentation | |
| `cv.dilate(src, kernel[, dst[, anchor[, iterations[, borderType[, borderValue]]]]]) -> dst`<br>Dilates an image by using a specific structuring element.<br>Documentation | |
| `cv.erode(src, kernel[, dst[, anchor[, iterations[, borderType[, borderValue]]]]]) -> dst`<br>Erodes an image by using a specific structuring element.<br>Documentation | |
| `cv.filter2D(src, ddepth, kernel[, dst[, anchor[, delta[, borderType]]]]) -> dst`<br>Convolves an image with the kernel.<br>Documentation | |
| `cv.GaussianBlur(src, ksize, sigmaX[, dst[, sigmaY[, borderType[, hint]]]]) -> dst`<br>Blurs an image using a Gaussian filter.<br>Documentation | |
| `cv.getStructuringElement(shape, ksize[, anchor]) -> retval`<br>Returns a structuring element of the specified size and shape for morphological operations.<br>Documentation | |
| `cv.Laplacian(src, ddepth[, dst[, ksize[, scale[, delta[, borderType]]]]]) -> dst`<br>Calculates the Laplacian of an image.<br>Documentation | |
| `cv.medianBlur(src, ksize[, dst]) -> dst`<br>Blurs an image using the median filter.<br>Documentation | |
| `cv.morphologyEx(src, op, kernel[, dst[, anchor[, iterations[, borderType[, borderValue]]]]]) -> dst`<br>Performs advanced morphological transformations.<br>Documentation | |
| `cv.Scharr(src, ddepth, dx, dy[, dst[, scale[, delta[, borderType]]]]) -> dst`<br>Calculates the first x- or y- image derivative using the Scharr operator.<br>Documentation | |
| `cv.Sobel(src, ddepth, dx, dy[, dst[, ksize[, scale[, delta[, borderType]]]]]) -> dst`<br>Calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.<br>Documentation | |
| `cv.spatialGradient(src[, dx[, dy[, ksize[, borderType]]]]) -> dx, dy`<br>Calculates the first order image derivative in both x and y using a Sobel operator.<br>Documentation | |
| Function | Notes |
|---|---|
| `cv.adaptiveThreshold(src, maxValue, adaptiveMethod, thresholdType, blockSize, C[, dst]) -> dst`<br>Applies an adaptive threshold to an array.<br>Documentation | |
| `cv.threshold(src, thresh, maxval, type[, dst]) -> retval, dst`<br>Applies a fixed-level threshold to each array element.<br>Documentation | |
| Function | Notes |
|---|---|
| `cv.arrowedLine(img, pt1, pt2, color[, thickness[, line_type[, shift[, tipLength]]]]) -> img`<br>Draws an arrow segment pointing from the first point to the second one.<br>Documentation | |
| `cv.circle(img, center, radius, color[, thickness[, lineType[, shift]]]) -> img`<br>Draws a circle.<br>Documentation | |
| `cv.drawContours(image, contours, contourIdx, color[, thickness[, lineType[, hierarchy[, maxLevel[, offset]]]]]) -> image`<br>Draws contours outlines or filled contours.<br>Documentation | |
| `cv.drawMarker(img, position, color[, markerType[, markerSize[, thickness[, line_type]]]]) -> img`<br>Draws a marker on a predefined position in an image.<br>Documentation | |
| `cv.ellipse(img, center, axes, angle, startAngle, endAngle, color[, thickness[, lineType[, shift]]]) -> img`<br>Draws a simple or thick elliptic arc or fills an ellipse sector.<br>Documentation | |
| `cv.fillConvexPoly(img, points, color[, lineType[, shift]]) -> img`<br>Fills a convex polygon.<br>Documentation | |
| `cv.fillPoly(img, pts, color[, lineType[, shift[, offset]]]) -> img`<br>Fills the area bounded by one or more polygons.<br>Documentation | |
| `cv.line(img, pt1, pt2, color[, thickness[, lineType[, shift]]]) -> img`<br>Draws a line segment connecting two points.<br>Documentation | |
| `cv.putText(img, text, org, fontFace, fontScale, color[, thickness[, lineType[, bottomLeftOrigin]]]) -> img`<br>Draws a text string.<br>Documentation | |
| `cv.rectangle(img, pt1, pt2, color[, thickness[, lineType[, shift]]]) -> img`<br>Draws a simple, thick, or filled up-right rectangle.<br>Documentation | |
| Function | Notes |
|---|---|
| `cv.cvtColor(src, code[, dst[, dstCn[, hint]]]) -> dst`<br>Converts an image from one color space to another.<br>Documentation | |
| Function | Notes |
|---|---|
| `cv.approxPolyDP(curve, epsilon, closed[, approxCurve]) -> approxCurve`<br>Approximates a polygonal curve(s) with the specified precision.<br>Documentation | |
| `cv.approxPolyN(curve, nsides[, approxCurve[, epsilon_percentage[, ensure_convex]]]) -> approxCurve`<br>Approximates a polygon with a convex hull with a specified accuracy and number of sides.<br>Documentation | |
| `cv.arcLength(curve, closed) -> retval`<br>Calculates a contour perimeter or a curve length.<br>Documentation | |
| `cv.boundingRect(array) -> retval`<br>Calculates the up-right bounding rectangle of a point set or non-zero pixels of a gray-scale image.<br>Documentation | |
| `cv.boxPoints(box[, points]) -> points`<br>Finds the four vertices of a rotated rect. Useful to draw the rotated rectangle.<br>Documentation | |
| `cv.connectedComponents(image[, labels[, connectivity[, ltype]]]) -> retval, labels`<br>Computes the connected components labeled image of a boolean image.<br>Documentation | `ltype` defaults to `CV_16U` instead of `CV_32S` due to ulab not supporting 32-bit integers. See: v923z/micropython-ulab#719 |
| `cv.connectedComponentsWithStats(image[, labels[, stats[, centroids[, connectivity[, ltype]]]]]) -> retval, labels, stats, centroids`<br>Computes the connected components labeled image of a boolean image and also produces a statistics output for each label.<br>Documentation | `labels`, `stats`, and `centroids` are returned with `dtype=np.float` instead of `np.int32` due to ulab not supporting 32-bit integers. See: v923z/micropython-ulab#719 |
| `cv.contourArea(contour[, oriented]) -> retval`<br>Calculates a contour area.<br>Documentation | |
| `cv.convexHull(points[, hull[, clockwise[, returnPoints]]]) -> hull`<br>Finds the convex hull of a point set.<br>Documentation | `hull` is returned with `dtype=np.float` instead of `np.int32` due to ulab not supporting 32-bit integers. See: v923z/micropython-ulab#719 |
| `cv.convexityDefects(contour, convexhull[, convexityDefects]) -> convexityDefects`<br>Finds the convexity defects of a contour.<br>Documentation | `convexityDefects` is returned with `dtype=np.float` instead of `np.int32` due to ulab not supporting 32-bit integers. See: v923z/micropython-ulab#719 |
| `cv.findContours(image, mode, method[, contours[, hierarchy[, offset]]]) -> contours, hierarchy`<br>Finds contours in a binary image.<br>Documentation | `contours` and `hierarchy` are returned with `dtype=np.float` and `dtype=np.int16` respectively instead of `np.int32` due to ulab not supporting 32-bit integers. See: v923z/micropython-ulab#719 |
| `cv.fitEllipse(points) -> retval`<br>Fits an ellipse around a set of 2D points.<br>Documentation | |
| `cv.fitLine(points, distType, param, reps, aeps[, line]) -> line`<br>Fits a line to a 2D or 3D point set.<br>Documentation | |
| `cv.isContourConvex(contour) -> retval`<br>Tests a contour convexity.<br>Documentation | |
| `cv.matchShapes(contour1, contour2, method, parameter) -> retval`<br>Compares two shapes.<br>Documentation | |
| `cv.minAreaRect(points) -> retval`<br>Finds a rotated rectangle of the minimum area enclosing the input 2D point set.<br>Documentation | |
| `cv.minEnclosingCircle(points) -> center, radius`<br>Finds a circle of the minimum area enclosing a 2D point set.<br>Documentation | |
| `cv.minEnclosingTriangle(points[, triangle]) -> retval, triangle`<br>Finds a triangle of minimum area enclosing a 2D point set and returns its area.<br>Documentation | |
| `cv.moments(array[, binaryImage]) -> retval`<br>Calculates all of the moments up to the third order of a polygon or rasterized shape.<br>Documentation | |
| `cv.pointPolygonTest(contour, pt, measureDist) -> retval`<br>Performs a point-in-contour test.<br>Documentation | |
| Function | Notes |
|---|---|
| `cv.Canny(image, threshold1, threshold2[, edges[, apertureSize[, L2gradient]]]) -> edges`<br>Finds edges in an image using the Canny algorithm.<br>Documentation | |
| `cv.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]]) -> circles`<br>Finds circles in a grayscale image using the Hough transform.<br>Documentation | |
| `cv.HoughCirclesWithAccumulator(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]]) -> circles`<br>Finds circles in a grayscale image using the Hough transform and gets the accumulator.<br>Documentation | |
| `cv.HoughLines(image, rho, theta, threshold[, lines[, srn[, stn[, min_theta[, max_theta[, use_edgeval]]]]]]) -> lines`<br>Finds lines in a binary image using the standard Hough transform.<br>Documentation | |
| `cv.HoughLinesP(image, rho, theta, threshold[, lines[, minLineLength[, maxLineGap]]]) -> lines`<br>Finds line segments in a binary image using the probabilistic Hough transform.<br>Documentation | `lines` is returned with `dtype=np.float` instead of `np.int32` due to ulab not supporting 32-bit integers. See: v923z/micropython-ulab#719 |
| `cv.HoughLinesWithAccumulator(image, rho, theta, threshold[, lines[, srn[, stn[, min_theta[, max_theta[, use_edgeval]]]]]]) -> lines`<br>Finds lines in a binary image using the standard Hough transform and gets the accumulator.<br>Documentation | |
| Function | Notes |
|---|---|
| `cv.matchTemplate(image, templ, method[, result[, mask]]) -> result`<br>Compares a template against overlapped image regions.<br>Documentation | |
| Function | Notes |
|---|---|
| `cv.imread(filename[, flags]) -> retval`<br>Loads an image from a file.<br>Documentation | `filename` can be anywhere in the full MicroPython filesystem, including SD cards if mounted.<br>Only BMP and PNG formats are currently supported. |
| `cv.imwrite(filename, img[, params]) -> retval`<br>Saves an image to a specified file.<br>Documentation | `filename` can be anywhere in the full MicroPython filesystem, including SD cards if mounted.<br>Only BMP and PNG formats are currently supported. |
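As a sketch of how an SD card might be mounted so that `cv.imread()` and `cv.imwrite()` can use it: the `sdcard` driver module, SPI bus, and pin assignments below are assumptions, so check your board's documentation.

```python
import os
import machine
import sdcard  # SD card driver, e.g. from micropython-lib (assumption)
import cv2 as cv

# Hypothetical SPI bus and chip-select pin; adjust for your board
spi = machine.SPI(0, sck=machine.Pin(2), mosi=machine.Pin(3), miso=machine.Pin(4))
sd = sdcard.SDCard(spi, machine.Pin(5))
os.mount(sd, "/sd")

# Images can now be read from and written to the card
img = cv.imread("/sd/capture.png")
success = cv.imwrite("/sd/result.png", img)
```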
| Function | Notes |
|---|---|
| `cv.imshow(winname, mat) -> None`<br>Displays an image in the specified window.<br>Documentation | `winname` must actually be a display driver object that implements an `imshow()` method that takes a NumPy array as input. |
| `cv.waitKey([, delay]) -> retval`<br>Waits for a pressed key.<br>Documentation | Input is taken from `sys.stdin`, which is typically the REPL. |
| `cv.waitKeyEx([, delay]) -> retval`<br>Similar to `waitKey`, but returns the full key code.<br>Documentation | Input is taken from `sys.stdin`, which is typically the REPL.<br>The full key code is implementation specific, so special key codes in MicroPython will not match other Python environments. |
Standard OpenCV leverages the host operating system to access hardware, like creating windows and accessing cameras. MicroPython does not have that luxury, so instead, drivers must be implemented for these hardware devices. Take a look at our Red Vision repo for examples. This leads to necessary API changes for functions like cv.imshow().
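To illustrate the shape such a driver takes, here is a hypothetical display driver sketch (not the Red Vision implementation); all `cv.imshow()` needs is an object with an `imshow()` method that accepts a NumPy array:

```python
import cv2 as cv

class MyDisplay:
    """Hypothetical display driver; the real work depends on your panel."""

    def __init__(self, panel):
        self._panel = panel  # e.g. an LCD driver object (assumption)

    def imshow(self, mat):
        # Convert the ulab NumPy array to whatever format the panel expects
        # and push it to the screen.
        self._panel.draw(mat)  # hypothetical panel method

display = MyDisplay(panel)  # 'panel' is a placeholder object
cv.imshow(display, img)     # cv.imshow() calls display.imshow(img)
```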
As of writing, the OpenCV firmware adds over 3MiB on top of the standard MicroPython firmware, which can itself be up to 1MiB in size (depending on platform and board). You'll also want some storage space, so a board with at least 8MB of flash is recommended.
PSRAM is basically a requirement to do anything useful with OpenCV. A single 320x240 RGB888 frame buffer requires 225KiB of RAM (320 × 240 × 3 = 230,400 bytes), while most microcontrollers only have a few hundred KiB of SRAM. Even simple vision pipelines can need several frame buffers, so you really need at least a few MiB of RAM available. The more the merrier!
Below are instructions to build the MicroPython-OpenCV firmware from scratch. Instructions are only provided for Linux systems.
Note
This build process does not include any hardware drivers, see our Red Vision repo for example drivers.
Note
Because OpenCV dramatically increases the firmware size, it may be necessary to define board variants that reduce the storage size to avoid it overlapping with the firmware. See #Adding New Boards.
- Clone this repo and MicroPython:

  ```shell
  cd ~
  git clone https://github.com/sparkfun/micropython-opencv.git
  git clone https://github.com/micropython/micropython.git
  ```

- Build the MicroPython cross-compiler:

  ```shell
  make -C micropython/mpy-cross -j4
  ```

- Clone MicroPython submodules for your board:

  ```shell
  make -C micropython/ports/rp2 BOARD=SPARKFUN_XRP_CONTROLLER submodules
  ```

  - Replace `rp2` and `SPARKFUN_XRP_CONTROLLER` with your platform and board name respectively

- Set environment variables (if needed). Some platforms require environment variables to be set, for example:

  ```shell
  export PICO_SDK_PATH=~/micropython/lib/pico-sdk
  ```

- Build OpenCV for your platform:

  ```shell
  make -C micropython-opencv PLATFORM=rp2350 --no-print-directory -j4
  ```

  - Replace `rp2350` with your board's platform

- Build the MicroPython-OpenCV firmware for your board:

  ```shell
  export CMAKE_ARGS="-DSKIP_PICO_MALLOC=1 -DPICO_CXX_ENABLE_EXCEPTIONS=1" && make -C micropython/ports/rp2 BOARD=SPARKFUN_XRP_CONTROLLER USER_C_MODULES=~/micropython-opencv/micropython_opencv.cmake -j4
  ```

  - Replace `rp2` and `SPARKFUN_XRP_CONTROLLER` with your platform and board name respectively
  - Replace the `CMAKE_ARGS` contents with whatever is required for your board's platform
  - Your firmware file(s) will be located in `~/micropython/ports/<port-name>/build-<board-name>/`
Note
This section assumes the board's platform is already supported (e.g. RP2350). If not, see #Adding New Platforms.
Because OpenCV dramatically increases the firmware size, it may be necessary to define board variants that reduce the storage size to avoid it overlapping with the firmware. It is also beneficial to adjust the board name to include OpenCV or similar to help people identify that the MicroPython-OpenCV firmware is flashed to the board instead of standard MicroPython.
Below is the variant for the XRP Controller as an example. The variant is defined by creating a file called `micropython/ports/rp2/boards/SPARKFUN_XRP_CONTROLLER/mpconfigvariant_RED_VISION.cmake` with the following contents:
```cmake
list(APPEND MICROPY_DEF_BOARD
    # Board name
    "MICROPY_HW_BOARD_NAME=\"SparkFun XRP Controller (Red Vision)\""

    # 8MB (8 * 1024 * 1024)
    "MICROPY_HW_FLASH_STORAGE_BYTES=8388608"
)
```
Some board definitions do not have `#ifndef` wrappers in `mpconfigboard.h` for `MICROPY_HW_BOARD_NAME` and `MICROPY_HW_FLASH_STORAGE_BYTES`. They should be added if needed so the variant can build properly.
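For reference, the wrappers look something like this in `mpconfigboard.h` (a sketch; the values shown are placeholders, and the real defaults come from the board definition):

```c
// Allow the variant to override these defines
#ifndef MICROPY_HW_BOARD_NAME
#define MICROPY_HW_BOARD_NAME "Example Board Name"
#endif

#ifndef MICROPY_HW_FLASH_STORAGE_BYTES
#define MICROPY_HW_FLASH_STORAGE_BYTES (16 * 1024 * 1024)
#endif
```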
Then, the firmware can be built by adding `BOARD_VARIANT=<variant-name>` to the make command when building the MicroPython-OpenCV firmware.
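For example, the `RED_VISION` variant above would presumably be built with a command along these lines (same flags as the build step earlier):

```shell
export CMAKE_ARGS="-DSKIP_PICO_MALLOC=1 -DPICO_CXX_ENABLE_EXCEPTIONS=1" && make -C micropython/ports/rp2 BOARD=SPARKFUN_XRP_CONTROLLER BOARD_VARIANT=RED_VISION USER_C_MODULES=~/micropython-opencv/micropython_opencv.cmake -j4
```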
Only support for the Raspberry Pi RP2350 has been figured out so far, so the full requirements for adding new platforms are not yet known. However, the process should be along these lines:
- Create a valid toolchain file for the platform
  - See `rp2350.toolchain.cmake` for reference
  - This loosely follows OpenCV's platform definitions

- Build OpenCV with the new platform:

  ```shell
  make -C micropython-opencv/opencv PLATFORM=<new-platform> --no-print-directory -j4
  ```

- Create a new board for the new platform

- Build the MicroPython-OpenCV firmware for the new board:

  ```shell
  make -C micropython/ports/rp2 BOARD=<board-name> USER_C_MODULES=micropython-opencv/micropython_opencv.cmake -j4
  ```
Note
We at SparkFun are not OpenCV developers. For things related to OpenCV, please head to https://github.com/opencv/opencv
Found a bug? Is there a discrepancy between standard OpenCV and MicroPython-OpenCV? Have a feature request?
First, please see if there is an existing issue. If not, then please open a new issue so we can discuss the topic!
Pull requests are welcome! Please keep the scope of your pull request focused (make separate ones if needed), and keep file changes limited to the scope of your pull request.
Note
Because of limitations of microcontrollers, MicroPython, and OpenCV, it may not be possible to add some features of OpenCV.