
How to Use Image Normalization with OpenCV

Anastasios Antoniadis

Image normalization is a critical preprocessing step in the field of computer vision and image processing. It is used to adjust the pixel values of an image to a standard range or scale, enhancing the robustness and performance of computer vision algorithms, especially in machine learning and deep learning applications. OpenCV, a popular open-source computer vision library, provides comprehensive support for image normalization through its normalize() function. This article explores the concept of image normalization, its importance, and how to implement it using OpenCV.

Understanding Image Normalization

Image normalization, also known as contrast stretching or histogram normalization, involves scaling the pixel intensity values to fit within a desired range. This process can help mitigate issues related to lighting variations, enhance contrast, and make the image data more suitable for analysis by standardizing the input for further processing.

Normalization is particularly crucial in machine learning models, where consistent input data can significantly impact the model’s training efficiency and prediction accuracy. By normalizing images to a uniform scale, models can learn more effectively from the data, leading to improved performance.

Why Normalize Images?

  • Enhanced Model Performance: Normalized images provide a consistent scale for pixel values, leading to better convergence during training and more accurate predictions.
  • Improved Contrast: Normalization can improve the contrast of images, making features more distinguishable and easier to analyze.
  • Lighting Correction: It helps to reduce the effects of lighting variations across different images, making the dataset more uniform.
  • Efficiency: Normalized data can speed up the computational process by ensuring that values are within a bounded range.
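The min-max scaling behind most of these benefits can be written out directly. The following sketch implements the formula `(x - min) / (max - min) * (beta - alpha) + alpha` in plain NumPy on a small synthetic array (the values and shape are illustrative only), which is the same computation OpenCV performs for range normalization:

```python
import numpy as np

# Synthetic low-contrast "image": intensities span only 50-180
image = np.array([[50, 100], [150, 180]], dtype=np.uint8)

# Min-max scaling to [0, 255]: (x - min) / (max - min) * 255
lo, hi = image.min(), image.max()
stretched = ((image.astype(np.float32) - lo) / (hi - lo) * 255.0).astype(np.uint8)

print(stretched.min(), stretched.max())  # 0 255
```

After scaling, the darkest pixel becomes 0 and the brightest 255, so the image uses the full available intensity range.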

Image Normalization in OpenCV

OpenCV’s normalize() function offers a flexible way to normalize images. The function can adapt to different requirements by allowing the specification of the desired range for normalization and the choice of normalization type.


cv2.normalize(src, dst[, alpha[, beta[, norm_type[, dtype[, mask]]]]])
  • src: Input array to be normalized.
  • dst: Output array of the same size as src.
  • alpha: Norm value to normalize to or the lower range boundary in case of range normalization.
  • beta: Upper range boundary in case of range normalization; it is not used for norm normalization.
  • norm_type: Normalization type (e.g., cv2.NORM_MINMAX for range normalization, cv2.NORM_L2 for L2 normalization).
  • dtype: When negative, the output array has the same type as src; otherwise, it has the same number of channels as src and the depth specified by dtype.
  • mask: Optional operation mask.

Example: Normalizing an Image to a Range

A common use case is scaling the pixel values to a [0, 1] or [0, 255] range. Here’s how to normalize an image to the [0, 255] range using OpenCV in Python:

import cv2
import numpy as np

# Load an image
image = cv2.imread('path/to/your/image.jpg', cv2.IMREAD_GRAYSCALE)

# Normalize the image
normalized_image = cv2.normalize(image, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_8U)

In this example:

  • image is the input image loaded in grayscale mode.
  • Passing None as the dst argument tells OpenCV to allocate a new output array and return it (here assigned to normalized_image).
  • alpha=0 and beta=255 specify the range [0, 255] for the pixel values in the normalized image.
  • norm_type=cv2.NORM_MINMAX specifies that range normalization should be used.
  • dtype=cv2.CV_8U indicates that the output image should have 8-bit unsigned integers as pixel values.

Visualizing the Results

To observe the effects of normalization, you can display the original and normalized images using OpenCV’s imshow() function:

# Display the original image
cv2.imshow('Original Image', image)

# Display the normalized image
cv2.imshow('Normalized Image', normalized_image)

# Wait for a key press, then close the windows
cv2.waitKey(0)
cv2.destroyAllWindows()


Advanced Normalization Techniques

Beyond simple range normalization, OpenCV’s normalize() function supports other normalization types, such as L1 and L2 normalization, which are commonly used in machine learning and data preprocessing:

L1 Normalization: Ensures that the sum of absolute values of the vector elements is 1.

l1_normalized_image = cv2.normalize(image, None, alpha=1, norm_type=cv2.NORM_L1, dtype=cv2.CV_32F)

L2 Normalization: Ensures that the sum of squares of the vector elements is 1. This is particularly useful for feature vectors in machine learning.

l2_normalized_image = cv2.normalize(image, None, alpha=1, norm_type=cv2.NORM_L2, dtype=cv2.CV_32F)

Best Practices for Image Normalization with OpenCV

  • Understand the Data: Before applying normalization, analyze your image data to determine the most suitable normalization range and type for your specific application.
  • Consistency: Apply the same normalization method across all images in your dataset; this is especially important for machine learning models, which expect uniform input data.
  • Performance Considerations: While normalization is generally not computationally expensive, always consider its impact, especially when processing a large number of images or working with real-time applications.


Conclusion

Image normalization is a pivotal step in preparing data for processing in computer vision and machine learning applications. OpenCV’s versatile normalize() function provides an easy yet powerful way to perform a wide range of normalization tasks, from simple range scaling to L1 and L2 normalization. By understanding and applying the appropriate normalization techniques, developers can enhance image contrast, mitigate lighting variations, and prepare data for further analysis, thereby improving the performance and accuracy of their computer vision applications.
