Algorithms for digital image processing have been an exciting topic in computer science for many years. Nowadays, a variety of methods and technologies contribute to solving various tasks in the field of machine vision and graphics processing.
In this article, we’ll take a look at some computer vision and digital image processing tasks and algorithms. We’ll look at some algorithmic tools you might want to use for your next image editing project, but first let’s define some common terms.
What is Image Processing?
Image processing is a subcategory of signal processing that focuses on analyzing, modifying, and synthesizing visual data. We can broadly divide image/signal processing into two categories – digital and analog.
Essentially, digital image processing is the use of a computer to modify a visual media source through an algorithm. Since our focus is on modern computing, this is the category we'll be talking about from now on.
What does image analysis mean?
Image analysis is the process of extracting meaningful information from images. This is achieved by means of image processing techniques.
Now that we have brief definitions of some popular terms, we can move on to some well-known digital image processing algorithms.
Common algorithms and tasks
The next few points focus on some of the most common algorithms and tasks in computer vision and analytics.
Filter and enhance photos
Nowadays, every mobile phone ships with a number of filtering algorithms to deal with image distortion and improve camera quality. Digital filtering aims to reduce image noise and can sometimes produce an interesting visual effect.
Below is a short list of algorithms that help us filter and enhance visual data.
| Filter | Description |
| --- | --- |
| Box blur | A linear filter that has many applications since it is fast and easy to implement |
| Gaussian blur | A smoothing filter that uses image convolution with a Gaussian function |
| Symmetric Nearest Neighbor | A non-linear digital filter that aims to reduce noise while preserving object contours |
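To make the first entry concrete, here is a minimal NumPy sketch of a box blur for a 2-D grayscale array. It is a naive, loop-based illustration (edges are handled by clamping); real implementations exploit the filter's separability to run much faster.

```python
import numpy as np

def box_blur(image, radius=1):
    """Replace each pixel with the mean of its (2r+1) x (2r+1) neighborhood.

    A deliberately simple sketch for a 2-D grayscale array; borders are
    handled by repeating the edge pixels.
    """
    h, w = image.shape
    k = 2 * radius + 1
    padded = np.pad(image, radius, mode="edge")
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out
```

Because every kernel weight is equal, a box blur needs no multiplications at all, which is why it is such a popular building block.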
The following image shows an example result of a noise reduction filter:
Detect edges
Image edge detection is a set of image processing algorithms that aim to extract the boundaries of objects. Most of them work by simple image convolution, are easy to implement, and do not require much computing power. Some of the most popular are:
| Detector | Description |
| --- | --- |
| Gradient-based detectors (e.g. the Sobel operator) | This family of edge detectors relies on the assumption that object boundaries are at locations where there is a rapid change in image gradient. |
| Canny edge detector | Extracts stable image contours using Gaussian blur, the Sobel operator, edge thinning by non-maximum suppression, and hysteresis thresholding. |
| Laplacian detector | A zero-crossing based method that relies on the second-order image derivative. |
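The gradient-based family can be sketched in a few lines of NumPy. The snippet below applies the standard 3x3 Sobel kernels and returns the gradient magnitude; it is written as explicit loops for clarity, whereas production code would use a vectorized or library-provided convolution.

```python
import numpy as np

def sobel_magnitude(image):
    """Approximate the image gradient with the Sobel operator.

    Returns the per-pixel gradient magnitude of a 2-D grayscale array;
    large values mark rapid intensity changes, i.e. likely edges.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                  # vertical gradient kernel
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 3, x:x + 3]
            gx[y, x] = (patch * kx).sum()
            gy[y, x] = (patch * ky).sum()
    return np.hypot(gx, gy)
```

On a flat region the magnitude is zero; along a step edge it peaks, which is exactly the assumption gradient-based detectors are built on.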
The following image shows an example output from a Sobel contour detector.
Work with color
Many computer vision algorithms work well on grayscale images. However, the modern world works in color, and we can take advantage of that. Below is a short list of common color space transformations and color-related algorithms.
| Algorithm | Description |
| --- | --- |
| RGB to Grayscale | Different methods to convert an RGB image to a monochrome (grayscale) one |
| Skin Detection | Using a color space transformation, we can find parts of the image that are likely to be skin |
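One widely used grayscale conversion is the luminosity method, which weights the channels by the eye's sensitivity to each color (the ITU-R BT.601 luma coefficients). A minimal sketch:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 RGB array (values in [0, 1]) to grayscale.

    Uses BT.601 luma weights; green dominates because the human eye
    is most sensitive to it.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # weighted sum over the last (channel) axis
```

Simpler alternatives, such as averaging the three channels, also work but tend to look less natural.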
The image below is an example of a Hue rotation of 60 degrees on a photo of a toy. You can try this color shift effect on your own photos right here in the browser.
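A hue rotation like this can be sketched per pixel with Python's standard colorsys module: convert to HLS, shift the hue angle, and convert back. Values are assumed to be in [0, 1]; the 60-degree angle is just the example from the image.

```python
import colorsys

def rotate_hue(r, g, b, degrees):
    """Shift a pixel's hue by the given angle, keeping lightness and saturation."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + degrees / 360.0) % 1.0  # hue wraps around the color wheel
    return colorsys.hls_to_rgb(h, l, s)
```

Applying this to every pixel of an image gives the color shift effect shown above; a real implementation would vectorize the conversion rather than loop per pixel.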
In this post, we have lifted the curtain on several families of algorithms for image and photo analysis and editing. There are so many ingenious algorithms out there that it is impossible to cover them all in a single article.
However, I hope that in time this page will include more of them. In the meantime, you can check out the blog for more algorithms and code.
Note that you can apply some of the algorithms mentioned here directly to your own photos using these little image editing tools.