The watershed transform is a computer vision algorithm used for image segmentation. It can extract image objects and separate foreground from background. This tutorial shows how to implement the Watershed transformation via Meyer's flooding algorithm.
How the Watershed works
The Watershed is based on a topographic surface representation: we divide the image into two sets, the catchment basins and the watershed lines. Our algorithm is based on Meyer's flooding algorithm, introduced by F. Meyer in the early 1990s.
The algorithm originally works on a grayscale image. When it floods a gradient image, the basins should emerge at the edges of objects. The following steps describe the process:
- Individual groups of objects are initialized using pre-selected markers (seeds).
- The pixels adjacent to each group are extracted and pushed into a priority queue. In the classical algorithm, the priority corresponds to the pixel's gradient value, but a different weight measure can be used.
- The pixel with the lowest priority is retrieved from the queue and processed.
- If all of the current pixel's labeled neighbors carry the same label, the pixel receives that label.
- The priority queue is updated with the unmarked neighboring pixels.
- Repeat from step 3 until the queue is empty.
Finally, any pixels that remain unlabeled describe the object boundaries (the watershed lines).
Initialize Segmentation
Initially, the algorithm must select the starting points from which flooding begins. A common way to select markers is to use the local minima of the gradient image. However, there are different strategies for choosing seed points; our HTML5 demo application implements user-controlled marker selection, as sketched below.
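For illustration, here is a minimal sketch of user-controlled marker selection, assuming a canvas element and a seeds array (these names are illustrative, not the demo's actual code). Each click records a seed whose pixel is stored as a byte offset into the RGBA data array, matching the indexing used later in this tutorial:
// Hypothetical marker selection: each click adds one seed point.
// Pixel positions are byte offsets into the RGBA array (4 entries per pixel).
const seeds = [];
canvas.addEventListener('click', function (e) {
  const rect = canvas.getBoundingClientRect();
  const x = Math.floor(e.clientX - rect.left);
  const y = Math.floor(e.clientY - rect.top);
  seeds.push({ pixels: [(y * canvas.width + x) * 4] });
});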
Priority Queue Details
The HTML5 realization of Watershed Image Segmentation is based on our custom JavaScript priority queue object (PQueue). While extracting pixels, we take the neighbors of each point and push them into this queue. During tracing, the flood process divides the image into background and foreground. The queue keeps the pixels in the correct order by finding each insertion position with a simple binary search.
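For reference, here is a minimal sketch of such a queue; the demo's actual PQueue object may differ. It keeps its items in a sorted array and locates each insertion index with a binary search, so pop() always returns the lowest-priority element:
// Minimal sorted-array priority queue (illustrative sketch).
function PQueue(){
  this.items = [];
}
// Insert value at the index that keeps the array sorted by comparator.
PQueue.prototype.push = function(value, comparator){
  let lo = 0, hi = this.items.length;
  while (lo < hi){
    const mid = (lo + hi) >>> 1; // binary search for the insertion point
    if (comparator(this.items[mid], value) < 0) lo = mid + 1;
    else hi = mid;
  }
  this.items.splice(lo, 0, value);
};
// Remove and return the element with the lowest priority.
PQueue.prototype.pop = function(){
  return this.items.shift();
};
PQueue.prototype.empty = function(){
  return this.items.length === 0;
};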
Distance between pixels
Typically, algorithms use a gradient image to measure the distance between pixels. In our sample application we use a different weighting function.
The weight is calculated based on the improved RGB Euclidean distance [2], the so-called "redmean" approximation, where `\bar{r} = (R_1 + R_2)/2` is the mean red level of the two pixels. The distance between the center point and the selected neighbor is given by the following equation:
`\sqrt{\left(2 + \frac{\bar{r}}{256}\right)\Delta R^2 + 4\Delta G^2 + \left(2 + \frac{255 - \bar{r}}{256}\right)\Delta B^2}`
This equation is implemented by the following JavaScript code segment:
function diffRGB(p1, p2){
  // data is the RGBA pixel array; p1 and p2 are byte offsets (4 entries per pixel)
  const rmean = (data[p1 + 0] + data[p2 + 0]) >>> 1; // mean red level of the two pixels
  const r = (data[p1 + 0] - data[p2 + 0]),
        g = (data[p1 + 1] - data[p2 + 1]),
        b = (data[p1 + 2] - data[p2 + 2]);
  // Integer form of the "redmean" weights: (512 + rmean)/256 = 2 + rmean/256
  return Math.sqrt((((512 + rmean)*r*r) >> 8) + 4*g*g + (((767 - rmean)*b*b) >> 8));
}
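As a quick check of the formula: for two pixels that differ only in red by 10 levels, the weight grows with the mean red value, from `\sqrt{(512 \cdot 100)/256} \approx 14.1` at `\bar{r} = 0` to `\sqrt{(767 \cdot 100)/256} \approx 17.3` at `\bar{r} = 255`. This is exactly the behavior the redmean correction is designed for: red differences count more between brighter reds.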
Watershed image partition
First, we suppress image noise with a Gaussian filter using a small sigma value. Then we initialize the label buffer with values corresponding to the input seeds:
for (var i = 0, l = seeds.length; i < l; i++){
  var p = seeds[i].pixels;
  for (var j = 0, c = p.length; j < c; j++){
    buff[p[j]] = (i + 1)*80; // distinct label per seed, e.g. grey values 80 and 160
    addPoint(p[j]);          // queue the seed pixel's neighbors for flooding
  }
}
Next, we pop central pixels from the priority queue until the whole image has been processed:
while (!queue.empty()){
  var el = queue.pop(); // the pixel with the lowest color distance
  addPoint(el);         // label and enqueue its unvisited neighbors
}
Inside addPoint, the adjacent pixels are extracted and placed into the PQueue for further processing:
function addPoint(p){
  // p is a byte offset into the RGBA data, so each pixel occupies 4 entries;
  // dirxy holds the four neighbor offsets (left, right, up, down)
  for (var i = 0; i < 4; i++){
    const pos = p + dirxy[i];
    if (buff[pos] === 0){ // the neighbor is not labeled yet
      buff[pos] = buff[p];             // propagate the central pixel's label
      buff[pos + 1] = diffRGB(p, pos); // store the color distance as its priority
      queue.push(pos, comparator);
    }
  }
}
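The snippet above relies on a comparator function that tells the queue how to order pixels. A minimal sketch, assuming the priority is the color distance that addPoint stores in the second channel of buff:
// Assumed comparator: orders queue entries by the stored color distance,
// so pixels with the smallest distance are flooded first.
function comparator(a, b){
  return buff[a + 1] - buff[b + 1];
}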
Related Source code
Image Segmentation Online
Watershed Segmentation is part of our collection of online tools. You can use it to separate and remove background from images.