
# Watershed Image Segmentation: Marker-controlled flooding

The watershed transform is a computer vision algorithm used for image segmentation. It can extract image objects and separate foreground from background. This tutorial shows how to implement the watershed transform with Meyer's flooding algorithm.

## How does the Watershed work

The watershed is based on a geological surface representation: we divide the image into two sets, the catchment basins and the watershed lines. Our algorithm is based on Meyer's flooding, introduced by F. Meyer in the early 1990s.

Originally, the algorithm works on a grayscale image. When it floods a gradient image, the basins should emerge at the edges of objects. The following steps describe the process:

1. Initialize object groups with pre-selected seed markers.
2. The neighboring pixels of each group are extracted and moved into a priority queue. Originally, the priority level corresponds to the pixel's gradient value, but it is possible to use a different weighting measure.
3. The pixel with the lowest priority is retrieved from the queue and processed first. If all of its labeled neighbors have the same label, the pixel receives that label. The algorithm then pushes all unvisited neighbors into the priority queue.
4. Repeat step 3 until the queue is empty.

At the end, all unlabeled pixels mark the object boundaries (the watershed lines).
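The steps above can be condensed into a simplified sketch. This version treats the grayscale values of a flat pixel array as priorities and, for clarity, assigns every pixel to the nearest basin, omitting the explicit watershed-line test from step 3; the seed positions and function name are illustrative, not the demo's actual code.

```javascript
// Simplified Meyer's flooding on a grayscale image stored as a flat array.
// seeds: array of pixel indices, one per object marker.
function watershed(image, width, height, seeds) {
  const labels = new Int32Array(width * height); // 0 = unlabeled
  const queue = [];                              // [priority, pixel] pairs
  const push = (pos, prio) => {
    queue.push([prio, pos]);
    queue.sort((a, b) => a[0] - b[0]);           // a real PQueue avoids full resorting
  };

  // Step 1: label the seed pixels and queue them.
  seeds.forEach((pos, i) => { labels[pos] = i + 1; push(pos, image[pos]); });

  // Steps 2-4: flood the basins, lowest priority first.
  while (queue.length) {
    const [, p] = queue.shift();
    for (const d of [-1, 1, -width, width]) {
      const q = p + d;
      if (q < 0 || q >= width * height) continue;       // top/bottom edge
      if (d === -1 && p % width === 0) continue;        // left edge
      if (d === 1 && p % width === width - 1) continue; // right edge
      if (labels[q] === 0) {                            // unvisited neighbor
        labels[q] = labels[p];                          // inherit the basin label
        push(q, image[q]);
      }
    }
  }
  return labels;
}
```

On a 4×1 image with seeds at both ends, the two basins meet in the middle and each half inherits its seed's label.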


## Initialize Segmentation

First, the algorithm must select the seed points from which the flooding starts. A common choice is the local minima of the gradient image, but other strategies for choosing seed points exist. Our HTML5 demo application implements user-controlled marker selection.
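As an illustration of the automatic alternative, the sketch below picks seeds at strict local minima of a gradient buffer, checking the 4-connected neighborhood. The function name and interface are hypothetical; the demo itself relies on user-placed markers instead.

```javascript
// Hypothetical automatic seed selection: strict local minima of a gradient
// image stored as a flat single-channel array.
function localMinima(grad, width, height) {
  const seeds = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const p = y * width + x;
      // Out-of-bounds neighbors count as Infinity, so border pixels qualify
      // only against their in-image neighbors.
      const left  = x > 0          ? grad[p - 1]     : Infinity;
      const right = x < width - 1  ? grad[p + 1]     : Infinity;
      const up    = y > 0          ? grad[p - width] : Infinity;
      const down  = y < height - 1 ? grad[p + width] : Infinity;
      if (grad[p] < left && grad[p] < right && grad[p] < up && grad[p] < down) {
        seeds.push(p);
      }
    }
  }
  return seeds;
}
```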

### Priority Queue Details

The HTML5 implementation of watershed image segmentation is built on our custom JavaScript priority queue object (PQueue). While processing a pixel, we take its neighbors and push them into the queue.

The queue keeps the pixels in the correct order through simple binary-search insertion. During tracing, the flooding process divides the image into background and foreground.
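A minimal priority queue with binary-search insertion might look like the sketch below. This is an illustration of the technique only; the demo's actual PQueue has a different interface (for example, its `push` takes the comparator as an argument rather than storing it in the constructor).

```javascript
// Minimal priority queue: a sorted array maintained by binary-search insertion.
class PQueue {
  constructor(comparator) {
    this.items = [];
    this.cmp = comparator; // negative result means the first argument comes first
  }
  empty() { return this.items.length === 0; }
  push(item) {
    // Binary search for the insertion point that keeps the array sorted.
    let lo = 0, hi = this.items.length;
    while (lo < hi) {
      const mid = (lo + hi) >>> 1;
      if (this.cmp(this.items[mid], item) <= 0) lo = mid + 1;
      else hi = mid;
    }
    this.items.splice(lo, 0, item);
  }
  pop() { return this.items.shift(); } // lowest value first
}
```

With a numeric comparator `(a, b) => a - b`, pushing 5, 1, 3 pops them back in the order 1, 3, 5.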

### Distance between pixels

Typically, algorithms use a gradient image to measure the distance between pixels. In our sample application we use a different weighting function.

The weight is calculated with the improved RGB Euclidean distance [2]. The distance between the center pixel and a selected neighbor is approximated by the following equation:

\sqrt{2\,\Delta R^{2} + 4\,\Delta G^{2} + 3\,\Delta B^{2}}

The following JavaScript code implements the full red-mean weighted form of this distance:


```javascript
// data is the RGBA pixel array of the image (e.g. ImageData.data);
// p1 and p2 are the byte offsets of the two pixels.
function diffRGB(p1, p2){
    const rmean = (data[p1 + 0] + data[p2 + 0]) >>> 1; // mean red level
    const r = data[p1 + 0] - data[p2 + 0],
          g = data[p1 + 1] - data[p2 + 1],
          b = data[p1 + 2] - data[p2 + 2];

    // Red-mean weighting: (2 + rmean/256)·ΔR² + 4·ΔG² + (2 + (255 - rmean)/256)·ΔB²
    return Math.sqrt((((512 + rmean) * r * r) >> 8) + 4 * g * g + (((767 - rmean) * b * b) >> 8));
}
```


### Watershed image partition

First, we eliminate image noise with a Gaussian filter using a small sigma value. Then we initialize the image buffer with the label values corresponding to the input seeds:


```javascript
for (var i = 0, l = seeds.length; i < l; i++){
    var p = seeds[i].pixels;                // pixel indices covered by this seed
    for (var j = 0, c = p.length; j < c; j++){
        buff[p[j]] = (i + 1) * 80;          // e.g. grey values 80 and 160 for two seeds
    }
}
```
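The Gaussian pre-filtering mentioned above can be sketched as a plain 3×3 convolution over a single-channel buffer. The kernel below approximates a small sigma; the demo's actual filter and parameters may differ, and the function name is illustrative.

```javascript
// 3x3 Gaussian smoothing on a flat single-channel buffer, with edge replication.
function gaussian3x3(src, width, height) {
  const kernel = [1, 2, 1, 2, 4, 2, 1, 2, 1]; // weights sum to 16
  const out = new Float32Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let acc = 0;
      for (let ky = -1; ky <= 1; ky++) {
        for (let kx = -1; kx <= 1; kx++) {
          // Clamp coordinates at the border (replicate edge pixels).
          const sx = Math.min(width - 1, Math.max(0, x + kx));
          const sy = Math.min(height - 1, Math.max(0, y + ky));
          acc += src[sy * width + sx] * kernel[(ky + 1) * 3 + (kx + 1)];
        }
      }
      out[y * width + x] = acc / 16;
    }
  }
  return out;
}
```

A uniform image passes through unchanged, since the kernel weights are normalized.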


Next, we extract pixels from our priority queue until the whole image has been processed:


```javascript
while (!queue.empty()){
    var p = queue.pop(); // the pixel with the lowest weight
}
```


Inside the loop, the adjacent pixels of each extracted pixel are placed into the PQueue for further processing:


```javascript
    for (var i = 0; i < 4; i++){            // dirxy holds the four neighbor offsets
        const pos = p + dirxy[i];
        if (buff[pos] === 0){               // unvisited pixel
            buff[pos] = buff[p];            // inherit the label of the current pixel
            buff[pos + 1] = diffRGB(p, pos); // store the edge weight for the comparator
            queue.push(pos, comparator);
        }
    }
} // closes the while loop above
```


## Image Segmentation Online

Watershed Segmentation is part of our collection of online tools. You can use it to separate and remove background from images.