The watershed transform is a computer vision algorithm used for image segmentation. It can extract image objects and separate foreground from background. This tutorial shows how to implement the watershed transform via Meyer's flooding algorithm.

## How the Watershed Works

The watershed is based on a topographic surface representation: we divide the image into two sets, the **catchment basins** and the **watershed lines**. Our algorithm is based on **Meyer's flooding**, introduced by F. Meyer in the early 90's. Originally the algorithm works on a grayscale image. When it floods a gradient image, the basins should emerge at the edges of objects. The following steps describe the process:

- Initialize the object groups with pre-selected seed markers.
- Extract the neighboring pixels of each group and move them into a **priority queue**. Originally the priority level corresponds to the pixel gradient value, but a different *weighting measure* can be used.
- Retrieve the lowest-priority pixel from the queue and process it: if all of its labeled neighbors carry the same label, the pixel receives that label. Update the priority queue with all unvisited neighbors.
- Repeat step 3 until the queue is empty.

At the end all unlabeled pixels mark the object boundaries (the *watershed lines*).
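The steps above can be sketched in compact JavaScript. This toy version (illustrative names, not the demo code) floods a grayscale image directly, so the priority is simply the grey value; a naive array-sort stands in for the priority queue, and the watershed-line bookkeeping of step 3 is omitted, so ridge pixels just take the first label that reaches them:

```javascript
// Toy sketch of Meyer's flooding on a grayscale image.
function meyerFlood(gray, width, height, seeds){
    var labels = new Int32Array(width * height); // 0 = unlabeled
    var queue = [];                              // [priority, pixel] pairs

    // Step 1: mark the seed pixels and queue them
    seeds.forEach(function(seed){
        seed.pixels.forEach(function(p){
            labels[p] = seed.label;
            queue.push([gray[p], p]);
        });
    });

    var dirs = [-1, 1, -width, width];
    while (queue.length){
        // Step 3: process the lowest-priority pixel first
        queue.sort(function(a, b){ return b[0] - a[0]; });
        var p = queue.pop()[1];
        // Step 2: push unvisited neighbors into the queue
        for (var i = 0; i < 4; i++){
            var q = p + dirs[i];
            if (q < 0 || q >= labels.length) continue;             // off the image
            if (Math.abs((q % width) - (p % width)) > 1) continue; // row wrap
            if (labels[q] === 0){
                labels[q] = labels[p]; // flood: inherit the label
                queue.push([gray[q], q]);
            }
        }
    }
    return labels;
}
```

On a one-row "image" `[0, 5, 9, 5, 0]` with seeds at both ends, the two basins flood toward the bright ridge in the middle, and every pixel ends up with one of the two seed labels.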

## Initialize Segmentation

Initially, the algorithm must select the starting points from which the flooding begins. A common choice of markers is the set of local minima of the gradient image, but there are many strategies for choosing seed points. Our HTML5 demo application implements user-controlled marker selection: use *Left Mouse Click* and *Right Mouse Click* to mark foreground and background areas. Several articles discuss algorithms for automatic seed selection, such as **binarization**, **morphological opening**, and the **distance transform**. Although this part of the segmentation process is not the focus of this post, we plan to review it in future articles.
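For instance, a minimal automatic strategy (a hypothetical helper, not part of the demo) could seed every strict local minimum of a grayscale or gradient image:

```javascript
// Sketch: pick seed markers at the strict local minima of a grayscale image.
function localMinimaSeeds(gray, width, height){
    var seeds = [];
    for (var y = 1; y < height - 1; y++){
        for (var x = 1; x < width - 1; x++){
            var p = y * width + x;
            var v = gray[p];
            // A pixel is a seed if it is strictly darker than its 4 neighbors
            if (v < gray[p - 1] && v < gray[p + 1] &&
                v < gray[p - width] && v < gray[p + width]){
                seeds.push(p);
            }
        }
    }
    return seeds;
}
```

Real implementations usually smooth the image first, since raw minima are noisy and produce heavy over-segmentation.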

### Priority Queue Details

Our HTML5 implementation of watershed image segmentation is based on a custom **JavaScript priority queue** object. While extracting pixels, we take the neighbors of each point and push them into the queue. The **push** method finds the proper position with a simple **binary search**, so the list stays sorted throughout the process. The node comparator is supplied by the caller, which allows flexible `PQueue` usage.

```
var PQueue = function(){
    this.nodes = [];
};

PQueue.prototype.push = function(el, comparator){
    var nodes = this.nodes;
    var offset = 0,
        size = nodes.length;
    // Binary search for the insertion point; the list is kept sorted
    // so that the lowest-priority element sits at the end.
    while (offset < size){
        const pos = ((offset + size) >>> 1);
        const diff = comparator(el, nodes[pos]);
        ((diff < 0) ? (offset = pos + 1) : (size = pos));
    }
    nodes.splice(offset, 0, el);
};
```
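The flooding loop further below also calls `queue.pop()` and `queue.empty()`, which the original snippet does not show. Here is the queue again as a standalone unit, together with an assumed minimal version of those two helpers and a small usage example:

```javascript
var PQueue = function(){
    this.nodes = [];
};

// push() as above: binary search keeps nodes sorted descending by priority
PQueue.prototype.push = function(el, comparator){
    var nodes = this.nodes;
    var offset = 0, size = nodes.length;
    while (offset < size){
        const pos = ((offset + size) >>> 1);
        ((comparator(el, nodes[pos]) < 0) ? (offset = pos + 1) : (size = pos));
    }
    nodes.splice(offset, 0, el);
};

// Assumed helpers: the lowest-priority element sits at the end of the list
PQueue.prototype.pop = function(){
    return this.nodes.pop();
};
PQueue.prototype.empty = function(){
    return this.nodes.length === 0;
};

// Usage: with an ascending comparator, pop() yields the smallest value first
var q = new PQueue();
var cmp = function(a, b){ return a - b; };
q.push(5, cmp);
q.push(1, cmp);
q.push(3, cmp);
// q.nodes is now [5, 3, 1]; q.pop() returns 1, then 3, then 5
```

Keeping the minimum at the *end* of the array makes `pop()` an O(1) `Array.prototype.pop` call; the cost is paid on insertion by `splice`.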

### Distance between pixels

Typically, algorithms use a gradient image to measure the distance between pixels. In our demo application we use a different weighting function, based on the improved **RGB Euclidean distance** [2]. With `\bar{r} = (R_1 + R_2)/2` denoting the mean red level of the two pixels, the distance between the center point and the selected neighbor is:

`\sqrt{\left(2 + \frac{\bar{r}}{256}\right)\Delta R^2 + 4\Delta G^2 + \left(2 + \frac{255 - \bar{r}}{256}\right)\Delta B^2}`

The equation is implemented in the following **JavaScript** code segment:

```
function diffRGB(p1, p2){
    // Mean red level of the two pixels (data is the RGBA pixel buffer)
    const rmean = (data[p1 + 0] + data[p2 + 0]) >>> 1;
    const r = (data[p1 + 0] - data[p2 + 0]),
          g = (data[p1 + 1] - data[p2 + 1]),
          b = (data[p1 + 2] - data[p2 + 2]);
    // (512 + rmean)/256 = 2 + rmean/256 and (767 - rmean)/256 = 2 + (255 - rmean)/256
    return Math.sqrt((((512 + rmean) * r * r) >> 8) + 4 * g * g + (((767 - rmean) * b * b) >> 8));
}
```
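As a quick sanity check, the function can be run on an illustrative two-pixel buffer (here `data` stands in for the demo's canvas pixel array); a pure red and a pure blue pixel come out near the maximum of the metric:

```javascript
// Two RGBA pixels packed like canvas ImageData: pure red, then pure blue
var data = [255, 0, 0, 255,   0, 0, 255, 255];

// diffRGB as in the demo, reading from the data buffer above
function diffRGB(p1, p2){
    const rmean = (data[p1 + 0] + data[p2 + 0]) >>> 1;
    const r = (data[p1 + 0] - data[p2 + 0]),
          g = (data[p1 + 1] - data[p2 + 1]),
          b = (data[p1 + 2] - data[p2 + 2]);
    return Math.sqrt((((512 + rmean) * r * r) >> 8) + 4 * g * g + (((767 - rmean) * b * b) >> 8));
}

var d = diffRGB(0, 4); // byte offsets of the two pixels in the buffer
// d is roughly 570 -- red and blue are far apart under this metric
```

Identical pixels give a distance of exactly 0, so unchanged regions flood at the highest priority.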

### Watershed image partition

First, we eliminate image noise with a Gaussian filter using a small sigma value. Then we initialize the image buffer with label values corresponding to the input seeds:

```
for (var i = 0, l = seeds.length; i < l; i++){
    var p = seeds[i].pixels;
    for (var j = 0, c = p.length; j < c; j++){
        buff[p[j]] = (i + 1) * 80; // e.g. grey values 80 and 160 for two seeds
        addPoint(p[j]);
    }
}
```

As a next step, we extract pixels from the priority queue until the whole image is processed:

```
while (!queue.empty()){
    var el = queue.pop(); // lowest-priority (closest) pixel first
    addPoint(el);
}
```

The adjacent pixels are extracted and placed into the `PQueue` (*Priority Queue*) for further processing. Note that `buff` parallels the RGBA pixel layout, so each pixel's label is kept at offset `+0` and its weight at offset `+1`, and `dirxy` holds the four neighbor offsets in that layout:

```
function addPoint(p){
    // Visit the four direct neighbors (offsets stored in dirxy)
    for (var i = 0; i < 4; i++){
        const pos = p + dirxy[i];
        if (buff[pos] === 0){                // unlabeled pixel
            buff[pos] = buff[p];             // propagate the current label
            buff[pos + 1] = diffRGB(p, pos); // store its weight next to the label
            queue.push(pos, comparator);
        }
    }
}
```
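The comparator passed to `queue.push` is never shown. Since `addPoint` stores each pixel's weight at `buff[pos + 1]`, a minimal assumed version (with `buff` stubbed here so the snippet runs standalone) could be:

```javascript
// buff stub: in the demo this is the shared label/weight buffer
var buff = new Float32Array(16);

// Assumed comparator: compares the weights stored at offset +1.
// In PQueue.push a negative result moves the element toward the end of the
// list, so pop() (taken from the end) returns the smallest weight first.
function comparator(a, b){
    return buff[a + 1] - buff[b + 1];
}

// Example: pixel at offset 0 has weight 5, pixel at offset 4 has weight 2
buff[0 + 1] = 5;
buff[4 + 1] = 2;
// comparator(0, 4) > 0: the lighter pixel at offset 4 outranks offset 0
```

With this ordering, the flooding loop always expands the basin across the smallest color difference first, which is exactly the behavior Meyer's algorithm requires.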