1D Edge Detection and Its Optimizations
- Simple method: compute the gradient and look for large responses
- However, with high-frequency noise, the gradient becomes even noisier, since differentiation amplifies noise
- Therefore, Gaussian smoothing is applied before edge detection
- Gaussian smoothing requires one convolution, and computing the gradient requires another
- By the derivative theorem of convolution, $\frac{d}{dx}(g * f) = \frac{dg}{dx} * f$, these two convolutions can be combined into one: just convolve the original signal with the derivative of the Gaussian kernel, then find the maxima and minima of the response and threshold them (see the sketch after this list)
- Even faster: convolve with the second derivative of the Gaussian kernel and find its zero crossings, which mark the maxima and minima of the first-derivative response
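A minimal sketch of this 1D pipeline in NumPy; the toy signal and the $\sigma$/threshold values are illustrative assumptions, not from the notes above:

```python
import numpy as np

def gaussian_deriv_kernel(sigma, truncate=3.0):
    # First derivative of a Gaussian, sampled on an integer grid.
    radius = int(np.ceil(truncate * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return -x / sigma**2 * g  # d/dx of the Gaussian

def edges_1d(signal, sigma=2.0, thresh=0.1):
    # One convolution with the derivative-of-Gaussian kernel replaces
    # "smooth, then differentiate"; extrema of the response mark edges.
    response = np.convolve(signal, gaussian_deriv_kernel(sigma), mode="same")
    return np.flatnonzero(np.abs(response) > thresh), response

# Toy example: a noisy step edge at index 50.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
edge_idx, response = edges_1d(signal)
print(edge_idx)  # indices clustered around the step
```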
Scales
- Larger $\sigma$ gives coarser edges
- Smaller $\sigma$ gives finer edges, but can be sensitive to noise
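Reusing the `edges_1d` sketch above, the trade-off can be seen by running the same detector at two scales (again with illustrative values):

```python
# Small sigma: sharp localization, but noise may also cross the threshold.
fine_idx, _ = edges_1d(signal, sigma=1.0, thresh=0.05)
# Large sigma: noise is suppressed, but the response is spread out,
# so the edge is localized more coarsely and nearby edges can merge.
coarse_idx, _ = edges_1d(signal, sigma=5.0, thresh=0.05)
print(fine_idx, coarse_idx)
```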
2D Edge Detection
- Similar to 1D, but non-maximum suppression is used to ensure only one edge response per location (see the sketch at the end of this section)
- Canny Edge vs Marr-Hildreth
- Canny is directional (it uses the gradient orientation); Marr-Hildreth is isotropic (it uses the Laplacian of Gaussian)
- Marr-Hildreth is computationally cheaper and avoids storing gradient vectors
- For implementation, the infinite summation is infeasible, so the kernel is truncated where its values fall below 1/1000 of the peak; larger $\sigma$ therefore requires a larger kernel
- Decompose the 2D convolution into two 1D convolutions to save computation (the Gaussian is separable); see the implementation sketch below
- Implement the differentiation as a finite-difference approximation derived from Taylor-series expansions, so the derivative can be computed as a convolution with the kernel $[1/2, 0, -1/2]$:
\( \frac{\partial S}{\partial x} \approx \frac{S(x+1,y) - S(x-1,y)}{2} \)
and since convolution flips the kernel, the filter actually applied is
\( \begin{bmatrix} -1/2 & 0 & 1/2 \end{bmatrix} \)
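A sketch of these implementation points, assuming NumPy and SciPy are available (`scipy.ndimage.convolve1d` performs true convolution, i.e. it flips the kernel as noted above); kernel sizes follow the 1/1000-of-peak truncation rule:

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel(sigma):
    # Truncate where exp(-x^2 / (2 sigma^2)) < 1/1000, i.e.
    # |x| > sigma * sqrt(2 ln 1000) (about 3.72 sigma), so a
    # larger sigma needs a larger kernel.
    radius = int(np.ceil(sigma * np.sqrt(2 * np.log(1000))))
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def smooth_and_grad(image, sigma=2.0):
    # Separable smoothing: two 1D convolutions instead of one 2D pass
    # (O(2k) work per pixel instead of O(k^2) for a k-wide kernel).
    g = gaussian_kernel(sigma)
    smoothed = convolve1d(convolve1d(image, g, axis=0), g, axis=1)
    # Central difference [1/2, 0, -1/2]; convolve1d flips it, so the
    # filter slid over the image is [-1/2, 0, 1/2] as in the equation.
    d = np.array([0.5, 0.0, -0.5])
    gx = convolve1d(smoothed, d, axis=1)
    gy = convolve1d(smoothed, d, axis=0)
    return smoothed, gx, gy
```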
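And a minimal sketch of the non-maximum suppression step mentioned above, using a simple 4-direction quantization as in a basic Canny implementation; `gx` and `gy` are the gradient images from the previous sketch, and the loop is written for clarity, not speed:

```python
import numpy as np

def non_max_suppression(gx, gy):
    # Keep a pixel only if its gradient magnitude is a local maximum
    # along the (quantized) gradient direction: one response per edge.
    mag = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180  # orientation in [0, 180)
    out = np.zeros_like(mag)
    # Neighbor offsets for the four quantized directions (degrees).
    offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            # Nearest quantized direction, with wrap-around at 180.
            d = min(offsets, key=lambda a: min(abs(angle[i, j] - a),
                                               180 - abs(angle[i, j] - a)))
            di, dj = offsets[d]
            if (mag[i, j] >= mag[i + di, j + dj]
                    and mag[i, j] >= mag[i - di, j - dj]):
                out[i, j] = mag[i, j]
    return out
```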