Is the Laplacian of Gaussian used for blob detection or edge detection?

Date: 2022-11-11

Problem description


The following code comes from (was asked to remove the link). But I was wondering how exactly it works. I am confused about whether this is considered edge detection or blob detection, as Wikipedia lists the Laplacian of Gaussian (LoG) as a blob detection method.

Also, could somebody explain in more depth why the absolute value is calculated and what is going on in the focus_stack() function?

import cv2
import numpy as np

#   Compute the gradient map of the image
def doLap(image):

    # YOU SHOULD TUNE THESE VALUES TO SUIT YOUR NEEDS
    kernel_size = 5         # Size of the Laplacian window
    blur_size = 5           # How big of a kernel to use for the Gaussian blur
                            # Generally, keeping these two values the same or very close works well
                            # Also, odd numbers, please...

    blurred = cv2.GaussianBlur(image, (blur_size, blur_size), 0)
    return cv2.Laplacian(blurred, cv2.CV_64F, ksize=kernel_size)

#
#   This routine finds the points of best focus in all images and produces a merged result...
#
def focus_stack(unimages):
    images = align_images(unimages)     # align_images() is defined elsewhere in the original script

    print("Computing the laplacian of the blurred images")
    laps = []
    for i in range(len(images)):
        print("Lap {}".format(i))
        laps.append(doLap(cv2.cvtColor(images[i], cv2.COLOR_BGR2GRAY)))

    laps = np.asarray(laps)
    print("Shape of array of laplacians = {}".format(laps.shape))

    output = np.zeros(shape=images[0].shape, dtype=images[0].dtype)

    abs_laps = np.absolute(laps)
    maxima = abs_laps.max(axis=0)
    bool_mask = abs_laps == maxima
    mask = bool_mask.astype(np.uint8)
    for i in range(0, len(images)):
        output = cv2.bitwise_not(images[i], output, mask=mask[i])

    return 255 - output

Solution

EDIT: Cris Luengo is right. Ignore the part about the edge detector.


The Laplacian of Gaussian (LoG) can be used as both an edge detector and a blob detector. I will skip the detailed mathematics and rationale; I think you can read about them in a book or on some websites here, here and here.

To see why it can be used as both, let's look at its plot and kernel.
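For reference, the standard closed forms of the 2-D Gaussian and its Laplacian (the LoG kernel), with sigma the standard deviation, are:

    G_\sigma(x, y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)

    \nabla^2 G_\sigma(x, y) = \frac{x^2 + y^2 - 2\sigma^2}{2\pi\sigma^6} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)

The kernel is negative at the centre and positive in a surrounding ring (it changes sign at x^2 + y^2 = 2\sigma^2), which is why a bright blob of matching size drives the response strongly negative.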

If you have a blob with a radius of 3 and value 1 centered on the kernel, and the background has value 0, you will get a very strong (negative) response. It is clear why it can do blob detection if the radius is set properly.
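As a quick sanity check (my own sketch, not the poster's code), you can reproduce that strong negative response on a synthetic bright disc; the sizes below are just illustrative values:

import cv2
import numpy as np

# A bright disc of radius 3 (value 1) on a dark background (value 0).
yy, xx = np.mgrid[0:41, 0:41]
img = (((xx - 20) ** 2 + (yy - 20) ** 2) <= 3 ** 2).astype(np.float64)

# Smooth, then take the Laplacian, i.e. a LoG response, as in doLap() above.
blurred = cv2.GaussianBlur(img, (9, 9), 0)
log_resp = cv2.Laplacian(blurred, cv2.CV_64F, ksize=5)

# The bright blob produces a strong negative response around its centre.
print(log_resp.min(), np.unravel_index(log_resp.argmin(), log_resp.shape))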

How about edge detection? Well, it is not like the Sobel operator, which gives you a gradient and a strong response at edges. The Sobel operator does not give you accurate edges, as the gradient usually rises and falls across a few pixels; your edge would then be several pixels wide. To localize it more accurately, we can find the pixel with the maximum (or minimum) gradient locally. This implies its second derivative (Laplacian) should equal zero, or have a zero-crossing, at that point.

You can see the processed image has both a light and a dark band. The zero-crossing is the edge. To see this with a kernel, try sliding a perfect step edge across the kernel manually and watch how the response changes.
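Here is a minimal 1-D sketch of that experiment (my own illustration, not from the original answer), contrasting the wide gradient response with the Laplacian's zero-crossing:

import numpy as np

# A perfect step edge at index 10, lightly smoothed to mimic the Gaussian blur.
signal = np.zeros(21)
signal[10:] = 1.0
smooth = np.convolve(signal, np.array([1, 4, 6, 4, 1]) / 16.0, mode="same")

gradient = np.gradient(smooth)      # first derivative: a bump several pixels wide
laplacian = np.gradient(gradient)   # second derivative: a positive then a negative lobe

# Around the edge, the gradient stays non-zero over several pixels,
# while the Laplacian changes sign (zero-crossing) right at the edge.
print(np.round(gradient[6:15], 3))
print(np.round(laplacian[6:15], 3))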

For your second question, I guess the absolute value is taken to find both light and dark blobs (light blob on a dark background; dark blob on a light background), as they give a strong negative and a strong positive response respectively. It then finds the maximum across all images at each pixel location. For each output pixel, it uses the pixel from the image with the maximum response as the output. I think his rationale is that pixels with a strong impulse (small blob) are in focus.
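In NumPy terms, the selection step described here could also be written directly with argmax. This is a sketch of the idea with toy stand-in arrays, not the original code:

import numpy as np

# Toy stand-ins for the arrays built inside focus_stack():
# laps   -- Laplacian responses, shape (n_images, h, w)
# images -- aligned colour images, shape (n_images, h, w, 3)
rng = np.random.default_rng(0)
laps = rng.standard_normal((4, 5, 6))
images = rng.integers(0, 256, size=(4, 5, 6, 3), dtype=np.uint8)

abs_laps = np.absolute(laps)       # treat dark blobs and bright blobs alike
best = abs_laps.argmax(axis=0)     # per pixel: index of the image with the strongest response

rows, cols = np.indices(best.shape)
output = images[best, rows, cols]  # take each output pixel from its sharpest source image
print(output.shape)                # (5, 6, 3)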

He is using bitwise_not as a copy mechanism. It sets the pixels specified by the mask to the bitwise NOT of the source image. At the end, you would have an output consisting of pixels from different source images, except that all of them have undergone a bitwise NOT. To recover the true image, simply NOT them again, as NOT(NOT(x)) = x. 255-x does exactly that. I think copyTo would work too; not sure why he chose otherwise.
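A tiny demonstration of that double-negation trick (my own example, with arbitrary values), using the same OpenCV call as the question:

import cv2
import numpy as np

src = np.array([[10, 20], [30, 40]], dtype=np.uint8)
mask = np.array([[1, 0], [0, 1]], dtype=np.uint8)   # copy only the masked pixels

output = np.zeros_like(src)
output = cv2.bitwise_not(src, output, mask=mask)    # masked pixels become NOT(src) = 255 - src

print(output)        # [[245   0] [  0 215]]
print(255 - output)  # [[ 10 255] [255  40]] : NOT-ing again recovers the copied values
# (In focus_stack the masks of all images together cover every pixel,
#  so every output pixel ends up holding a doubly negated, valid value.)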

Images taken from http://fourier.eng.hmc.edu/e161/lectures/gradient/node8.html.
