Python OpenCV HoughLinesP can't detect lines


Problem Description

    I am using OpenCV HoughLinesP to find horizontal and vertical lines. It is not finding any lines most of the time. Even when it does find a line, it is not even close to the actual image.

    import cv2
    import numpy as np
    
    img = cv2.imread('image_with_edges.jpg')
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    
    
    flag,b = cv2.threshold(gray,0,255,cv2.THRESH_OTSU)
    
    element = cv2.getStructuringElement(cv2.MORPH_CROSS,(1,1))
    cv2.erode(b,element)
    
    edges = cv2.Canny(b,10,100,apertureSize = 3)
    
    lines = cv2.HoughLinesP(edges,1,np.pi/2,275, minLineLength = 100, maxLineGap = 200)[0].tolist()
    
    for x1,y1,x2,y2 in lines:
       for index, (x3,y3,x4,y4) in enumerate(lines):
    
        if y1==y2 and y3==y4: # Horizontal Lines
            diff = abs(y1-y3)
        elif x1==x2 and x3==x4: # Vertical Lines
            diff = abs(x1-x3)
        else:
            diff = 0
    
        if diff < 10 and diff is not 0:
            del lines[index]
    
        gridsize = (len(lines) - 2) / 2
    
       cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2)
       cv2.imwrite('houghlines3.jpg',img)
    

    Input Image:

    Output Image (see the red line):

    @ljetibo Try this with: c_6.jpg

    Solution

    There's quite a bit wrong here so I'll just start from the beginning.

    Ok, first thing you do after opening an image is thresholding. I strongly recommend that you have another look at the OpenCV manual on thresholding and the exact meaning of the threshold methods.

    The manual mentions that

    cv2.threshold(src, thresh, maxval, type[, dst]) → retval, dst

    the special value THRESH_OTSU may be combined with one of the above values. In this case, the function determines the optimal threshold value using the Otsu’s algorithm and uses it instead of the specified thresh .

    I know it's a bit confusing because you don't actually combine THRESH_OTSU with any of the other methods (THRESH_BINARY etc...); unfortunately the manual can be like that. What this method actually does is assume that there's a "foreground" and a "background" that follow a bi-modal histogram, and then apply THRESH_BINARY with the threshold it computed, I believe.
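
    Just to make that concrete, here is a minimal sketch (my own, not part of the original answer) showing that with THRESH_OTSU the computed cutoff comes back as the first return value; it assumes the same 'image_with_edges.jpg' input as the question:

    import cv2

    img = cv2.imread('image_with_edges.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # with THRESH_OTSU the `thresh` argument (0 here) is ignored; Otsu picks
    # its own cutoff and returns it as the first value
    otsu_thresh, otsu_img = cv2.threshold(gray, 0, 255,
                                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    print("Otsu chose a threshold of", otsu_thresh)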

    Imagine this as if you're taking a picture of a cathedral or a high building at midday. On a sunny day the sky will be very bright and blue, and the cathedral/building will be quite a bit darker. This means the group of pixels belonging to the sky will all have high brightness values, that is, they will sit on the right side of the histogram, while the pixels belonging to the church will be darker, that is, towards the middle and left side of the histogram.

    Otsu uses this to try and guess the right "cutoff" point, called thresh. For your image Otsu's alg. supposes that all that white on the side of the map is the background, and the map itself the foreground. Therefore your image after thresholding looks like this:

    After this point it's not hard to guess what goes wrong. But let's go on. What you're trying to achieve is, I believe, something like this:

    flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
    

    Then you go on and try to erode the image. I'm not sure why you're doing this: was your intention to "bold" the lines, or was it to remove noise? In any case, you never assigned the result of the erosion to anything. Numpy arrays, which are how images are represented, are mutable, but that's not how the syntax works:

    cv2.erode(src, kernel, [optionalOptions] ) → dst
    

    So you have to write:

    b = cv2.erode(b,element)
    

    Ok, now for the element and how erosion works. Erosion drags a kernel over an image. The kernel is a simple matrix with 1's and 0's in it. One of the elements of that matrix, usually the centre one, is called the anchor. The anchor is the element that will be replaced at the end of the operation. When you created

    cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
    

    what you created is actually a 1x1 matrix (1 column, 1 row). This makes erosion completely useless.
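
    (A quick check of that in a REPL, added here for illustration:)

    >>> cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
    array([[1]], dtype=uint8)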

    What erosion does is first retrieve all the pixel brightness values from the original image wherever the kernel element overlapping the image segment has a "1". It then finds the minimum of the retrieved pixels and replaces the anchor with that value.
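
    A tiny numerical sketch of that minimum rule (my own illustration, not from the original answer):

    import cv2
    import numpy as np

    # a mostly-white 5x5 patch with a single black pixel in the middle
    patch = np.full((5, 5), 255, dtype=np.uint8)
    patch[2, 2] = 0

    kernel = np.ones((3, 3), np.uint8)   # every element of the kernel is a "1"
    eroded = cv2.erode(patch, kernel)

    # every anchor whose 3x3 neighbourhood touches the black pixel takes the
    # neighbourhood minimum (0), so the single black pixel grows into a 3x3 block
    print(eroded)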

    What this means, in your case, is that you drag a [1] matrix over the image, check whether each source pixel's brightness is larger than, equal to, or smaller than itself, and then replace it with itself.

    If your intention was to remove "noise", then it's probably better to use a rectangular kernel over the image. Think of it this way: "noise" is the thing that "doesn't fit in" with its surroundings. So if you compare your centre pixel with its surroundings and find that it doesn't fit, it's most likely noise.

    Additionally, I've said that it replaces the anchor with the minimal value retrieved by the kernel. Numerically, the minimal value is 0, which happens to be how black is represented in the image. This means that in your case of a predominantly white image, erosion "bloats up" the black pixels: erosion will replace the 255-valued white pixels with 0-valued black pixels whenever they're within reach of the kernel. In any case, the kernel shouldn't be of shape (1,1), ever.

    >>> cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    array([[0, 1, 0],
           [1, 1, 1],
           [0, 1, 0]], dtype=uint8)
    

    If we erode the second image with a 3x3 rectangular kernel, we get the image below.

    Ok, now that we've got that out of the way, the next thing you do is find edges using Canny edge detection. The image you get from that is:

    Ok, now we look for EXACTLY vertical and EXACTLY horizontal lines ONLY. Of course there are no such lines apart from the meridian on the left of the image (is that what it's called?), and the final image you get after doing it right would be this:

    Now, since you never described your exact idea, and my best guess is that you want the parallels and meridians, you'll have more luck on maps with a lesser scale, because those aren't lines to begin with, they are curves. Additionally, is there a specific reason to use the probabilistic Hough? Doesn't the "regular" Hough suffice?
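
    For reference, a rough side-by-side of the two calls (my own note; `edges` is assumed to be the Canny output from above, and the [0]-style indexing used in the question and in the script further down matches the old OpenCV 2.x return layout, while OpenCV 3+ returns one line per row):

    import cv2
    import numpy as np

    # probabilistic Hough: finite segments, one (x1, y1, x2, y2) per detection
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 200,
                               minLineLength=100, maxLineGap=200)

    # standard Hough: infinite lines described by (rho, theta); you choose the
    # points to draw yourself, as draw_lines() does in the script below
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)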

    Sorry for the too-long post, hope it helps a bit.


    The text here was added in response to a request for clarification from the OP on Nov. 24th, because there's no way to fit the answer into a character-limited comment.

    I'd suggest the OP ask a new question more specific to the detection of curves, because you are dealing with curves, OP, not horizontal and vertical lines.

    There are several ways to detect curves, but none of them are easy. In order from simplest to implement to hardest:

    1. Use the RANSAC algorithm. Develop a formula describing the nature of the long. and lat. lines depending on the map in question, i.e. latitude curves will be almost perfectly straight lines on the map when you're near the equator (with the equator being the perfectly straight line), but will be very curved, resembling circle segments, when you're at high latitudes (near the poles). SciPy already has RANSAC implemented as a class; all you have to do is find it and programmatically define the model you want to try to fit to the curves. Of course there's the ever-useful 4dummies text here. This is the easiest because all you have to do is the math (see the sketch after this list).
    2. A bit harder would be to create a rectangular grid and then try to use cv findHomography to warp the grid into place on the image. For the various geometric transformations you can apply to the grid, check out the OpenCV manual. This is a sort of hack-ish approach and might work worse than 1. because it depends on being able to re-create a grid with enough detail and objects on it that cv can identify the structures in the image you're trying to warp it onto. This one requires you to do math similar to 1. and just a bit of coding to compose the end solution out of several different functions.
    3. To actually do it: there are mathematically neat ways of describing curves as a list of tangent lines on the curve. You can try to fit a bunch of shorter HoughLines to your image or image segment, then group all the found lines and determine, by assuming that they're tangents to a curve, whether they really follow a curve of the desired shape or are just random. See this paper on the matter. Out of all the approaches this one is the hardest because it requires quite a bit of solo coding and some math about the method.
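
    For point 1., here is a minimal sketch of what the model fitting could look like, assuming scikit-learn's RANSACRegressor as the RANSAC implementation and a plain quadratic as a stand-in curve model (the file name 'a.jpg', the degree-2 model and the threshold are my own illustrative choices, not from the original answer):

    import cv2
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import RANSACRegressor

    edges = cv2.Canny(cv2.imread('a.jpg', cv2.IMREAD_GRAYSCALE), 10, 100)

    # every edge pixel becomes a candidate (x, y) point for the curve fit
    ys, xs = np.nonzero(edges)
    x = xs.reshape(-1, 1).astype(float)
    y = ys.astype(float)

    # model a latitude curve as a gentle quadratic; RANSAC keeps the points
    # that agree with the fit (inliers) and ignores the rest as outliers
    model = make_pipeline(PolynomialFeatures(degree=2),
                          RANSACRegressor(residual_threshold=3.0))
    model.fit(x, y)
    inliers = model.named_steps['ransacregressor'].inlier_mask_
    print("curve fitted through", inliers.sum(), "inlier edge points")

    In practice you would fit one model per candidate curve (e.g. per connected edge component) rather than throwing all edge points at a single fit.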

    There could be easier ways; I've never actually had to deal with curve detection before. Maybe there are tricks to do it more easily, I don't know. If you ask a new question, one that hasn't been closed as answered already, you might have more people notice it. Do make sure to ask a full and complete question on the exact topic you're interested in. People won't usually spend so much time writing on such a broad topic.

    To show you what you can do with just the Hough transform, check out the code below:

    import cv2
    import numpy as np
    
    def draw_lines(hough, image, nlines):
       n_x, n_y=image.shape
       #convert to color image so that you can see the lines
       draw_im = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
    
       for (rho, theta) in hough[0][:nlines]:
          try:
             x0 = np.cos(theta)*rho
             y0 = np.sin(theta)*rho
             pt1 = ( int(x0 + (n_x+n_y)*(-np.sin(theta))),
                     int(y0 + (n_x+n_y)*np.cos(theta)) )
             pt2 = ( int(x0 - (n_x+n_y)*(-np.sin(theta))),
                     int(y0 - (n_x+n_y)*np.cos(theta)) )
             alph = np.arctan( (pt2[1]-pt1[1])/( pt2[0]-pt1[0]) )
             alphdeg = alph*180/np.pi
             #OpenCv uses weird angle system, see: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
             if abs( np.cos( alph - 180 )) > 0.8: #0.995:
                cv2.line(draw_im, pt1, pt2, (255,0,0), 2)
             if rho>0 and abs( np.cos( alphdeg - 90)) > 0.7:
                cv2.line(draw_im, pt1, pt2, (0,0,255), 2)    
          except:
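              # (near-)vertical lines give pt2[0] == pt1[0], so the division in the
              # arctan above raises ZeroDivisionError; just skip those lines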
             pass
       cv2.imwrite("/home/dino/Desktop/3HoughLines.png", draw_im,
                 [cv2.IMWRITE_PNG_COMPRESSION, 12])   
    
    img = cv2.imread('a.jpg')
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    
    flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
    cv2.imwrite("1tresh.jpg", b)
    
    element = np.ones((3,3))
    b = cv2.erode(b,element)
    cv2.imwrite("2erodedtresh.jpg", b)
    
    edges = cv2.Canny(b,10,100,apertureSize = 3)
    cv2.imwrite("3Canny.jpg", edges)
    
    hough = cv2.HoughLines(edges, 1, np.pi/180, 200)   
    draw_lines(hough, b, 100)
    

    As you can see from the image below, the straight lines are only the longitudes. The latitudes are not as straight, so for each latitude you get several detected lines that behave like tangents to the curve. The blue lines are drawn by the if abs( np.cos( alph - 180 )) > 0.8: condition, while the red lines are drawn by the rho>0 and abs( np.cos( alphdeg - 90)) > 0.7 condition. Pay close attention when comparing the original image with the image that has the lines drawn on it. The resemblance is uncanny (heh, get it?), but because they're not lines a lot of it only looks like junk (especially that highest detected latitude line, which seems too "angled"; in reality those lines make a perfect tangent to the latitude line at its thickest point, just as the Hough algorithm demands). Acknowledge that there are limitations to detecting curves with a line-detection algorithm.
