Trajectory intersection in Python

Date: 2022-11-19
This article describes how to find the intersection of object trajectories in Python and may serve as a reference for readers facing a similar problem.

Problem description


I'm detecting persons and vehicles using TensorFlow and Python. I calculate the trajectories, predict them using a Kalman filter, and fit a line to predict the trajectory.

My problem is: how do I find the intersection point and the time of collision between two trajectories?

I tried line-to-line intersection, but the fitted line is not always a two-point line; it can be a polyline. Here is my attempt:

    detections = tracker.update(np.array(z_box))

    for trk in detections[0]:
        trk = trk.astype(np.int32)
        helpers.draw_box_label(img, trk, trk[4])  # draw the bounding box on the image
        # box center, appended to this track's point history (keyed by track ID trk[4])
        centerCoord = ((trk[1] + trk[3]) / 2, (trk[0] + trk[2]) / 2)
        point_lists[trk[4]].append(centerCoord)
        # fit a straight line through the track's history and draw it as a polyline
        x = [i[0] for i in point_lists[trk[4]]]
        y = [i[1] for i in point_lists[trk[4]]]
        p = np.polyfit(x, y, deg=1)
        y = p[1] + p[0] * np.array(x)
        fitted = list(zip(x, y))
        cv2.polylines(img, np.int32([fitted]), False, color=(255, 0, 0))
        # compare this track's fitted line against every other track's fitted line
        for other in detections[0]:
            other = other.astype(np.int32)
            if other[4] != trk[4]:  # skip comparing the track with itself
                x2 = [i[0] for i in point_lists[other[4]]]
                y2 = [i[1] for i in point_lists[other[4]]]
                p2 = np.polyfit(x2, y2, deg=1)
                y2 = p2[1] + p2[0] * np.array(x2)
                other_fitted = list(zip(x2, y2))
                if line_intersection(fitted, other_fitted):
                    print("intersection")
                else:
                    print("not intersection")

Solution

This is a bit of a broader topic, so I will focus only on the math/physics part, as I got the feeling the CV/DIP part is already handled by both of you askers (andre ahmed and chris burgees).

For simplicity I am assuming linear movement with constant speed. So how to do this:

  1. obtain the 2D position of each object in 2 separate frames a known time dt apart

    so obtain the 2D center (or corner or whatever) position on the image for each object in question.

  2. convert them to 3D

    so using known camera parameters or known background info about the scene, you can un-project the 2D position on screen into a 3D position relative to the camera. This gets rid of the non-linear interpolation that would otherwise be needed if this were handled as a purely 2D case.

    There are more options for obtaining the 3D position, depending on what you have at your disposal. For example:

    • Transformation of 3D objects related to vanishing points and horizon line
  3. obtain the actual speed of the objects

    the speed vector is simply:

    vel = ( pos(t+dt) - pos(t) )/dt
    

    so simply subtract the positions of the same object in 2 consecutive frames and divide by the framerate period (or the interval between the frames used).

  4. test each pair of objects for collision

    this is the fun part. Yes, you can solve a system of inequalities like:

    | ( pos0 + vel0 * t ) - (pos1 + vel1 * t ) | <= threshold
    

    but there is a simpler way, which I used here:

    • Collision detection between 2 "linearly" moving objects in WGS84

    The idea is to compute the time t at which the tested objects are closest together (if they are nearing each other).

    so we can extrapolate the future position of each object like this:

    pos(t) = pos(t0) + vel*(t-t0)
    

    where t is actual time and t0 is some start time (for example t0=0).

    Let's assume we have 2 objects (pos0,vel0,pos1,vel1) we want to test, so first compute 2 iterations of their distance:

    pos0(0) = pos0;
    pos1(0) = pos1;
    dis0 = | pos1(0) - pos0(0) |
    
    pos0(dt) = pos0 + vel0*dt;
    pos1(dt) = pos1 + vel1*dt;
    dis1 = | pos1(dt) - pos0(dt) |
    

    where dt is some small enough time (to avoid skipping through a collision). Now if (dis0<dis1) the objects are moving away from each other, so there is no collision; if (dis0==dis1) the objects are not moving, or are moving parallel to each other; and only if (dis0>dis1) are the objects nearing each other, so we can estimate:

    dis(t) = dis0 + (dis1-dis0)*t
    

    and a collision means dis(t)=0, so we can extrapolate again:

    0 = dis0 + (dis1-dis0)*t
    (dis0-dis1)*t = dis0 
    t = dis0 / (dis0-dis1)
    

    where t is the estimated time of collision (measured in units of dt, since dis was interpolated between two states dt apart). Of course, all of this treats the movement as linear and extrapolates a lot, so it is not accurate, but you can repeat it over more consecutive frames and the result becomes more accurate as the time of collision approaches... Also, to be sure, you should extrapolate the position of each object at the estimated time of collision to verify the result (if they are not actually colliding, the extrapolation was just numerical and the objects did not collide, they only neared each other for a time).
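Putting bullets #3 and #4 together, here is a minimal Python sketch of the distance-based test described above. The positions, velocities and the frame interval are made-up placeholder values; in a real pipeline they would come from the tracked (ideally un-projected 3D) object positions of consecutive frames:

    import numpy as np

    def estimate_collision(pos0, vel0, pos1, vel1, dt=1.0, eps=1e-9):
        """Estimate time to collision of two linearly moving objects.
        Returns the estimated collision time (same units as dt),
        or None if the objects are not nearing each other."""
        pos0, vel0 = np.asarray(pos0, float), np.asarray(vel0, float)
        pos1, vel1 = np.asarray(pos1, float), np.asarray(vel1, float)

        # distance now and distance after a small step dt (bullet #4)
        dis0 = np.linalg.norm(pos1 - pos0)
        dis1 = np.linalg.norm((pos1 + vel1 * dt) - (pos0 + vel0 * dt))

        if dis0 <= dis1 + eps:
            return None  # moving apart, parallel, or standing still -> no collision expected

        # dis(t) = dis0 + (dis1 - dis0) * t, with t counted in steps of dt;
        # solving dis(t) = 0 gives t = dis0 / (dis0 - dis1)
        t_steps = dis0 / (dis0 - dis1)
        return t_steps * dt

    # example usage with made-up numbers: velocities from two consecutive frames (bullet #3)
    frame_interval = 0.04                      # assumed 25 FPS -> dt = 0.04 s
    pA_prev, pA_now = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    pB_prev, pB_now = np.array([10.0, 0.0]), np.array([9.0, 0.0])
    vA = (pA_now - pA_prev) / frame_interval   # vel = ( pos(t+dt) - pos(t) )/dt
    vB = (pB_now - pB_prev) / frame_interval

    print("estimated time to collision:",
          estimate_collision(pA_now, vA, pB_now, vB, dt=frame_interval))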

As mentioned before, the conversion to 3D (bullet #2) is not necessary, but it gets rid of the nonlinearities, so simple linear interpolation/extrapolation can be used later on, which greatly simplifies things.
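As an illustration of bullet #2, one common way to recover a 3D position is to back-project the pixel through a pinhole camera model and intersect the viewing ray with a flat ground plane. The intrinsics, camera height, and the level-camera/flat-ground assumptions below are purely illustrative and not taken from the question:

    import numpy as np

    # assumed pinhole intrinsics and camera setup (illustrative values only)
    fx, fy = 800.0, 800.0      # focal lengths in pixels
    cx, cy = 640.0, 360.0      # principal point
    cam_height = 5.0           # camera height above a flat ground plane, in metres
                               # camera assumed level (no tilt), looking along +Z

    def pixel_to_ground_3d(u, v):
        """Back-project pixel (u, v) onto the ground plane y = +cam_height
        (camera coordinates: x right, y down, z forward). Returns None for
        pixels at or above the horizon, where the ray never hits the ground."""
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # viewing-ray direction
        if ray[1] <= 0:        # ray does not point towards the ground
            return None
        scale = cam_height / ray[1]   # stretch the ray until it reaches the ground plane
        return ray * scale            # 3D point relative to the camera, in metres

    # example: bottom-center pixel of a detected bounding box
    print(pixel_to_ground_3d(700.0, 500.0))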
