
about matching detections and trackers

Open 3073 opened this issue 5 years ago • 8 comments

Hi, can I take the principle you used for matching detections and trackers, and use dlib to create a new tracker for each unmatched detection?
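Roughly what I have in mind (a minimal sketch using the dlib API; the helper name and the (x1, y1, x2, y2) box format are just my assumptions):

import dlib

def start_new_tracker(frame_rgb, box):
    # box is an (x1, y1, x2, y2) detection that was not matched to any existing tracker
    x1, y1, x2, y2 = box
    tracker = dlib.correlation_tracker()
    tracker.start_track(frame_rgb, dlib.rectangle(int(x1), int(y1), int(x2), int(y2)))
    return tracker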

3073 avatar Mar 15 '19 10:03 3073

Sorry for the late reply. It should be the same principle

kcg2015 avatar Mar 18 '19 01:03 kcg2015

Thank you as always. Normally the detection will run every 30 frames (the frames in between are skipped) to speed up processing, since the detection itself is slow. Anyway, I will post my results soon and ask for help then.


3073 avatar Mar 19 '19 03:03 3073

Cool!

kcg2015 avatar Mar 20 '19 20:03 kcg2015

Note: some of the code here is not necessary for this particular discussion.

def iou(self, a, b):  ### I used your code directly here
    w_intsec = np.maximum(0, (np.minimum(a[2], b[2]) - np.maximum(a[0], b[0])))
    h_intsec = np.maximum(0, (np.minimum(a[3], b[3]) - np.maximum(a[1], b[1])))
    s_intsec = w_intsec * h_intsec
    s_a = (a[2] - a[0]) * (a[3] - a[1])
    s_b = (b[2] - b[0]) * (b[3] - b[1])
    return float(s_intsec) / (s_a + s_b - s_intsec)
def assign_detections_to_trackers(self, trackers, detections, iou_thrd=0.6):  ### I used your code here
    IOU_mat = np.zeros((len(trackers), len(detections)), dtype=np.float32)
    for t, trk in enumerate(trackers):
        for d, det in enumerate(detections):
            IOU_mat[t, d] = self.iou(trk, det)
    # Hungarian assignment on the negated IoU matrix (higher IoU = lower cost)
    matched_idx = linear_assignment(-IOU_mat)
    unmatched_trackers, unmatched_detections = [], []
    unmatched_trackers_box, unmatched_detections_box = [], []
    for t, trk in enumerate(trackers):
        if t not in matched_idx[:, 0]:
            unmatched_trackers.append(t)
            unmatched_trackers_box.append(trk)
    for d, det in enumerate(detections):
        if d not in matched_idx[:, 1]:
            unmatched_detections.append(d)
            unmatched_detections_box.append(det)
    matches = []
    for m in matched_idx:
        # treat low-overlap assignments as unmatched
        if IOU_mat[m[0], m[1]] < iou_thrd:
            unmatched_trackers.append(m[0])
            unmatched_detections.append(m[1])
        else:
            matches.append(m.reshape(1, 2))
    if len(matches) == 0:
        matches = np.empty((0, 2), dtype=int)
    else:
        matches = np.concatenate(matches, axis=0)
    print(matches, unmatched_detections, unmatched_trackers)
    return matches, np.array(unmatched_detections), np.array(unmatched_trackers)
def people(self, frame):  #### start to read the code here
    self.rects = []
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    if self.totalFrames % 20 == 0:  ### run the detection every 20 frames
        points = self.det.get_points(frame)  #### detection model is executed here
        assignation = self.assign_detections_to_trackers(self.prev_points, points, iou_thrd=0.6)
        if len(assignation[0]) > 0:  #### update the previously tracked objects if a match is found
            tracker_list = []
            for trk_idx, det_idx in assignation[0]:
                tmp_trk = self.trackers[trk_idx]
                tracker_list.append(tmp_trk)
            self.update_tracker(tracker_list, frame)
        if len(assignation[2]) > 0:  #### delete any unmatched trackers
            indexes = []
            for trk_idx in assignation[2]:
                indexes.append(trk_idx)
            for index in sorted(indexes, reverse=True):
                del self.trackers[index]
        if len(assignation[1]) > 0:  ### create a new tracker for each newly detected object
            dete_points = []
            for idx in assignation[1]:
                point = points[idx]
                dete_points.append(point)
            self.generate_tracker(frame, dete_points)
    else:  #### if the frame number is not divisible by 20, keep updating the existing trackers
        self.update_tracker(self.trackers, frame)
    self.prev_points = self.rects  ### the updated tracker boxes are kept for computing overlap on the next detection frame
    people = self.finder.update(self.rects)
    ## more code follows here (omitted for this discussion)
    return frame
def update_tracker(self, trackers, frame):
    for tracker in trackers:
        tracker.update(frame)
        pos = tracker.get_position()
        startX = int(pos.left())
        startY = int(pos.top())
        endX = int(pos.right())
        endY = int(pos.bottom())
        rect = (startX, startY, endX, endY)
        self.rects.append(rect)
        cv2.rectangle(frame, (startX, startY), (endX, endY), (250, 250, 250), 4)
    return self.rects

def generate_tracker(self, frame, detections_box):
    for dete_point in detections_box:
        (startX, startY, endX, endY) = dete_point
        tracker = dlib.correlation_tracker()
        rect = dlib.rectangle(int(startX), int(startY), int(endX), int(endY))
        tracker.start_track(frame, rect)
        self.trackers.append(tracker)
        pos = tracker.get_position()
        startX = int(pos.left())
        startY = int(pos.top())
        endX = int(pos.right())
        endY = int(pos.bottom())
        rect = (startX, startY, endX, endY)
        self.rects.append(rect)
        cv2.rectangle(frame, (startX, startY), (endX, endY), (0, 0, 250), 4)
    return self.rects
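One note on linear_assignment: I assume it is imported from sklearn.utils.linear_assignment_, as in your original code. Since that module has been deprecated and removed in newer scikit-learn releases, a small shim around scipy.optimize.linear_sum_assignment can stand in for it. This is only a sketch; the scipy call returns the row and column indices separately, so they are stacked into the N x 2 array that assign_detections_to_trackers above expects:

import numpy as np
from scipy.optimize import linear_sum_assignment

def linear_assignment(cost_matrix):
    # Stand-in for sklearn.utils.linear_assignment_.linear_assignment:
    # solve the assignment problem on the cost matrix and return the matched
    # (tracker, detection) index pairs as an N x 2 array.
    row_ind, col_ind = linear_sum_assignment(cost_matrix)
    return np.stack([row_ind, col_ind], axis=1)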

This code works fine, but I need some help with the following.

Questions to discuss:

1. How do we select the best detection interval (number of skipped frames) so that this works as general-purpose code? For this particular video the best result is obtained with an interval of 20 frames.
2. Since this is a counting application, a person who is missed by the detector on a detection frame (say frame 20) will not be tracked for the next 20 frames (up to frame 40). If that person leaves the scene before frame 40, they are never counted at all, which is a counting error. Do you have a fix for this? One idea I am toying with is sketched below. Many thanks in advance.
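For item 2, one direction I am experimenting with (just a sketch; the wrapper class and counters are placeholders of my own, not code from this repo) is to keep an unmatched tracker alive for up to max_age detection cycles before deleting it, and to only count a track once it has been matched in at least min_hits cycles, similar to the max_age / min_hits bookkeeping in SORT-style trackers:

class TrackedPerson:
    # Hypothetical wrapper around a dlib correlation tracker, used only to
    # illustrate max_age / min_hits bookkeeping (names are placeholders).
    def __init__(self, tracker):
        self.tracker = tracker
        self.hits = 0        # number of detection cycles this track was matched
        self.no_losses = 0   # consecutive detection cycles without a match
        self.counted = False

def prune_and_count(tracks, matched_track_indexes, max_age=3, min_hits=2):
    # Called once per detection cycle: matched tracks accumulate hits and are
    # counted after min_hits; unmatched tracks survive up to max_age cycles
    # before being dropped, so a brief detector miss does not lose the track.
    total_new = 0
    kept = []
    for i, trk in enumerate(tracks):
        if i in matched_track_indexes:
            trk.hits += 1
            trk.no_losses = 0
            if trk.hits >= min_hits and not trk.counted:
                trk.counted = True
                total_new += 1
        else:
            trk.no_losses += 1
        if trk.no_losses <= max_age:
            kept.append(trk)
    return kept, total_new

With this kind of bookkeeping the exact detection interval matters less, because a person only has to be picked up by the detector in a couple of cycles rather than in every one.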

3073 avatar Mar 21 '19 07:03 3073

@3073, sorry for the late reply. Honestly, this seems to me a very difficult problem to solve. I have been struggling to find parameters that work well across a wide range of scenarios.

kcg2015 avatar Mar 24 '19 14:03 kcg2015

thank you....

3073 avatar Mar 24 '19 17:03 3073

@3073, keep me posted if you find a solution.

kcg2015 avatar Mar 25 '19 01:03 kcg2015

ok sir


3073 avatar Mar 25 '19 03:03 3073