I recommend reading reference [1] first and following along with this article.
1. Introduction
The previous article covered testing and training FairMOT, and the test results were good. This article digs into how FairMOT's training and inference actually work.
2. Training
FairMOT follows a detection + tracking design: detection uses CenterNet, and tracking follows DeepSORT, but the two tasks are trained end to end. Training itself is relatively simple.
Initial question
- How are detection and tracking trained together end to end?
2.1 Model structure
First, the model diagram from the paper.
The figure below is my own drawing of the output heads, showing the internals in more detail.
2.2 Details
- Detection part
  For the detection part, see my code walkthrough of CenterNet: Objects as Points.
- ReID part
  The paper treats ReID as a classification task. Suppose the training set contains 10,000 tracked ids (an id is one object from appearance to disappearance, regardless of its class). For every point that has an object, the ReID branch applies a linear layer Linear(512 × 10000) to its feature and computes id_loss with cross entropy.
- Loss part
  See my figure above.
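The ReID-as-classification idea above can be sketched in a few lines of PyTorch. This is a minimal sketch with my own variable names, not FairMOT's code; emb_dim=512 and num_ids=10000 follow the numbers in the text.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the ReID branch as a classification task.
emb_dim, num_ids = 512, 10000
id_classifier = nn.Linear(emb_dim, num_ids)  # the Linear(512 x 10000) layer
id_loss_fn = nn.CrossEntropyLoss()

# Suppose 8 ground-truth object centers in a batch, each with a 512-d feature
# gathered from the ReID feature map at its center point.
feats = torch.randn(8, emb_dim)
gt_ids = torch.randint(0, num_ids, (8,))

logits = id_classifier(feats)          # shape (8, 10000)
id_loss = id_loss_fn(logits, gt_ids)   # scalar id_loss used during training
```

At inference time the classifier is discarded; only the 512-d embedding is kept as the appearance feature for matching.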
3. Inference
Inference is the harder part. Its core follows DeepSORT; reference [1] explains it well.
Prerequisites:
- Kalman filter
- Hungarian algorithm
- The DeepSORT pipeline
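The Hungarian step that appears repeatedly below can be sketched with SciPy's `linear_sum_assignment`. This is my own simplified version of what `matching.linear_assignment` does in the repo, not the repo's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def linear_assignment(cost, thresh):
    """Return matched (row, col) pairs plus unmatched row/col indices,
    dropping any pair whose cost exceeds thresh."""
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= thresh]
    matched_r = {m[0] for m in matches}
    matched_c = {m[1] for m in matches}
    u_rows = [r for r in range(cost.shape[0]) if r not in matched_r]
    u_cols = [c for c in range(cost.shape[1]) if c not in matched_c]
    return matches, u_rows, u_cols

# 3 trackers vs 2 detections: tracker 0 pairs with detection 0,
# tracker 1 with detection 1, tracker 2 stays unmatched.
cost = np.array([[0.1, 0.9],
                 [0.8, 0.2],
                 [0.9, 0.9]])
matches, u_track, u_detection = linear_assignment(cost, thresh=0.7)
# matches -> [(0, 0), (1, 1)]; u_track -> [2]; u_detection -> []
```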
3.1 Variables
State variables
- activated: whether the track is activated; used to filter out single-frame false detections
- track_state: tracking state, one of tracked, lost, removed
Four containers
- unconfirmed_stracks (activated=F, track_state=tracked): targets seen only once (likely detector false positives)
- activated_stracks (activated=T, track_state=tracked): trackers in a good tracking state
- lost_stracks (activated=T, track_state=lost): activated, but currently lost
- refind_stracks (activated=T, track_state=lost→tracked): lost targets that have been re-found
One tracker structure
- tracker[ activated (activation flag), tracked (tracking state), mean (Kalman mean vector), covariance (Kalman covariance matrix), smooth_feat (the tracker's appearance feature) ]
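As a mock-up of these fields (my own minimal stand-in; the real STrack class in FairMOT carries more state, e.g. the Kalman filter itself and the frame counters):

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Track:
    """Minimal stand-in for the tracker structure listed above."""
    activated: bool = False                      # has it survived more than one frame?
    state: str = "tracked"                       # "tracked" / "lost" / "removed"
    mean: Optional[np.ndarray] = None            # Kalman mean vector
    covariance: Optional[np.ndarray] = None      # Kalman covariance matrix
    smooth_feat: Optional[np.ndarray] = None     # EMA-smoothed appearance feature

t = Track()
t.state = "lost"  # e.g. after failing both the appearance and the IoU matching
```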
3.2 Step-by-step walkthrough (code + commentary)
1. First frame: initialization
- Detect the targets (x, y, w, h) with CenterNet
- Initialize one tracker per target, giving n trackers
- Put all n trackers into the activated_stracks container
```python
for inew in u_detection:  # initialize an unconfirmed tracker for every detection left unmatched by cosine/IoU/unconfirmed matching
    track = detections[inew]
    if track.score < self.det_thresh:
        continue
    track.activate(self.kalman_filter, self.frame_id)  # activate the track; activated=True only on the first frame, False otherwise
    activated_starcks.append(track)
```
Frame 2 onward
2. Match the new detections by appearance + motion distance
- Run detection on frame 2 to get detections (these can also be initialized into trackers)
- Merge [activated_stracks, lost_stracks] into strack_pool
- Compute the appearance cost matrix between the detections and strack_pool, i.e. the cosine distance between their feats
- Use the Kalman filter to predict a new mean and covariance for every tracker in strack_pool
- Compute the motion distance between strack_pool and the detections, and set entries of the appearance cost matrix above the distance threshold to inf
- Run the Hungarian algorithm on the fused cost
- Matched:
  - track_state == tracked: update smooth_feat and the Kalman state (mean, covariance), add to activated_stracks
  - track_state == lost: update smooth_feat and the Kalman state (mean, covariance), add to refind_stracks
- Unmatched:
  - Collect the leftovers into new detections and r_tracked_stracks
Corresponding code:
```python
strack_pool = joint_stracks(tracked_stracks, self.lost_stracks)
dists = matching.embedding_distance(strack_pool, detections)  # cosine distance between the new detections and the pooled trackers
STrack.multi_predict(strack_pool)  # Kalman prediction step
dists = matching.fuse_motion(self.kalman_filter, dists, strack_pool, detections)  # fuse in the Kalman (motion) distance between detections and strack_pool
matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.7)  # Hungarian matching of tracks to detections; u_track holds the unmatched tracker indices
for itracked, idet in matches:  # matches: N*2; column 0 is the tracker index, column 1 is the detection index
    track = strack_pool[itracked]
    det = detections[idet]
    if track.state == TrackState.Tracked:
        track.update(det, self.frame_id)  # matched tracker + detection: update the feature and the Kalman state
        activated_starcks.append(track)
    else:
        track.re_activate(det, self.frame_id, new_id=False)  # the tracker was in lost_stracks: re-activate it
        refind_stracks.append(track)
```
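`matching.embedding_distance` above boils down to a cosine distance between tracker and detection features. A minimal stand-alone version (my own simplification, not the repo's implementation) looks like this:

```python
import numpy as np

def embedding_distance(track_feats, det_feats):
    """Cosine distance matrix: 0 = same direction, up to 2 = opposite."""
    a = np.asarray(track_feats, dtype=float)
    b = np.asarray(det_feats, dtype=float)
    a = a / np.linalg.norm(a, axis=1, keepdims=True)  # L2-normalize rows
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T  # shape: (num_tracks, num_detections)

tracks = [[1.0, 0.0], [0.0, 1.0]]
dets = [[1.0, 0.0]]
dists = embedding_distance(tracks, dets)
# dists[0, 0] == 0.0 (identical feature), dists[1, 0] == 1.0 (orthogonal)
```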
- Match the remaining detections against r_tracked_stracks by IoU
  - Compute the IoU cost matrix between the detections and r_tracked_stracks
  - Run the Hungarian algorithm on the IoU cost
  - Matched:
    - Update smooth_feat and the Kalman state (mean, covariance), add to activated_stracks (r_tracked_stracks only contains trackers with track_state == tracked at this point, so the refind branch in the code below never fires)
  - Unmatched:
    - Set the track_state of the unmatched r_tracked_stracks to lost
    - Carry the unmatched detections over to the next step
Corresponding code:
```python
detections = [detections[i] for i in u_detection]  # u_detection: indices of detections left unmatched so far
r_tracked_stracks = [strack_pool[i] for i in u_track if strack_pool[i].state == TrackState.Tracked]
dists = matching.iou_distance(r_tracked_stracks, detections)
matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.5)
for itracked, idet in matches:
    track = r_tracked_stracks[itracked]
    det = detections[idet]
    if track.state == TrackState.Tracked:
        track.update(det, self.frame_id)
        activated_starcks.append(track)
    else:
        track.re_activate(det, self.frame_id, new_id=False)  # unreachable: r_tracked_stracks was filtered to TrackState.Tracked above
        refind_stracks.append(track)
for it in u_track:
    track = r_tracked_stracks[it]
    if not track.state == TrackState.Lost:
        track.mark_lost()
        lost_stracks.append(track)  # mark trackers that failed the IoU match as lost
```
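`matching.iou_distance` above builds its cost matrix from bounding-box overlap. A minimal stand-alone version (my own, assuming (x1, y1, x2, y2) boxes, not the repo's implementation):

```python
import numpy as np

def iou(b1, b2):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (area1 + area2 - inter)

def iou_distance(tracks, dets):
    """Cost matrix: 1 - IoU, so a perfect overlap costs 0."""
    return np.array([[1.0 - iou(t, d) for d in dets] for t in tracks])

cost = iou_distance([(0, 0, 10, 10)], [(0, 0, 10, 10), (20, 20, 30, 30)])
# cost -> [[0.0, 1.0]]: full overlap vs. no overlap
```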
- Match the detections left over from the previous step against unconfirmed_stracks by IoU
  - Compute the IoU cost
  - Run the Hungarian algorithm
  - Matched:
    - Update the unconfirmed tracker: smooth_feat and the Kalman state (mean, covariance), add to activated_stracks
  - Unmatched:
    - Move the unmatched unconfirmed_stracks straight into removed_stracks
    - Carry the unmatched detections over to the next step
Corresponding code:
```python
detections = [detections[i] for i in u_detection]  # match the detections left unmatched by cosine/IoU against the unconfirmed trackers
dists = matching.iou_distance(unconfirmed, detections)
matches, u_unconfirmed, u_detection = matching.linear_assignment(dists, thresh=0.7)
for itracked, idet in matches:
    unconfirmed[itracked].update(detections[idet], self.frame_id)
    activated_starcks.append(unconfirmed[itracked])
for it in u_unconfirmed:
    track = unconfirmed[it]
    track.mark_removed()
    removed_stracks.append(track)
```
- Initialize the detections that are still left over as new trackers in unconfirmed_stracks
```python
for inew in u_detection:  # initialize an unconfirmed tracker for every detection unmatched by cosine/IoU/unconfirmed matching
    track = detections[inew]
    if track.score < self.det_thresh:
        continue
    track.activate(self.kalman_filter, self.frame_id)  # activate the track; activated=True only on the first frame, False otherwise
    activated_starcks.append(track)
```
- Remove trackers whose track_state has stayed lost for 15 consecutive frames
```python
for track in self.lost_stracks:
    if self.frame_id - track.end_frame > self.max_time_lost:  # gone for more than max_time_lost (15) frames
        track.mark_removed()
        removed_stracks.append(track)
```
4. Finally, the results
FairMOT multi-object tracking (demo video)
5. TO DO
- In principle the model supports multi-class, multi-object tracking; I will find time to implement that.