Judging from the Apollo 2.0 code, Baidu intends to fuse lidar and radar. First, look at the definitions of the sensor output data.

class SensorRawFrame {
 public:
  SensorType sensor_type_;
  double timestamp_;
  Eigen::Matrix4d pose_;
};

// Lidar output frame. A Velodyne product is used; the frame holds one point cloud in PCL format.
class VelodyneRawFrame : public SensorRawFrame {
 public:
  VelodyneRawFrame() {}
  ~VelodyneRawFrame() {}

 public:
  pcl_util::PointCloudPtr cloud_;
};

// Radar output frame. A Continental product is used; the frame holds a ContiRadar member.
class RadarRawFrame : public SensorRawFrame {
 public:
  RadarRawFrame() {}
  ~RadarRawFrame() {}

 public:
  ContiRadar raw_obstacles_;
  Eigen::Vector3f car_linear_speed_;
};

ContiRadar is defined in Conti_Radar.proto:

message ContiRadar {
  optional apollo.common.Header header = 1;
  repeated ContiRadarObs contiobs = 2;  // conti radar obstacle array
  optional RadarState_201 radar_state = 3;
  optional ClusterListStatus_600 cluster_list_status = 4;
  optional ObjectListStatus_60A object_list_status = 5;
}

ContiRadarObs is the most important part: it represents a single target reported by the radar. It is also defined in Conti_Radar.proto:

message ContiRadarObs {
  optional apollo.common.Header header = 1;
  optional bool clusterortrack = 2;  // 0 = track, 1 = cluster
  optional int32 obstacle_id = 3;    // obstacle Id
  required double longitude_dist = 4;
  required double lateral_dist = 5;
  required double longitude_vel = 6;
  required double lateral_vel = 7;
  optional double rcs = 8;
  optional int32 dynprop = 9;
  optional double longitude_dist_rms = 10;
  optional double lateral_dist_rms = 11;
  optional double longitude_vel_rms = 12;
  optional double lateral_vel_rms = 13;
  optional double probexist = 14;

  // The following is only valid for the track object message
  optional int32 meas_state = 15;
  optional double longitude_accel = 16;
  optional double lateral_accel = 17;
  optional double oritation_angle = 18;
  optional double longitude_accel_rms = 19;
  optional double lateral_accel_rms = 20;
  optional double oritation_angle_rms = 21;
  optional double length = 22;
  optional double width = 23;
  optional int32 obstacle_class = 24;
}

It is worth noting that Baidu added an rcs field, although the 2.0 code does not use it yet.
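As a quick illustration of how this message is consumed downstream, the sketch below walks a ContiRadar with the accessors that protobuf generates from the definition above and dumps each obstacle's raw measurements. The include path and the apollo::drivers namespace are assumptions about the generated code, and the dumping itself is purely illustrative, not part of the Apollo pipeline.

#include <cstdio>

#include "modules/drivers/proto/conti_radar.pb.h"  // assumed location of the generated header

// Dump the raw measurements of every radar obstacle in a ContiRadar message.
// contiobs_size() / contiobs(i) are the standard accessors protobuf generates
// for the repeated ContiRadarObs field.
void DumpContiRadar(const apollo::drivers::ContiRadar& raw_obstacles) {
  for (int i = 0; i < raw_obstacles.contiobs_size(); ++i) {
    const auto& obs = raw_obstacles.contiobs(i);
    std::printf("id=%d  x=%.2f m  y=%.2f m  vx=%.2f m/s  vy=%.2f m/s  rcs=%.1f\n",
                obs.obstacle_id(), obs.longitude_dist(), obs.lateral_dist(),
                obs.longitude_vel(), obs.lateral_vel(), obs.rcs());
  }
}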

The entry function does two things: 1. handle lidar and radar separately according to sensor_type_; 2. fuse the lidar and radar results. This post only analyzes the radar path; lidar and fusion will be covered in a later post.

bool ObstaclePerception::Process(SensorRawFrame* frame,
                                 std::vector<ObjectPtr>* out_objects) {
  std::shared_ptr<SensorObjects> sensor_objects(new SensorObjects());
  if (frame->sensor_type_ == VELODYNE_64) {
    // lidar processing, omitted in this post
  } else if (frame->sensor_type_ == RADAR) {
    // radar processing
    RadarRawFrame* radar_frame = dynamic_cast<RadarRawFrame*>(frame);
    RadarDetectorOptions options;
    options.radar2world_pose = &(radar_frame->pose_);
    options.car_linear_speed = radar_frame->car_linear_speed_;
    std::vector<ObjectPtr> objects;
    std::vector<PolygonDType> map_polygons;
    if (!radar_detector_->Detect(radar_frame->raw_obstacles_, map_polygons,
                                 options, &objects)) {
      AERROR << "Radar perception error!, " << std::fixed
             << std::setprecision(12) << radar_frame->timestamp_;
      return false;
    }
    sensor_objects->objects = objects;
    AINFO << "radar objects size: " << objects.size();
    PERF_BLOCK_END("radar_detection");
    // set frame content
    if (FLAGS_enable_visualization && FLAGS_show_radar_obstacles) {
      frame_content_.SetTrackedObjects(sensor_objects->objects);
    }
  }
  // fusion of the lidar and radar results (which fills out_objects) follows here, omitted in this post
}

Radar processing has two steps:
1. call bool ModestRadarDetector::Detect, which outputs the objects after detection and tracking;
2. save the objects into frame_content_ for fusion.
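For context, here is a hedged sketch of how a caller might fill a RadarRawFrame and hand it to this entry point. The member names come from the definitions above; the surrounding setup (the pose and speed values, and how the ObstaclePerception instance and the ContiRadar message are obtained) is illustrative only and not taken from the Apollo code.

// Illustrative only: build a radar frame and run it through Process().
// Both arguments are assumed to come from elsewhere (init code, driver callback).
bool RunRadarFrame(ObstaclePerception* obstacle_perception,
                   const ContiRadar& conti_radar_msg) {
  RadarRawFrame radar_frame;
  radar_frame.sensor_type_ = RADAR;                 // selects the radar branch
  radar_frame.timestamp_ = 1513000000.123456;       // example timestamp in seconds
  radar_frame.pose_ = Eigen::Matrix4d::Identity();  // radar-to-world pose
  radar_frame.car_linear_speed_ = Eigen::Vector3f(10.0f, 0.0f, 0.0f);  // ego speed, m/s
  radar_frame.raw_obstacles_ = conti_radar_msg;     // ContiRadar from the driver

  std::vector<ObjectPtr> out_objects;
  return obstacle_perception->Process(&radar_frame, &out_objects);
}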

bool ModestRadarDetector::Detect does three things:

1. Call void ObjectBuilder::Build(const ContiRadar &raw_obstacles, const Eigen::Matrix4d &radar_pose, const Eigen::Vector2d &main_velocity, SensorObjects *radar_objects).
This Build function constructs Object instances from the raw_obstacles data; the Object definition is shared by lidar and radar. It does several important things:
1. Transform the radar coordinates into world coordinates (via radar2world_pose) and convert the relative velocity into an absolute velocity (a sketch of these two conversions is given after this walkthrough).
2. Decide whether the target is_background; the conditions for this are interesting.
Case 1: count the appearances; a target that has appeared no more than delay_frames_ times is background:

if (current_con_ids[obstacle_id] <= delay_frames_) {
  object_ptr->is_background = true;
}

Case 2: ContiRadarUtil::IsFp(raw_obstacles.contiobs(i), conti_params_, delay_frames_, tracking_times) flags the target as a false positive.
Case 3: compare the angle between the target velocity and the vehicle velocity; targets whose angle falls in (1/4*Pi, 3/4*Pi) or (-3/4*Pi, -1/4*Pi) are not background:

if (ContiRadarUtil::IsConflict(ref_velocity, object_ptr->velocity.cast<float>())) {
  object_ptr->is_background = true;
}

3. Compute the target's bounding rectangle. Radar provides no shape information, so Baidu simply sets the length, width, and height to 1 (a gripe):

RadarUtil::MockRadarPolygon(point, object_ptr->length, object_ptr->width, theta, &(object_ptr->polygon));
2. Call void RadarTrackManager::Process(const SensorObjects &radar_obs):

void RadarTrackManager::Process(const SensorObjects &radar_obs) {
  radar_obs_ = radar_obs;
  Update(&radar_obs_);
}

The Update function does five things:
1. AssignTrackObsIdMatch(*radar_obs, &assignment, &unassigned_track, &unassigned_obs);
A track and an observation with the same id and a distance of less than 2.5 m are treated as the same target. Baidu does not compute this distance with a one-step Kalman prediction as the lidar pipeline does; it simply predicts with current velocity * time difference (a gripe; a sketch of this rule is given at the end of this post).
2. UpdateAssignedTrack(*radar_obs, assignment);
Update each track with its matched object. Baidu keeps no history of the radar track's past points, whereas the lidar tracker does.
3. UpdateUnassignedTrack((*radar_obs).timestamp, unassigned_track);
For a track with no matched object, if no object has been matched for 0.06 s, its object is set to null.
4. DeleteLostTrack();
Delete the null tracks.
5. CreateNewTrack(*radar_obs, unassigned_obs);
Create a new track for every object that was not matched to any existing track.

3. Call bool ModestRadarDetector::CollectRadarResult(std::vector<ObjectPtr> *objects).
The non-background tracks are output to objects.
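As promised in step 1 above, here is a minimal sketch of the two conversions performed while building an Object, assuming radar2world_pose is the 4x4 pose passed in through RadarDetectorOptions and main_velocity is the ego velocity. The helper name and signature are mine, not Apollo's.

#include <Eigen/Dense>

// Hypothetical helper mirroring what ObjectBuilder::Build has to do for one
// ContiRadarObs: move the measurement into world coordinates and turn the
// measured relative velocity into an absolute velocity.
void RadarObsToWorld(const Eigen::Matrix4d& radar2world_pose,
                     const Eigen::Vector2d& main_velocity,  // ego (x, y) velocity
                     double longitude_dist, double lateral_dist,
                     double longitude_vel, double lateral_vel,
                     Eigen::Vector3d* position, Eigen::Vector3d* velocity) {
  // Position: homogeneous transform of the point measured in the radar frame.
  const Eigen::Vector4d local(longitude_dist, lateral_dist, 0.0, 1.0);
  const Eigen::Vector4d world = radar2world_pose * local;
  *position = world.head<3>();

  // Velocity: rotate the relative velocity into the world frame, then add the
  // ego velocity so that the stored velocity is absolute.
  const Eigen::Matrix3d rotation = radar2world_pose.topLeftCorner<3, 3>();
  *velocity = rotation * Eigen::Vector3d(longitude_vel, lateral_vel, 0.0);
  velocity->head<2>() += main_velocity;
}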

To summarize:

  1. rcs is reserved but not used.
  2. The prediction used before the distance computation is too simple.
  3. The radar target's length, width, and height are hard-coded to 1.0.
  4. The map_polygons variable is reserved for the ROI but is currently empty; the ROI is not obtained from the HD map the way it is for lidar.
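To make the matching and track maintenance above (and summary point 2) concrete, here is a minimal sketch that assumes a track only stores the last associated id, position, velocity, and timestamp. SimpleRadarTrack and the function names are mine; only the id-equality test, the current-velocity * time-difference prediction, the 2.5 m gate, and the 0.06 s expiry come from the code discussed above.

#include <Eigen/Dense>

// Illustrative track state; Apollo's RadarTrack stores the full Object and more.
struct SimpleRadarTrack {
  int obstacle_id = -1;
  Eigen::Vector3d position = Eigen::Vector3d::Zero();
  Eigen::Vector3d velocity = Eigen::Vector3d::Zero();
  double timestamp = 0.0;
  bool lost = false;  // DeleteLostTrack() would erase tracks marked like this
};

// Matching rule of AssignTrackObsIdMatch: same radar id, and the observation
// lies within 2.5 m of the position extrapolated with the current velocity
// (no one-step Kalman prediction).
bool Matches(const SimpleRadarTrack& track, int obs_id,
             const Eigen::Vector3d& obs_position, double obs_timestamp) {
  if (track.obstacle_id != obs_id) {
    return false;
  }
  const double dt = obs_timestamp - track.timestamp;
  const Eigen::Vector3d predicted = track.position + track.velocity * dt;
  return (obs_position - predicted).norm() < 2.5;  // meters
}

// Assigned tracks are simply overwritten with the new observation (no history
// of past radar points is kept); an unassigned track that has seen no match
// for more than 0.06 s is marked lost.
void UpdateOrExpire(bool assigned, int obs_id,
                    const Eigen::Vector3d& obs_position,
                    const Eigen::Vector3d& obs_velocity,
                    double now, SimpleRadarTrack* track) {
  if (assigned) {
    track->obstacle_id = obs_id;
    track->position = obs_position;
    track->velocity = obs_velocity;
    track->timestamp = now;
  } else if (now - track->timestamp > 0.06) {
    track->lost = true;
  }
}

Observations that match no track would then seed new SimpleRadarTrack instances, mirroring CreateNewTrack.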