The Stanford Cars Dataset is a vehicle image-classification dataset whose annotations are stored in MATLAB .mat format. The dataset can be downloaded from:
https://ai.stanford.edu/~jkrause/cars/car_dataset.html
Loading the .mat file
SciPy is a very popular Python library for scientific computing, and naturally it provides a way to read .mat files. Loading one is easy and takes a single line of code:
from scipy.io import loadmat
annots = loadmat('cars_train_annos.mat')
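As a side note (not needed for the rest of this walkthrough), loadmat also accepts a squeeze_me option that strips out the singleton dimensions you will see below; a minimal sketch:

# Optional: squeeze_me=True removes singleton dimensions such as the (1, 8144)
# wrapper around 'annotations'; the rest of the article uses the default call.
annots_squeezed = loadmat('cars_train_annos.mat', squeeze_me=True)
annots_squeezed['annotations'].shape   # (8144,) instead of (1, 8144)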
Formatting the data
Loading the file with loadmat returns an ordinary Python dictionary, so we can start by inspecting its keys:
annots.keys()
> dict_keys(['__header__', '__version__', '__globals__', 'annotations'])
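The three dunder keys are metadata that loadmat adds about the MAT-file itself; annotations is the actual payload. If you are curious:

type(annots)
> dict
annots['__header__']   # bytes string describing the MAT-file version and creation date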
Below is the documentation that ships with the dataset (the devkit README). It describes each file in more detail and lets us verify that the data loaded in Python is correct.
This file gives documentation for the cars 196 dataset.
(http://ai.stanford.edu/~jkrause/cars/car_dataset.html)
----------------------------------------
Metadata/Annotations
----------------------------------------
Descriptions of the files are as follows:
-cars_meta.mat:
Contains a cell array of class names, one for each class.
-cars_train_annos.mat:
Contains the variable 'annotations', which is a struct array of length
num_images and where each element has the fields:
bbox_x1: Min x-value of the bounding box, in pixels
bbox_x2: Max x-value of the bounding box, in pixels
bbox_y1: Min y-value of the bounding box, in pixels
bbox_y2: Max y-value of the bounding box, in pixels
class: Integral id of the class the image belongs to.
fname: Filename of the image within the folder of images.
-cars_test_annos.mat:
Same format as 'cars_train_annos.mat', except the class is not provided.
----------------------------------------
Submission file format
----------------------------------------
Files for submission should be .txt files with the class prediction for
image M on line M. Note that image M corresponds to the Mth annotation in
the provided annotation file. An example of a file in this format is
train_perfect_preds.txt.
Included in the devkit is a script for evaluating training accuracy,
eval_train.m. Usage is:
(in MATLAB)
>> [accuracy, confusion_matrix] = eval_train('train_perfect_preds.txt')
If your training predictions work with this function then your testing
predictions should be good to go for the evaluation server, assuming
that they’re in the same format as your training predictions.
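eval_train.m is a MATLAB script, but the same check is easy to sketch in Python once the annotations are loaded (the field access used here is explained below). This is only a rough equivalent, assuming train_perfect_preds.txt holds one integer class id per line, aligned with the annotation order:

import numpy as np

# Rough Python sketch of the eval_train.m idea (not the official script):
# compare a predictions file against the ground-truth classes in the .mat file.
preds = np.loadtxt('train_perfect_preds.txt', dtype=int)
labels = np.array([item['class'].flat[0] for item in annots['annotations'][0]])
accuracy = (preds == labels).mean()
print(f'Training accuracy: {accuracy:.4f}')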
As the documentation shows, the annotations variable contains exactly the structured data we want: the class label, the image filename, and the bounding-box coordinates. All that remains is to process the annotations variable and pull those fields out.
type(annots['annotations']), annots['annotations'].shape
> (numpy.ndarray, (1, 8144))
type(annots['annotations'][0][0]), annots['annotations'][0][0].shape
> (numpy.void, ())
The data extracted from the .mat file is stored as a numpy.ndarray, and each item inside this array is a numpy.void record whose fields can be accessed by name:
annots['annotations'][0][0]['bbox_x1'], annots['annotations'][0][0]['fname']
> (array([[39]], dtype=uint8), array(['00001.jpg'], dtype='<U9'))
Next, we loop over a single annotation record and pull each field's value out of its nested array into a plain list; item.flat[0] extracts the scalar (or string) wrapped inside each field:
[item.flat[0] for item in annots['annotations'][0][0]]
> [39, 116, 569, 375, 14, '00001.jpg']
Converting the data to a Pandas DataFrame
Now that the MATLAB file is loaded in Python, we convert the data into a pandas DataFrame to make further processing easier. The conversion is simple:
import pandas as pd

data = [[row.flat[0] for row in line] for line in annots['annotations'][0]]
columns = ['bbox_x1', 'bbox_y1', 'bbox_x2', 'bbox_y2', 'class', 'fname']
df_train = pd.DataFrame(data, columns=columns)
After conversion, the data can be inspected as a regular table:
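df_train.head()
#    bbox_x1  bbox_y1  bbox_x2  bbox_y2  class      fname
# 0       39      116      569      375     14  00001.jpg
# ... (first row shown, matching the values extracted above)

The documentation above also mentions cars_meta.mat, which contains one class name per class. A minimal sketch of joining it in, assuming the variable inside that file is named class_names (check meta.keys() if it differs) and that class ids are 1-based, as is usual in MATLAB:

meta = loadmat('cars_meta.mat')
class_names = [c.flat[0] for c in meta['class_names'][0]]   # variable name assumed
df_train['class_name'] = df_train['class'].apply(lambda i: class_names[i - 1])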
References
https://towardsdatascience.com/how-to-load-matlab-mat-files-in-python-1f200e1287b5