Can python read matlab files?

There is a nice package called mat4py which can easily be installed using

pip install mat4py

It is straightforward to use (from the website):

Load data from a MAT-file

The function loadmat loads all variables stored in the MAT-file into a simple Python data structure, using only Python’s dict and list objects. Numeric and cell arrays are converted to row-ordered nested lists. Arrays are squeezed to eliminate arrays with only one element. The resulting data structure is composed of simple types that are compatible with the JSON format.

Example: Load a MAT-file into a Python data structure:

from mat4py import loadmat

data = loadmat('datafile.mat')

The variable data is a dict with the variables and values contained in the MAT-file.
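
For instance (this is a small sketch, not from the mat4py docs, and it assumes the file contains a variable named measurements), the returned dict can be inspected and, since it is built only from JSON-compatible types, dumped straight to JSON:

from mat4py import loadmat
import json

data = loadmat('datafile.mat')       # dict mapping variable names to values
print(list(data.keys()))             # e.g. ['measurements'] (hypothetical variable)
print(json.dumps(data, indent=2))    # works because everything is plain dicts/lists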

Save a Python data structure to a MAT-file

Python data can be saved to a MAT-file, with the function savemat. Data has to be structured in the same way as for loadmat, i.e. it should be composed of simple data types, like dict, list, str, int, and float.

Example: Save a Python data structure to a MAT-file:

from mat4py import savemat

savemat('datafile.mat', data)

The parameter data shall be a dict with the variables.
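
As a minimal sketch (the variable names below are made up for illustration), the dict can be assembled from plain Python types and handed to savemat:

from mat4py import savemat

data = {
    'samples': [[1.0, 2.0], [3.0, 4.0]],  # row-ordered nested list -> numeric array
    'label': 'experiment_1',              # plain string value
}
savemat('datafile.mat', data)             # each top-level key is saved as a variable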

Matlab is a really popular platform for scientific computing in academia. I’ve used it throughout my engineering degree and, chances are, you will come across .mat files in datasets released by universities.

This is a brief post explaining how to load these files using Python, the most popular language for machine learning today.

The data

I wanted to build a classifier for detecting cars of different models and makes, so the Stanford Cars Dataset appeared to be a great starting point. Coming from academia, the annotations for the dataset were in the .mat format. You can get the file used in this post here.

Loading .mat files

SciPy is a really popular Python library for scientific computing and, quite naturally, it has a function that lets you read in .mat files. Reading them in is definitely the easy part. You can get it done in one line of code:

from scipy.io import loadmat
annots = loadmat('cars_train_annos.mat')

Well, it’s really that simple. But let’s go on and actually try to get the data we need out of this dictionary.

Formatting the data

The loadmat method returns a more familiar data structure, a Python dictionary. If we peek into its keys, we’ll see how at home we feel now compared to dealing with a .mat file:

annots.keys()
> dict_keys(['__header__', '__version__', '__globals__', 'annotations'])
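
The keys wrapped in double underscores are metadata that scipy adds about the MAT-file itself; the actual MATLAB variables are everything else. A quick way to separate the two (a small sketch, not part of the dataset tooling):

meta = {k: v for k, v in annots.items() if k.startswith('__')}
data_vars = {k: v for k, v in annots.items() if not k.startswith('__')}
print(list(data_vars))   # ['annotations']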

Looking at the documentation for this dataset, we’ll get to learn what this is really made of. The README.txt gives us the following information:

This file gives documentation for the cars 196 dataset.
[//ai.stanford.edu/~jkrause/cars/car_dataset.html]
— — — — — — — — — — — — — — — — — — — —
Metadata/Annotations
— — — — — — — — — — — — — — — — — — — —
Descriptions of the files are as follows:
-cars_meta.mat:
Contains a cell array of class names, one for each class.
-cars_train_annos.mat:
Contains the variable ‘annotations’, which is a struct array of length
num_images and where each element has the fields:
bbox_x1: Min x-value of the bounding box, in pixels
bbox_x2: Max x-value of the bounding box, in pixels
bbox_y1: Min y-value of the bounding box, in pixels
bbox_y2: Max y-value of the bounding box, in pixels
class: Integral id of the class the image belongs to.
fname: Filename of the image within the folder of images.
-cars_test_annos.mat:
Same format as ‘cars_train_annos.mat’, except the class is not provided.
— — — — — — — — — — — — — — — — — — — —
Submission file format
— — — — — — — — — — — — — — — — — — — —
Files for submission should be .txt files with the class prediction for
image M on line M. Note that image M corresponds to the Mth annotation in
the provided annotation file. An example of a file in this format is
train_perfect_preds.txt
Included in the devkit are a script for evaluating training accuracy,
eval_train.m. Usage is:
(in MATLAB)
>> [accuracy, confusion_matrix] = eval_train('train_perfect_preds.txt')
If your training predictions work with this function then your testing
predictions should be good to go for the evaluation server, assuming
that they’re in the same format as your training predictions.

Our interest is in the 'annotations' variable, as it contains our class labels and bounding boxes. It’s a struct, a data type that will be very familiar to folks coming from a strongly typed language like a flavour of C or Java.

A little digging into the object gives us some interesting things to work with:

type(annots['annotations']), annots['annotations'].shape
> (numpy.ndarray, (1, 8144))
type(annots['annotations'][0][0]), annots['annotations'][0][0].shape
> (numpy.void, ())

The annotations are stored in a numpy.ndarray, but the items inside this array have the type numpy.void, and numpy doesn’t seem to know their shape.
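
The shape looks empty because each item is a single record of a structured array; the struct's field names are carried on the dtype instead. A quick way to list them (a sketch, the exact ordering of the fields depends on the file):

# The struct fields (bounding box coordinates, class id, filename) are
# exposed as field names on the dtype rather than as a shape.
print(annots['annotations'].dtype.names)
print(annots['annotations'][0][0].dtype.names)   # same names on a single record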

The documentation page for the loadmat method tells us how it loads MATLAB structs into numpy structured arrays. You can access the members of the structs using their keys:

annots['annotations'][0][0]['bbox_x1'], annots['annotations'][0][0]['fname']
> (array([[39]], dtype=uint8), array(['00001.jpg'], dtype='<U9'))
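
Putting this together, the whole structured array can be unpacked into ordinary Python values. This is a sketch that assumes every field of each record wraps a single value in a small array, which matches the annotation format described in the README above:

records = annots['annotations'][0]          # the 8,144 annotation records
rows = []
for rec in records:
    rows.append({
        'fname': rec['fname'].item(),       # e.g. '00001.jpg'
        'class': int(rec['class'].item()),  # integral class id
        'bbox_x1': int(rec['bbox_x1'].item()),
        'bbox_y1': int(rec['bbox_y1'].item()),
        'bbox_x2': int(rec['bbox_x2'].item()),
        'bbox_y2': int(rec['bbox_y2'].item()),
    })
print(rows[0])                              # one plain Python dict per image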
