Reading a binary file with Python

Question:

I find it particularly difficult to read binary files with Python. Can you give me a hand?
I need to read this file, which in Fortran 90 is easily read by

integer*4 n_particles, n_groups
real*4 group_id(n_particles)
read (10) n_particles, n_groups
read (10) (group_id(j), j=1, n_particles)

In detail, the file format is:

Bytes 1-4 -- The integer 8.
Bytes 5-8 -- The number of particles, N.
Bytes 9-12 -- The number of groups.
Bytes 13-16 -- The integer 8.
Bytes 17-20 -- The integer 4*N.
Next 4*N bytes -- The group ID numbers for all the particles.
Last 4 bytes -- The integer 4*N. 

How can I read this with Python? I have tried everything, but it never worked. Alternatively, could I call an F90 program from Python to read this binary file and then save the data that I need to use?

Asked By: Brian


Answers:

In general, I would recommend that you look into using Python’s struct module for this. It’s standard with Python, and it should be easy to translate your question’s specification into a formatting string suitable for struct.unpack().

Do note that if there’s “invisible” padding between/around the fields, you will need to figure that out and include it in the unpack() call, or you will read the wrong bits.
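
As a quick illustration of why alignment matters (a sketch; the exact sizes depend on your platform), struct.calcsize() shows the padding that native alignment can introduce:

import struct

# '@' (the default) uses native sizes and alignment; '=' uses standard sizes and no padding.
print(struct.calcsize('@IQ'))  # typically 16 on a 64-bit platform (4 padding bytes after the I)
print(struct.calcsize('=IQ'))  # 12: no padding is inserted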

Reading the contents of the file in order to have something to unpack is pretty trivial:

import struct

data = open("from_fortran.bin", "rb").read()

(eight, N) = struct.unpack("@II", data[:8])

This unpacks the first two fields from the first eight bytes of the file (struct.unpack() needs a buffer whose length exactly matches the format, hence the data[:8] slice), assuming there is no padding or extraneous data at the start, and assuming native byte order (the @ symbol). Each I in the format string means "unsigned integer, 32 bits".
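
Putting it together for the layout in your question, a full reader could look something like this (a sketch, not tested against your file: it assumes native byte order, no padding, and that the group IDs are 4-byte floats, as the real*4 declaration suggests):

import struct

with open("from_fortran.bin", "rb") as f:
    data = f.read()

# Header: marker (8), N, number of groups, marker (8), marker (4*N) -- five 4-byte integers
marker1, n_particles, n_groups, marker2, body_len = struct.unpack("@iiiii", data[:20])

# Body: N group IDs stored as real*4
group_ids = struct.unpack("@%df" % n_particles, data[20:20 + 4 * n_particles])

# Trailer: the 4*N marker repeated
(trailer,) = struct.unpack("@i", data[20 + 4 * n_particles:24 + 4 * n_particles])
assert body_len == trailer == 4 * n_particles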

Answered By: unwind

You could use numpy.fromfile, which can read data from both text and binary files. You would first construct a data type that represents your file format using numpy.dtype, and then read this type from the file using numpy.fromfile.
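
For instance, a sketch of that approach for the layout in the question (the field names and file name are illustrative, and '<i4'/'<f4' assume the data is little-endian):

import numpy as np

# The header is five 4-byte integers: marker (8), N, number of groups, marker (8), marker (4*N)
header_type = np.dtype([('m1', '<i4'), ('n_particles', '<i4'),
                        ('n_groups', '<i4'), ('m2', '<i4'), ('m3', '<i4')])

with open('from_fortran.bin', 'rb') as f:
    header = np.fromfile(f, dtype=header_type, count=1)[0]
    n = int(header['n_particles'])
    group_ids = np.fromfile(f, dtype='<f4', count=n)   # the real*4 group IDs
    trailer = np.fromfile(f, dtype='<i4', count=1)[0]  # trailing 4*N marker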

Answered By: Chris

Read the binary file content like this:

with open(fileName, mode='rb') as file: # b is important -> binary
    fileContent = file.read()

then “unpack” binary data using struct.unpack:

The first 20 bytes (the header): struct.unpack("iiiii", fileContent[:20])

The body: ignore the 20 header bytes and the 4 trailing bytes (24 bytes in total). The remaining part forms the body; to get the number of 4-byte values in it, do an integer division by 4, then multiply the format character 'i' by that quotient to build the correct format string for unpack:

struct.unpack("i" * ((len(fileContent) -24) // 4), fileContent[20:-4])

The trailing 4 bytes: struct.unpack("i", fileContent[-4:])
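
Assembled into one snippet (note that the question declares group_id as real*4, so you may want "f" instead of "i" for the body):

import struct

with open(fileName, mode='rb') as file:
    fileContent = file.read()

header = struct.unpack("iiiii", fileContent[:20])        # the five header integers
n_body = (len(fileContent) - 24) // 4                    # number of 4-byte values in the body
body = struct.unpack("i" * n_body, fileContent[20:-4])   # or "f" * n_body for real*4 data
trailer = struct.unpack("i", fileContent[-4:])           # the trailing 4*N marker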

Answered By: gecco

import pickle

# Note: this only works if the file was written with pickle in the first place,
# not for an arbitrary binary format such as the Fortran file above.
with open("filename.dat", "rb") as f:
    try:
        while True:
            x = pickle.load(f)
            print(x)
    except EOFError:
        pass

Answered By: Eeshitri

To read a binary file to a bytes object:

from pathlib import Path
data = Path('/path/to/file').read_bytes()  # Python 3.5+

To create an int from bytes 0-3 of the data:

i = int.from_bytes(data[:4], byteorder='little', signed=False)

To unpack multiple ints from the data:

import struct
ints = struct.unpack('iiii', data[:16])
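
Applied to the file in the question, for example, the particle count N sits in bytes 5-8, so (assuming the data is little-endian):

n_particles = int.from_bytes(data[4:8], byteorder='little', signed=False)
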
Answered By: Eugene Yarmash

I too found Python lacking when it comes to reading and writing binary files, so I wrote a small module (for Python 3.6+).

With binaryfile you’d do something like this (I’m guessing, since I don’t know Fortran):

import binaryfile

def particle_file(f):
    f.array('group_ids')  # Declare group_ids to be an array (so we can use it in a loop)
    f.skip(4)  # Bytes 1-4
    num_particles = f.count('num_particles', 'group_ids', 4)  # Bytes 5-8
    f.int('num_groups', 4)  # Bytes 9-12
    f.skip(8)  # Bytes 13-20
    for i in range(num_particles):
        f.struct('group_ids', '>f')  # 4 bytes x num_particles
    f.skip(4)

with open('myfile.bin', 'rb') as fh:
    result = binaryfile.read(fh, particle_file)
print(result)

Which produces an output like this:

{
    'group_ids': [(1.0,), (0.0,), (2.0,), (0.0,), (1.0,)],
    '__skipped': [b'\x00\x00\x00\x08', b'\x00\x00\x00\x08\x00\x00\x00\x14', b'\x00\x00\x00\x14'],
    'num_particles': 5,
    'num_groups': 3
}

I used skip() to skip the additional data Fortran adds, but you may want to add a utility to handle Fortran records properly instead. If you do, a pull request would be welcome.
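
If you would rather stay with the standard library, a generic reader for Fortran sequential records can be sketched with plain struct (this assumes the common convention, which your layout matches, that every record is framed by its byte length before and after):

import struct

def read_fortran_record(f):
    """Read one unformatted sequential record: length, payload, length."""
    head = f.read(4)
    if not head:
        return None  # end of file
    (length,) = struct.unpack('i', head)
    payload = f.read(length)
    (tail,) = struct.unpack('i', f.read(4))
    assert length == tail, "record markers do not match"
    return payload

with open('myfile.bin', 'rb') as fh:
    n_particles, n_groups = struct.unpack('ii', read_fortran_record(fh))
    group_ids = struct.unpack('%df' % n_particles, read_fortran_record(fh))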

Answered By: Fax
#!/usr/bin/python

import array

# Read five 4-byte floats from the start of the file
data = array.array('f')
with open(r'c:\code\c_code\no1.dat', 'rb') as f:  # raw string, so the backslashes are kept literally
    data.fromfile(f, 5)
print(data)
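
Applied to the file in the question, you could read the 20-byte header into an integer array first and then pull N floats in one call (a sketch, assuming a 4-byte C int and reusing the file name from the earlier answers):

import array

with open('from_fortran.bin', 'rb') as f:
    header = array.array('i')
    header.fromfile(f, 5)               # marker, N, number of groups, marker, marker
    n_particles = header[1]
    group_ids = array.array('f')
    group_ids.fromfile(f, n_particles)  # the real*4 group IDs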

If the data is array-like, I like to use numpy.memmap to load it.

Here’s an example that loads 1000 samples from 64 channels, stored as two-byte integers.

import numpy as np
mm = np.memmap(filename, dtype=np.int16, mode='r', shape=(1000, 64))

You can then slice the data along either axis:

mm[5, :] # sample 5, all channels
mm[:, 5] # all samples, channel 5

All the usual formats are available, including C- and Fortran-order, various dtypes and endianness, etc.

Some advantages of this approach:

  • No data is loaded into memory until you actually use it (that’s what a memmap is for).
  • More intuitive syntax (no need to generate a struct.unpack format string consisting of 64,000 characters).
  • Data can be given any shape that makes sense for your application.

For non-array data (e.g., compiled code), heterogeneous formats ("10 chars, then 3 ints, then 5 floats, …"), or similar, one of the other approaches given above probably makes more sense.
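
For the file in the question you could do something similar by mapping just the body at an offset (a sketch, assuming little-endian data; N has to be read from the header first):

import numpy as np

# The particle count N is the second of the five 4-byte header integers
with open('from_fortran.bin', 'rb') as f:
    n_particles = int(np.fromfile(f, dtype='<i4', count=5)[1])

# Map only the group IDs: they start at byte offset 20 and are 4-byte floats
group_ids = np.memmap('from_fortran.bin', dtype='<f4', mode='r',
                      offset=20, shape=(n_particles,))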

Answered By: cxrodgers