CSV to AVRO using python

Question:

I have the following csv :

field1;field2;field3;field4;field5;field6;field7;field8;field9;field10;field11;field12;
eu;4523;35353;01/09/1999; 741 ; 386 ; 412 ; 86 ; 1.624 ; 1.038 ; 469 ; 117 ;

and I want to convert it to avro. I have created the following avro schema:

{"namespace": "forecast.avro",
 "type": "record",
 "name": "forecast",
 "fields": [
     {"name": "field1", "type": "string"},
     {"name": "field2", "type": "string"},
     {"name": "field3", "type": "string"},
     {"name": "field4", "type": "string"},
     {"name": "field5", "type": "string"},
     {"name": "field6", "type": "string"},
     {"name": "field7", "type": "string"},
     {"name": "field8", "type": "string"},
     {"name": "field9", "type": "string"},
     {"name": "field10", "type": "string"},
     {"name": "field11", "type": "string"},
     {"name": "field12", "type": "null"}
 ]
}

and my code is the next one:

import avro.schema
from avro.datafile import DataFileReader, DataFileWriter
from avro.io import DatumReader, DatumWriter
import csv
from collections import namedtuple


FORECAST = "forecast.csv"
fields = ("field1", "field2", "field3", "field4", "field5", "field6", "field7", "field8", "field9", "field10", "field11", "field12")
forecastRecord = namedtuple('forecastRecord', fields)

def read_forecast_data(path):
    with open(path, 'rU') as data:
        data.readline()
        reader = csv.reader(data, delimiter = ";")
        for row in map(forecastRecord._make, reader):
            print(row)
            yield row

if __name__=="__main__":
    for row in read_forecast_data(FORECAST):
        print (row)
        break

def parse_schema(path="forecast.avsc"):
    with open(path, 'r') as data:
        return avro.schema.parse(data.read())
def serialize_records(records, outpath="forecast.avro"):
    schema = parse_schema()
    with open(outpath, 'w') as out:
        writer = DataFileWriter(out, DatumWriter(), schema)
        for record in records:
            record = dict((f, getattr(record, f)) for f in record._fields)
            writer.append(record)
if __name__ == "__main__":
    serialize_records(read_forecast_data(FORECAST))

When I run the code I get an error saying that the datum is not an example of the schema. I have checked my schema again and again for inconsistencies, but so far I have not found any. Could someone help me?

Asked By: Gerasimos


Answers:

When I run your code as written I get an error TypeError: Expected 12 arguments, got 13 at for row in map(forecastRecord._make, reader): because your CSV ends in a ; and therefore has 13 fields.
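The extra 13th field is easy to reproduce with just the standard library; a minimal sketch using the sample line from the question (io.StringIO stands in for the real file):

```python
import csv
import io

# Sample data line from the question; the trailing ";" produces an
# extra, empty 13th column when parsed with delimiter=";".
sample = "eu;4523;35353;01/09/1999; 741 ; 386 ; 412 ; 86 ; 1.624 ; 1.038 ; 469 ; 117 ;\n"

row = next(csv.reader(io.StringIO(sample), delimiter=";"))
print(len(row))   # 13, not 12 -- the last element is an empty string
print(row[-1])    # ''
```

That empty trailing element is what makes `forecastRecord._make` complain about 13 arguments.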

Once I remove those trailing ;s, I can run the example and reproduce the schema-mismatch error. The reason is that field12 in your schema is declared as type null, but in the data it is a string (with the value "117").

If you change the avsc file to {"name": "field12", "type": "string"} then it works.
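Alternatively, if you cannot edit the CSV, you can keep the original 12-field schema and drop the empty 13th column before handing the row to `_make`. A minimal sketch (field names and the sample line are taken from the question; io.StringIO stands in for the real file):

```python
import csv
import io
from collections import namedtuple

fields = tuple("field%d" % i for i in range(1, 13))
forecastRecord = namedtuple("forecastRecord", fields)

sample = "eu;4523;35353;01/09/1999; 741 ; 386 ; 412 ; 86 ; 1.624 ; 1.038 ; 469 ; 117 ;\n"
reader = csv.reader(io.StringIO(sample), delimiter=";")

# Slice off the empty 13th column created by the trailing ";"
# so the row matches the 12-field namedtuple.
rows = [forecastRecord._make(r[:12]) for r in reader]
print(rows[0].field12)  # ' 117 ' (note the padding spaces from the CSV)
```

You may also want to .strip() each value, since the source file pads fields with spaces.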

Answered By: Scott

Here is another way, using fastavro:

    import csv
    from collections import namedtuple
    from fastavro import parse_schema, writer


    schema = {
        "namespace": "test.avro",
        "type": "record",
        "name": "test",
        "fields": [
            {"name": "region", "type": "string"},
            {"name": "anzsic_descriptor", "type": "string"},
            {"name": "gas", "type": "string"},
            {"name": "units", "type": "string"},
            {"name": "magnitude", "type": "string"},
            {"name": "year", "type": "string"},
            {"name": "data_val", "type": "string"}
        ]
    }

    fields = ("region", "anzsic_descriptor", "gas", "units", "magnitude", "year", "data_val")
    forecastRecord = namedtuple('forecastRecord', fields)
    parsed_schema = parse_schema(schema)

    lst = []
    with open('test.csv', 'r') as data:
        data.readline()  # skip the header row
        reader = csv.reader(data, delimiter=",")
        for records in map(forecastRecord._make, reader):
            record = dict((f, getattr(records, f)) for f in records._fields)
            lst.append(record)

    with open("users.avro", "wb") as fp:
        writer(fp, parsed_schema, lst)
Answered By: N Nikulin