Do we need to manually cache schema registry?

Question:

We are currently using Protocol Buffers as the serialization mechanism for Kafka messages and are going to move to Avro. We tested the Confluent Avro consumer with Schema Registry, and according to those tests the Avro consumer is a little slower than the Protobuf consumer.

My question is: do we need to manually cache schemas, or does the Python AvroConsumer handle caching itself?
I’m using confluent_kafka AvroConsumer.
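
For context, a minimal setup along the lines described might look like this (the broker and registry addresses and the topic name are placeholders):

```python
from confluent_kafka.avro import AvroConsumer

# Placeholder broker/registry addresses and topic name.
consumer = AvroConsumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'avro-test-group',
    'schema.registry.url': 'http://localhost:8081',
})
consumer.subscribe(['my-avro-topic'])

while True:
    msg = consumer.poll(1.0)
    if msg is None:
        continue
    if msg.error():
        print(msg.error())
        continue
    # value() is already deserialized into a dict; the schema used to
    # decode it came from the registry (or the client's local cache).
    print(msg.value())
```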

Asked By: GihanDB


Answers:

I had the same problem some time ago: there is additional latency when you move from Google Protobuf, which is really fast, to something like Avro + Schema Registry.

You definitely have the option to cache the schemas manually. However, most decent Kafka clients that talk to Schema Registry already do this; Confluent’s Java Kafka client, at least, does it automatically. So the only time it has to send a request to Schema Registry is when it encounters a schema version it hasn’t seen yet.
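
The Python client behaves the same way: the confluent_kafka AvroConsumer is built on a class literally named CachedSchemaRegistryClient, which caches schemas by id. A minimal sketch of that behavior, assuming a registry at localhost:8081 and an existing schema id 1:

```python
from confluent_kafka.avro import CachedSchemaRegistryClient

# Assumes a Schema Registry at this URL and that schema id 1 exists.
client = CachedSchemaRegistryClient({'url': 'http://localhost:8081'})

# First lookup goes over HTTP to the registry.
schema = client.get_by_id(1)

# Subsequent lookups for the same id are served from the in-memory
# cache; no further HTTP request is made.
schema_again = client.get_by_id(1)
print(schema is schema_again)  # expected: True (same cached object)
```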

Answered By: mjuarez

The Schema Registry will only be called once each time you change the schema version; by default, schemas are cached.
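
The same holds in newer confluent_kafka releases, where the replacement SchemaRegistryClient also caches schemas by id, so a DeserializingConsumer with an AvroDeserializer only contacts the registry when it sees an unfamiliar schema id. A sketch, again with placeholder addresses and topic name:

```python
from confluent_kafka import DeserializingConsumer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer

# Placeholder registry/broker addresses and topic name.
registry = SchemaRegistryClient({'url': 'http://localhost:8081'})
avro_deserializer = AvroDeserializer(registry)

consumer = DeserializingConsumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'avro-test-group',
    'value.deserializer': avro_deserializer,
})
consumer.subscribe(['my-avro-topic'])

msg = consumer.poll(1.0)
if msg is not None and msg.error() is None:
    # The deserializer reads the schema id from the message framing and
    # resolves it through the registry client's cache.
    print(msg.value())
```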

Answered By: dhiraj