
Using String parameter for nvidia triton

Question: I'm trying to deploy a simple model on the Triton Inference Server. The model loads fine, but I'm having trouble formatting the input for a proper inference request. My model's config.pbtxt is set up like this:

max_batch_size: 1
input: [
  {
    name: "examples"
    data_type: TYPE_STRING
    format: …
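For context on what the question is asking: Triton serializes a TYPE_STRING (BYTES) tensor element as a 4-byte little-endian length prefix followed by the raw bytes, with all elements concatenated. A minimal sketch of that wire format in pure Python (the helper name serialize_string_tensor is ours for illustration, not part of any Triton client library):

```python
import struct

def serialize_string_tensor(strings):
    """Serialize a list of strings into Triton's TYPE_STRING/BYTES
    representation: each element becomes a 4-byte little-endian length
    prefix followed by its raw UTF-8 bytes, all concatenated."""
    out = b""
    for s in strings:
        data = s.encode("utf-8")
        out += struct.pack("<I", len(data)) + data
    return out

payload = serialize_string_tensor(["hello"])
# payload is the 4-byte length (5) followed by b"hello"
```

In practice, the Python client (tritonclient) typically handles this serialization for you when you pass a NumPy array of dtype object/bytes to InferInput.set_data_from_numpy, so hand-rolling the prefix is only needed when building raw HTTP/gRPC requests yourself.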

Total answers: 2