What are XLA_GPU and XLA_CPU in TensorFlow?

Question:

I can list GPU devices using the following TensorFlow code:

import tensorflow as tf
from tensorflow.python.client import device_lib

# List every device TensorFlow has registered (CPU, GPU, and the XLA variants)
print(device_lib.list_local_devices())

The result is:

[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {
 }
 incarnation: 17897160860519880862, name: "/device:XLA_GPU:0"
 device_type: "XLA_GPU"
 memory_limit: 17179869184
 locality {
 }
 incarnation: 9751861134541508701
 physical_device_desc: "device: XLA_GPU device", name: "/device:XLA_CPU:0"
 device_type: "XLA_CPU"
 memory_limit: 17179869184
 locality {
 }
 incarnation: 5368380567397471193
 physical_device_desc: "device: XLA_CPU device", name: "/device:GPU:0"
 device_type: "GPU"
 memory_limit: 21366299034
 locality {
   bus_id: 1
   links {
     link {
       device_id: 1
       type: "StreamExecutor"
       strength: 1
     }
   }
 }
 incarnation: 7110958745101815531
 physical_device_desc: "device: 0, name: Tesla P40, pci bus id: 0000:02:00.0, compute capability: 6.1", name: "/device:GPU:1"
 device_type: "GPU"
 memory_limit: 17336821351
 locality {
   bus_id: 1
   links {
     link {
       type: "StreamExecutor"
       strength: 1
     }
   }
 }
 incarnation: 3366465227705362600
 physical_device_desc: "device: 1, name: Tesla P40, pci bus id: 0000:03:00.0, compute capability: 6.1", name: "/device:GPU:2"
 device_type: "GPU"
 memory_limit: 22590563943
 locality {
   bus_id: 2
   numa_node: 1
   links {
     link {
       device_id: 3
       type: "StreamExecutor"
       strength: 1
     }
   }
 }
 incarnation: 8774017944003495680
 physical_device_desc: "device: 2, name: Tesla P40, pci bus id: 0000:83:00.0, compute capability: 6.1", name: "/device:GPU:3"
 device_type: "GPU"
 memory_limit: 22590563943
 locality {
   bus_id: 2
   numa_node: 1
   links {
     link {
       device_id: 2
       type: "StreamExecutor"
       strength: 1
     }
   }
 }
 incarnation: 2007348906807258050
 physical_device_desc: "device: 3, name: Tesla P40, pci bus id: 0000:84:00.0, compute capability: 6.1"]

I want to know: what are XLA_GPU and XLA_CPU?

Asked By: tidy


Answers:

As mentioned in the docs, XLA stands for “Accelerated Linear Algebra”. It’s TensorFlow’s relatively new optimizing compiler, which can further speed up your ML models’ GPU operations by fusing what used to be multiple CUDA kernels into one (I’m simplifying, because the details aren’t important for your question).
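
If you want to try XLA yourself, the usual way is to turn on its JIT compilation rather than targeting the XLA devices directly. A minimal sketch, assuming a TF 1.x session-based setup (the config knobs shown are the TF 1.x ones):

import tensorflow as tf

# Ask the runtime to JIT-compile eligible clusters of ops with XLA.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)

In TF 2.x the rough equivalents are tf.config.optimizer.set_jit(True) or, in recent releases, decorating a function with @tf.function(jit_compile=True).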

To your question: my understanding is that XLA is separate enough from the default TensorFlow compiler that the two register GPU devices separately and have slightly different constraints on which GPUs they treat as visible (see here for more on this). Looking at the output of the command you ran, it looks like XLA is registering 1 GPU while normal TF is registering 4.
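
You can see that they really are separate placement targets by pinning ops to either device string from your listing. This is only an illustration (the tensors a and b are made up, and explicit placement on XLA_GPU:0 was an experimental mechanism that later TF versions removed), not the recommended way to use XLA:

import tensorflow as tf

# Regular StreamExecutor-backed GPU device:
with tf.device('/device:GPU:0'):
    a = tf.constant([1.0, 2.0])

# Separately registered XLA device for the same physical hardware:
with tf.device('/device:XLA_GPU:0'):
    b = tf.constant([3.0, 4.0])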

I’m not sure if you’re having issues or are just curious, but if it’s the former, I recommend taking a look at the issue I linked above and this one. TensorFlow is finicky about which CUDA/cuDNN versions it works flawlessly with, and it’s possible you’re using incompatible versions. (If you’re not having issues, then hopefully the first part of my answer is sufficient.)
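
If you do want to rule out a version mismatch, newer TF 2.x builds can report the CUDA/cuDNN versions they were compiled against (tf.sysconfig.get_build_info() exists from roughly TF 2.3 onward; on older builds you’d have to check the release notes instead):

import tensorflow as tf

# Prints the CUDA and cuDNN versions this TensorFlow binary was built against.
build = tf.sysconfig.get_build_info()
print(tf.__version__, build.get('cuda_version'), build.get('cudnn_version'))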

Answered By: an1lam