How to limit BigQuery job by slots

Question:

I’d like to limit a BigQuery job (query) submitted with the Python SDK to a certain number of slots. For example, I have a flat-rate reservation of 400 slots in BigQuery, and let’s say I’ve got a heavy query that uses 300 slots during execution (according to the job statistics). I would like to set a lower limit for that job, say 100 slots. I accept that the execution will be slower; that’s fine. I just need the job not to "eat" more than 100 slots during execution.
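(For context, the per-job slot usage mentioned above can be estimated from the job statistics exposed by the Python SDK. A minimal sketch, assuming the default project credentials and a placeholder query:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses the default project and credentials
query_job = client.query("SELECT ...")  # placeholder for the heavy query
query_job.result()  # wait for the job to finish

# Average slot usage = total slot-milliseconds / elapsed wall-clock ms.
elapsed_ms = (query_job.ended - query_job.started).total_seconds() * 1000
print("avg slots:", query_job.slot_millis / elapsed_ms)
```

)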

I found that it’s possible to cap a job by bytes with the maximumBytesBilled setting in the query job configuration. Is it possible to limit slots in a similar manner?
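(For reference, setting that byte cap with the Python SDK looks roughly like this; the project ID and query are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project ID

job_config = bigquery.QueryJobConfig(
    # Cap billed bytes at ~1 GB; the job fails (without charge)
    # if it would read more than this.
    maximum_bytes_billed=10**9,
)

query_job = client.query(
    "SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` LIMIT 10",
    job_config=job_config,
)
rows = query_job.result()
```

There is no analogous per-job setting for slots.)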

Asked By: Alexander Goida


Answers:

Currently, it’s not possible to manually set the number of slots a query job will use.

BigQuery automatically calculates how many slots each query requires, depending on query size and complexity. According to the Google documentation:

Depending on complexity and size, a query might not require all the slots it has the right to, or it may require more. BigQuery dynamically ensures that, given fair scheduling, all slots can be fully used at any point in time.

You can raise a feature request if you’d like the BigQuery engineering team to work on it.

I don’t know your exact use case, but if you want a query to run with a specific number of slots, a workaround is to create (or use) a separate project, allocate 100 slots to it, and submit the job through that project, as sketched below.
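A rough sketch of that workaround, assuming a separate project (here called "low-slots-project", a placeholder) that already has a 100-slot reservation assigned to it via the console or the BigQuery Reservation API:

```python
from google.cloud import bigquery

# Submit the job through the project whose reservation is capped at
# 100 slots; the query then competes only for that project's slots.
client = bigquery.Client(project="low-slots-project")  # placeholder

query_job = client.query("SELECT ...")  # placeholder for the heavy query
query_job.result()
```

The trade-off is the one you already accepted: the query runs more slowly, but it cannot consume slots beyond the smaller reservation.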

Answered By: Sakshi Gatyan