Monday, June 27, 2016

Resource Queues 2.0 -- The Next Version of Resource Management in Greenplum

Hi All,

This post covers my work on the next version of Resource Queues in Greenplum. Resource Queues are the resource management mechanism for queries in Greenplum: any query (except a superuser query) needs to run within the limits of a given Resource Queue. This ensures that resource utilization is bounded and can be monitored and modified, and it also allows per-role prioritization of resource usage.

Resource Queues let you define three resource limits: the number of queries a role can run at a given point in time, the maximum amount of memory that all of a role's concurrently running queries can consume, and the maximum cost allowed for a plan selected on a per-role basis. So you can effectively limit these three parameters per role and modify them as you go.
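As a rough illustration, the three limits can be modeled as a single admission check. This is a toy Python sketch with made-up names (`ResourceQueue`, `admits`), not the actual Greenplum implementation:

```python
from dataclasses import dataclass

@dataclass
class ResourceQueue:
    """Hypothetical model of the three per-role limits a Resource Queue enforces."""
    max_active: int       # max concurrent queries for roles in this queue
    max_memory_mb: int    # max total memory across those concurrent queries
    max_plan_cost: float  # max planner cost allowed for a selected plan

    def admits(self, active: int, memory_in_use_mb: int,
               query_memory_mb: int, plan_cost: float) -> bool:
        """Return True only if a new query fits within every limit."""
        return (active + 1 <= self.max_active
                and memory_in_use_mb + query_memory_mb <= self.max_memory_mb
                and plan_cost <= self.max_plan_cost)

queue = ResourceQueue(max_active=2, max_memory_mb=1024, max_plan_cost=1e5)
# Fits within all three limits:
print(queue.admits(active=1, memory_in_use_mb=512, query_memory_mb=256, plan_cost=5e4))
# Rejected: the concurrency limit is already reached:
print(queue.admits(active=2, memory_in_use_mb=512, query_memory_mb=256, plan_cost=5e4))
```

A query that fails this check is the one that gets made to sleep, which is where the trouble described below begins.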

The first implementation of Resource Queues had a major long-standing problem with deadlocks, which I will explain further down.

Resource Queues use database locks inside Greenplum, which means the locks taken by Resource Queues are the same kind as those taken on shared database objects, such as tables. Now, Greenplum acquires relation locks at various points in the query lifecycle, for example during query execution. Resource Queue limits are evaluated after the planner stage, so that the Resource Queue's plan cost limit can be checked. If the current query exceeds the resource limits, it is made to sleep until another query releases the resources it is using; the sleeping query is then woken up and its resource requirements are checked again.

Now, there are some cases where this steps on each other's toes. For example, consider the following case:

Query 1 starts and acquires a relation lock.
Query 2 starts.

Query 1 is made to sleep due to excessive resource usage.
Query 2 gets the Resource Queue slot lock.

Query 1 is waiting for the Resource Queue slot lock.
Query 2 is waiting for the relation lock held by Query 1.

This leads to a deadlock.
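The circular wait can be seen by drawing the wait-for graph for the two queries. Here is a minimal, purely illustrative Python sketch that detects the cycle:

```python
# Wait-for graph for the scenario above (edge: waiter -> lock holder).
# Query 1 waits for the Resource Queue slot lock held by Query 2;
# Query 2 waits for the relation lock held by Query 1.
waits_for = {
    "Query 1": "Query 2",
    "Query 2": "Query 1",
}

def has_cycle(graph):
    """Detect a cycle by following wait-for edges from each node."""
    for start in graph:
        seen = set()
        node = start
        while node in graph:
            if node in seen:
                return True
            seen.add(node)
            node = graph[node]
    return False

print(has_cycle(waits_for))  # True: neither query can ever proceed
```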

This was a long standing problem.

The solution implemented in my new approach is as follows:

Acquire the Resource Queue slot lock before any other potential lock, which can be done at query admission time. The catch is that cost-based checking cannot happen that early, since it is a post-planner task. For that case, the solution is to hold the lock until the planner stage but manage the Resource Queue slot locks with a waiter queue. If the cost-based check exceeds the limit for a specific query, that query releases all the locks it holds and enters the waiter queue. Any other query already waiting for the lock is ahead of the current query in the waiter queue, so it will get the lock first. In the case above, Query 2 will then hold the Resource Queue slot lock and the relation lock, so it can proceed.

Query 2 will then release its locks, and Query 1 will get both the Resource Queue slot lock and the relation lock, so it can proceed.
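The release-and-requeue protocol can be sketched as a toy lock with a FIFO waiter queue. All names here (`SlotLock`, `release_and_requeue`) are hypothetical, not the actual Greenplum code:

```python
from collections import deque

class SlotLock:
    """Toy model of the Resource Queue slot lock with a FIFO waiter queue."""
    def __init__(self):
        self.holder = None
        self.waiters = deque()

    def try_acquire(self, query):
        """Take the lock if free; otherwise join the back of the waiter queue."""
        if self.holder is None:
            self.holder = query
            return True
        self.waiters.append(query)
        return False

    def release_and_requeue(self, query):
        """Called when the post-planner cost check fails: give up the lock
        and rejoin the waiter queue behind every query already waiting."""
        assert self.holder == query
        self.holder = None
        self.waiters.append(query)
        self._wake_next()

    def _wake_next(self):
        if self.holder is None and self.waiters:
            self.holder = self.waiters.popleft()

lock = SlotLock()
lock.try_acquire("Query 1")          # Query 1 admitted first, holds the slot lock
lock.try_acquire("Query 2")          # Query 2 waits behind it
lock.release_and_requeue("Query 1")  # Query 1's plan exceeds the cost limit
print(lock.holder)  # Query 2 now gets the slot lock ahead of Query 1
```

Because the query that fails its cost check goes to the back of the queue, a waiter that already holds relation locks (Query 2 above) is guaranteed to run first, which breaks the circular wait.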

This removes the non-deterministic lock ordering between Resource Queue slot locks and shared database object locks. The core reason for the deadlock is that relation locks are sometimes acquired as early as the parser stage, because we do not want the underlying relation to change as we proceed, and that inconsistent ordering leads to a circular wait.

The pull request for the implementation is:

This serves as a good PoC to demonstrate resource management without deadlocks. Ideally, Resource Queues should not be using the same shared database object locks at all, since they are not really shared database objects, but that is a different problem space.
