MaxDB SQL Locks FAQ
1. What is a transaction?
A transaction is a sequence of one or more processing steps (SQL statements) that operate on database objects such as tables, views, and so on.
For this, the following requirements must be fulfilled (the ACID properties: atomicity, consistency, isolation, durability):
Indivisibility:
A transaction is atomic, which means that it is either executed completely (all of its operations) or not at all (all-or-nothing principle). For example, an employee and the corresponding salary are inserted either together or not at all; there is no employee without a salary.
Maintaining consistency:
The defined integrity conditions remain fulfilled, for example, each employee has a personnel number.
Isolation:
The operations within the transaction are isolated from the operations of other transactions.
Durability:
Changes that completed transactions have made to objects must be permanent (persistent), even if, for example, the system crashes.
2. Why must SQL locks be set on database objects?
If several transactions want to access the same objects in parallel, these accesses must be synchronized by the SQL lock management.
Since the database system allows concurrent transactions for the same database objects, locks are required to isolate individual transactions.
To lock an object means to lock this object from certain forms of use by other transactions.
3. When are these SQL locks released again?
All locks are released at the end of a transaction at the latest (COMMIT, ROLLBACK, COMMIT WORK, ROLLBACK WORK).
However, they can also be released earlier by an explicit statement (UNLOCK).
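A minimal sketch of this behavior (the table hotel.room and its column values are example names only):
  UPDATE hotel.room SET price = 190 WHERE hno = 10 AND type = 'single'
  -- the changed row is now exclusively locked for this transaction
  COMMIT WORK
  -- the COMMIT ends the transaction and releases all of its SQL locks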
4. Which lock objects exist in SAP MaxDB?
Three types of locks exist in SAP MaxDB:
Record locks (ROW):
Individual lines of a table are locked.
Table locks (TAB):
The entire table is locked.
Catalog locks (SYS):
Database catalog entries are locked.
5. Must locks be explicitly requested or is implicit locking possible?
Locks can be requested implicitly by the database system or explicitly by you (using the relevant SQL statements).
a) Requesting locks implicitly:
All modifying SQL statements (for example, INSERT, UPDATE, DELETE) always request an exclusive lock.
You can select the lock operation mode by specifying an isolation level when you open the database session.
Depending on the specified isolation level, locks are then implicitly requested by the database system when the SQL statements are processed.
b) Requesting locks explicitly:
You can assign locks explicitly to a transaction using the LOCK statement, and you can lock individual table lines by specifying a LOCK option in an SQL statement. This procedure is possible at each isolation level. In addition, you can use the LOCK option to temporarily change the isolation level of an SQL statement.
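The following statements sketch both variants under the assumption that a table hotel.room exists; the exact syntax of the LOCK statement and of the LOCK option is described in the SAP MaxDB SQL reference:
  LOCK TABLE hotel.room IN EXCLUSIVE MODE
  -- explicitly requests a table lock for the current transaction
  SELECT * FROM hotel.room WHERE hno = 10 WITH LOCK EXCLUSIVE
  -- LOCK option: the read rows are locked exclusively for this transaction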
6. What is a shared lock?
Shared locks allow other transactions to perform read accesses but not to perform write accesses to the locked object. Other transactions can set further shared locks on this object, but no exclusive locks.
Shared locks can be set on individual table lines or on the entire table.
7. What is an exclusive lock?
If an exclusive lock on a database object is assigned to a transaction, other transactions cannot access this object.
Transactions that want to check whether exclusive locks exist, or that want to set exclusive or shared locks themselves, collide with the existing exclusive lock of the other transaction. They cannot access the locked object.
Exclusive locks can be set on individual table lines or on the entire table.
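A sketch of a typical collision between an exclusive lock and a shared lock request (table and key values are examples):
  -- transaction 1:
  UPDATE hotel.room SET price = 200 WHERE hno = 10 AND type = 'single'
  -- the row is now exclusively locked by transaction 1
  -- transaction 2:
  SELECT * FROM hotel.room WHERE hno = 10 AND type = 'single' WITH LOCK
  -- the shared lock request collides with the exclusive lock, and transaction 2
  -- must wait until transaction 1 ends (or until RequestTimeout expires)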
8. What is the isolation level?
The isolation level defines whether and in which way locks are implicitly requested or released.
The selected isolation level affects the degree of parallelism of concurrent transactions and the data consistency:
The lower the value of the isolation level, the higher the degree of parallelism and the lower the degree of the guaranteed consistency.
If there are concurrent accesses to the same dataset, various inconsistencies (such as dirty read, non-repeatable read, and phantom) may occur at different isolation levels.
For SAP applications (ABAP stack), the isolation level is defined in the XUSER file. The ABAP applications run at isolation level 0.
In NetWeaver (Java stack), data sources are used for the application logon. These data sources can be configured in NetWeaver, and you can freely choose the isolation level. If the data source does not specify an explicit isolation level, the default isolation level 1 (read committed) is used.
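For database sessions that you open yourself, the isolation level can be specified when the session is opened. A sketch using the CONNECT statement (user name and password are placeholders):
  CONNECT myuser IDENTIFIED BY mypassword ISOLATION LEVEL 1
  -- all SQL statements of this session then run at isolation level 1
  -- (read committed), unless a LOCK option overrides it for a single statement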
9. Which isolation levels are available?
Isolation level 0:
In isolation level 0 (uncommitted), lines are read without an implicit request for shared locks (dirty read). It is therefore not ensured that a line that is read again within the same transaction still has the same state as when it was read the first time, because it may have been changed by a concurrent transaction in the meantime. Nor is it ensured that the read records have actually been committed in the database.
When you insert, change, and delete lines, the data records are exclusively locked. The locks are retained until the transaction ends to ensure that no concurrent changes can be made.
The SAP system is operated at isolation level 0 because a separate lock mechanism that is at a higher level than the lock mechanism of the database is implemented for the SAP application.
Isolation level 1:
If an SQL statement is used for data retrieval, the system ensures for each line that is read that, at the time of reading, no other transaction holds an exclusive lock on this line. The shared lock is released after the record has been read. If a lock collision occurs, a lock (REQ ROW SHARE) is requested, and the transaction must wait until this lock is assigned to it before it can access the record.
Isolation level 15:
Isolation level 15 ensures that the result set is not changed while it is being processed. Reading backwards and positioning within the result set deliver consistent results. When you insert, change, or delete lines, exclusive locks for the relevant lines are implicitly assigned to the transaction; these exclusive locks are released only when the transaction ends.
Isolation level 2:
Isolation level 2 protects against the non-repeatable read: a record that is read several times within a transaction always returns the same values.
If isolation level 2 or 20 (repeatable) is specified, shared locks are implicitly requested, before processing starts, for all tables that are addressed by an SQL statement for data retrieval. When you insert, change, or delete lines, exclusive locks for the relevant lines are implicitly assigned to the transaction; these exclusive locks are released only when the transaction ends.
Isolation level 3:
If isolation level 3 or 30 (serializable) is specified, a table shared lock is implicitly assigned to the transaction for each table that is addressed by an SQL statement. These shared locks are released only when the transaction ends. Isolation level 3 protects against the three abnormal access types (dirty read, non-repeatable read, and phantom), but is not suitable for operation with a high degree of parallelism. When you insert, change, or delete lines, exclusive locks for the relevant lines are implicitly assigned to the transaction; these exclusive locks are released only when the transaction ends.
New isolation levels when MVCC is used:
As soon as you use these new isolation levels, the isolation levels mentioned above are no longer available.
Isolation level 50:
Isolation level 50 corresponds to COMMITTED READ.
Isolation level 60:
Isolation level 60 corresponds to "Serializable" that is known from Oracle. This is a lesser requirement than ISO/ANSI "Serializable".
The old isolation levels that are lower than 3 are mapped to the new isolation level 50. The old isolation level 3 is mapped to the new isolation level 60.
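The difference between isolation level 0 and isolation level 1 can be sketched with two concurrent sessions (table and values are examples):
  -- session A (writer):
  UPDATE hotel.room SET price = 300 WHERE hno = 10 AND type = 'single'
  -- not yet committed; the row is exclusively locked
  -- session B at isolation level 0:
  SELECT price FROM hotel.room WHERE hno = 10 AND type = 'single'
  -- may return the uncommitted value 300 (dirty read)
  -- session B at isolation level 1:
  SELECT price FROM hotel.room WHERE hno = 10 AND type = 'single'
  -- collides with the exclusive lock and waits, so it never returns uncommitted data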
10. What does MVCC mean?
MVCC is the abbreviation of Multi-Version Concurrency Control. MVCC controls simultaneous accesses to several versions of the data.
To avoid locking conflicts between read and write operations, a Multi-Version Concurrency Control algorithm has been implemented in SAP MaxDB; this algorithm is still in the test phase. With MVCC, the database system keeps several different versions (consistent views) of the data. Conflicts between concurrent write operations are still controlled by using exclusive locks.
A read operation of transaction A always uses a snapshot that comprises all the data that was committed at the point in time when the snapshot is defined. If transaction B changes data after this point in time, these changes are ignored for read transaction A. For this new procedure, isolation levels 50 and 60 are implemented.
Depending on the selected isolation level, a snapshot is defined either when the transaction starts and remains valid for the entire transaction (level 60), or separately for each SQL statement (level 50).
Changes are performed directly in the data (eager in-place update).
The old versions of the data are saved in persistent UNDO files. After the transaction has ended, these UNDO files are handed over to the history file management.
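A sketch of the snapshot behavior at isolation level 60 (table and values are examples):
  -- transaction A (isolation level 60): the snapshot is defined when the transaction starts
  SELECT price FROM hotel.room WHERE hno = 10 AND type = 'single'
  -- returns 190
  -- transaction B changes and commits the same row:
  UPDATE hotel.room SET price = 210 WHERE hno = 10 AND type = 'single'
  COMMIT WORK
  -- transaction A, still in the same transaction:
  SELECT price FROM hotel.room WHERE hno = 10 AND type = 'single'
  -- still returns 190 because transaction A reads from its snapshot;
  -- at isolation level 50, a new snapshot per statement would return 210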
11. Does the SAP MaxDB use consistent reading?
Consistent read is not yet supported in OLTP or OLAP operation. The lock management of SAP MaxDB will change in future versions as a result of the MVCC implementation. Currently, we cannot specify in which SAP MaxDB version this function can be used productively.
12. Which database parameters do you use to configure the lock management?
The database parameters of SAP MaxDB influence the SAP MaxDB SQL lock management.
a) MaxUserTasks (MAXUSERTASKS):
The database parameter MaxUserTasks defines the maximum number of parallel user sessions.
b) MaxSQLLocks (MAXLOCKS):
The database parameter MaxSQLLocks defines the maximum number of line locks and table locks that can be held or requested at the same time. If the MaxSQLLocks value is reached, statements that request locks are rejected (-1000: Too many lock requests).
c) RequestTimeout (REQUEST_TIMEOUT):
The database parameter RequestTimeout defines how long a transaction must wait for the assignment of a lock. If this wait time has expired before the lock is assigned, the transaction terminates with an error.
d) DeadlockDetectionLevel (DEADLOCK_DETECTION):
The database parameter DeadlockDetectionLevel defines the maximum depth of search for the deadlock detection with SQL locks.
If this database parameter is set to the value 0, the deadlock detection is deactivated (in other words, deadlocks are resolved only by the RequestTimeout). A value higher than 0 defines the depth of search; up to this depth, deadlocks are detected and immediately resolved. The initial value for the deadlock detection is 4. Higher values cause considerable overhead. Consider using a higher value only if an application triggers serious deadlock problems that cannot be solved in the application.
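You can display the current values of these parameters, for example, with dbmcli; the database name and the DBM operator credentials below are placeholders, and the exact DBM command may differ depending on the version:
  dbmcli -d MYDB -u dbmoperator,secret param_directget MAXLOCKS
  dbmcli -d MYDB -u dbmoperator,secret param_directget REQUEST_TIMEOUT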
13. What does the following SQL error -1000 mean: Too many lock requests?
A transaction terminates because too many SQL locks were requested: the configured maximum value (MaxSQLLocks) has been reached, and the system cannot assign any further SQL locks.
If these problems frequently occur in the system, check the value of the database parameter MaxSQLLocks.
14. When should you increase the database parameter MaxSQLLocks (MAXLOCKS)?
MaxSQLLocks values that are too high result in long search processes in the lock lists. Therefore, if possible, shorten the write transactions by committing frequently before you increase the parameter.
You can increase MaxSQLLocks if one of the following situations occurs frequently:
- There is a high number of LOCK LIST ESCALATIONS.
- The number of LOCK LIST MAX USED ENTRIES equals the value of MaxSQLLocks.
- The number of LOCK LIST MAX USED ENTRIES is close to the value of MaxSQLLocks and the write transactions cannot be reduced.
For more information, see SAP Note 65946.
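To check these indicators, you can query the lock list statistics (see also question 19); depending on the version, a schema prefix may be required. A sketch:
  SELECT * FROM LOCKLISTSTATISTICS
  -- shows, among other things, the number of lock list escalations and the
  -- maximum number of used lock list entries since the last database start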
15. What is a lock escalation?
A lock escalation is the conversion of line locks into a table lock.
For more information, see SAP Note 65946.
16. When is a line lock set, and when is a table lock set?
If the record locks of one user take up more than 20% of the lock list for a single table, the system carries out a lock escalation. In this process, the record locks are converted into a table lock.
As of Version 7.6.03, an additional threshold has been introduced for lock escalations. If a transaction fills more than 50% of the lock list with line locks, the lock escalation is carried out for the table whose record is to be locked when this threshold value is exceeded.
For more information, see SAP Note 65946.
17. Is a shared lock assigned to each data record of the table during a full table scan as long as the query is running?
In isolation level 1, a shared lock is used briefly for each record and is released again before the next record is read.
In isolation level 3, a table shared lock is used.
18. How are locks on joins handled?
The SQL locks on joins are handled in the same way as locks on tables. There is no difference.
19. Is the lock management monitored?
Yes, you can use the following methods to monitor the lock management:
a) System tables
- LOCKSTATISTICS
The system table LOCKSTATISTICS describes the current lock entries and lock request entries. You can use the system table LOCKSTATISTICS to determine, for example, the following database information:
- all locks that are currently held on a table
- all locks that the current user holds on the table in the relevant database session (if the current user is a database administrator (DBA user) or a database system administrator, the system displays all the locks that are held)
To facilitate the analysis of locking conflicts, you can use the following views for the system table LOCKSTATISTICS:
- domain.locks and domain.lock_holder display all locks that are assigned in the system.
- domain.lock_requestor displays all lock requests in the system.
- domain.lock_waits displays, for each lock request, who holds the relevant lock.
- LOCKLISTSTATISTICS
The system table LOCKLISTSTATISTICS displays the statistics of the lock management since the last database start. This system table is insufficient for the analysis of locking conflicts.
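A sketch of typical monitoring queries on the views mentioned above (depending on the version, a schema prefix may be required):
  SELECT * FROM domain.locks
  -- all SQL locks that are currently assigned in the system
  SELECT * FROM domain.lock_waits
  -- for each lock request, the transaction that holds the requested lock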
b) Database console
x_cons sh[ow] act[ive]
The status VWAIT indicates that a task waits for an SQL lock.
c) Transaction DB50
To navigate to the lock management monitoring, choose "Problem analysis -> SQL-Locks" in the monitoring tree.
20. How can you determine whether there are SQL locks in the system?
In transaction DB50, choose "Problem analysis -> SQL-Locks -> Overview".
Here, an overview of all SQL locks is displayed.
Information is available about the following:
- the database task that holds a lock
- the application pid that is related to it
- the application server
- the lock type, for example, row_exclusive
- the lock status, for example, write
- the table name
and so on.
21. How can you recognize wait situations for SQL locks?
In transaction DB50, choose "Problem analysis -> SQL-Locks -> Wait situations". Here, you can determine which transaction must wait for which transaction in an exclusive lock wait situation.
Information is available about the following:
- the database task that holds a lock
- the application pid that is related to it
- the application server
- the lock type, for example, row_exclusive
- the lock status, for example, write
- the table name
- the database task that waits for this lock
- its application pid and application server
- the request timeout, which indicates how long the system waits for the lock before the transaction is terminated with error -51.
The "Appl. ID" column displays the process ID of the work process on the "Appl. Server" application server. The relevant SAP work process is displayed in transactions SM51, SM50, or SM66.
You can use the wait situation listing to analyze lock situations and, in urgent cases, you can remove them before the request timeout expires by canceling a transaction. You can cancel the relevant task in transaction DB50 by choosing "Task Manager" and then "Kernel Threads".
22. What is a lock key?
The lock key is used to uniquely identify a data record that is to be locked. Usually, this is the primary key. Locks that are based on long primary key values with a variable length are inefficient. Therefore, a separate lock key is generated for each primary key that is longer than 64 bytes. As of Version 7.4, the lock key has a maximum length of 64 bytes, independently of the 64-bit or 32-bit architecture.
23. What is a deadlock?
A deadlock occurs if two or more users impede each other due to set locks and can no longer work as a result.
Two or more transactions hold locks and request further locks that are already held by the other transaction.
Deadlocks are detected by the database up to a certain depth. The system issues an error message for the user who triggers the deadlock. The deadlock is removed.
The database cannot recognize all deadlocks, for example, if shared locks are involved.
A deadlock that is not detected by the system is removed by timeouts.
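A classic deadlock can be sketched with two concurrent transactions (table and key values are examples):
  -- transaction 1:
  UPDATE hotel.room SET price = 110 WHERE hno = 10 AND type = 'single'
  -- transaction 2:
  UPDATE hotel.room SET price = 110 WHERE hno = 20 AND type = 'double'
  -- transaction 1 now also changes the row that transaction 2 has locked and must wait:
  UPDATE hotel.room SET price = 120 WHERE hno = 20 AND type = 'double'
  -- transaction 2 now also changes the row that transaction 1 has locked and must wait:
  UPDATE hotel.room SET price = 120 WHERE hno = 10 AND type = 'single'
  -- neither transaction can continue; the deadlock detection terminates one of them
  -- with SQL error -60, and the other transaction can then proceed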
24. What does SQL error -51 mean?
SQL error -51: Lock request timeout
The lock request or an implicitly requested lock collides with locks of a different transaction. The maximum wait time for the lock assignment (RequestTimeout) has expired.
25. What does SQL error -60 mean?
SQL error -60: Work rolled back
The transaction was implicitly canceled and reset by an implicit rollback. This is caused by a deadlock situation. A deadlock situation can be resolved only if one of the transactions involved releases the locks that it has already obtained.
26. Can you also analyze a deadlock situation afterwards?
In SAP MaxDB versions lower than 7.6.06, you cannot subsequently analyze a deadlock situation because no logs or dumps are written before the deadlock is resolved.
As of SAP MaxDB Versions 7.6.06 (PTS: 1153670) and 7.7 (PTS: 1156335), we are planning to generate a dump (if a deadlock occurs) that SAP Support can analyze using x_diagnose.
To enable the system to write a lock list dump, you must set the database parameter DeadlockDumpFilename. If this parameter specifies a file name and a deadlock occurs, the lock list is written to this file in the run directory.
Analysis comment: If a DEADLOCK can be reproduced, the parameter DEADLOCK_DETECTION (for versions lower than 7.7.03) or DeadlockDetectionLevel (for versions as of 7.7.03) can be set to value 0. As a result, a lock situation is removed only by the timeouts and you have sufficient time to analyze this deadlock.
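A sketch of how these parameters could be set with dbmcli; the database name, the DBM operator credentials, and the dump file name are placeholders, the parameter names depend on the version, and some changes may take effect only after a restart:
  dbmcli -d MYDB -u dbmoperator,secret param_directput DeadlockDumpFilename deadlock_dump.prt
  dbmcli -d MYDB -u dbmoperator,secret param_directput DEADLOCK_DETECTION 0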
27. Can you also analyze a lock collision that was triggered by a timeout afterwards?
This is difficult. Currently, you cannot use a system table to analyze the lock collision afterwards. You may be able to use the Database Analyzer logs to obtain some information about the lock situation after it occurred: if a wait situation lasts longer than one snapshot interval, the Database Analyzer records it in its logs.
28. Are SQL locks the same as B* tree locks?
No, you should not confuse SQL locks with B* tree locks. B* tree locks are set only for a short time to protect changes to B* trees (for example, a page split) against concurrent accesses. User tasks that are waiting have the status VSUSPEND. The number that is displayed after the keyword SUSPEND indicates whether the user tasks are waiting for B* tree locks. You can use the command 'x_cons show sus' to determine the reason for the wait situation.
B* tree locks do not ensure transaction consistency.
29. Where can I find more information about SQL locks?
For more information about SQL locks, see the SAP MaxDB documentation at: www.maxdb.sap.com.
Further information is available in the "WB550 MaxDB Internals" SAP course in the locking chapter.