
Performance Tuning: Slow DML in Oracle

When an `INSERT` operation is running slowly in an Oracle database, there can be various root causes, and the solutions to improve performance will depend on the specific issue.

Slow `INSERT` performance can have several causes:

1. **Lack of Indexes**:

**Cause**: When there are no appropriate indexes on the table, each `INSERT` operation may require a full table scan to check for duplicate records or maintain data integrity.

**Solution**: Create indexes on the columns that are frequently queried or involved in uniqueness constraints.
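As a sketch, assuming a hypothetical `employees` table with an `email` column that must stay unique, a supporting index turns the uniqueness check into an index probe instead of a scan:

```sql
-- Hypothetical names: a unique index both enforces the constraint
-- and makes the duplicate check on each INSERT an index lookup.
CREATE UNIQUE INDEX employees_email_uk ON employees (email);
```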

2. **Fragmentation**:

**Cause**: Fragmentation within the table can lead to inefficient use of space and slower `INSERT` operations.

**Solution**: Regularly perform maintenance tasks such as reorganizing or rebuilding the table to reduce fragmentation and reclaim free space. The `ALTER TABLE` command with the `MOVE` or `SHRINK SPACE` options can help.
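A minimal sketch of both options (table and index names are placeholders; `SHRINK SPACE` additionally requires the segment to live in an ASSM tablespace):

```sql
-- Option 1: rebuild the table segment. This invalidates the table's
-- indexes, so rebuild them afterwards.
ALTER TABLE my_table MOVE;
ALTER INDEX my_table_pk REBUILD;

-- Option 2: shrink the segment in place (needs row movement enabled).
ALTER TABLE my_table ENABLE ROW MOVEMENT;
ALTER TABLE my_table SHRINK SPACE;
```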

3. **Triggers and Constraints**:

**Cause**: Triggers, foreign key constraints, or check constraints can add processing overhead to each `INSERT`.

**Solution**: Review and optimize the triggers and constraints if possible. Sometimes, triggers can be rewritten for more efficient execution. Ensure that constraints are necessary and well-tuned.

4. **Uncommitted Transactions**:

**Cause**: Uncommitted transactions by other sessions can block `INSERT` operations.

**Solution**: Commit or roll back any long-running transactions that might be causing blocking. Use appropriate transaction management to minimize contention.
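To see which sessions are currently blocked and who is blocking them, a query along these lines against `V$SESSION` can help:

```sql
-- Blocked sessions and the SID of the session blocking each one
SELECT sid, serial#, blocking_session, seconds_in_wait, event
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```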

5. **Redo Log Size**:

**Cause**: Frequent redo log switches can impact performance.

**Solution**: Properly size and configure the redo logs to reduce the frequency of switches. Monitor redo log usage and adjust the size accordingly.
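As a sketch, log-switch frequency can be checked from `V$LOG_HISTORY`, and a larger log group added if switches are too frequent (the group number, file path, and size are placeholders):

```sql
-- Log switches per hour: many switches per hour suggests
-- the redo logs are undersized for the workload.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;

-- Add a larger redo log group (placeholder path and size)
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/redo04.log') SIZE 1G;
```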

6. **Table Locks**:

**Cause**: Locks held by other sessions can block `INSERT` operations.

**Solution**: Identify and address locking issues. You can either wait for the locks to be released or work with the sessions holding the locks to modify their behavior.

7. **High I/O Latency**:

**Cause**: Slow disk I/O can significantly impact `INSERT` performance.

**Solution**: Check and address any hardware or storage-related issues causing high I/O latency. Consider using faster storage options if possible.

8. **Insufficient System Resources**:

**Cause**: Inadequate system resources can slow down database operations.

**Solution**: Monitor system resource utilization and consider upgrading hardware or allocating more resources to the database server.

9. **Lack of Parallelism**:

**Cause**: Bulk `INSERT` operations can be slow if they are not parallelized.

**Solution**: Use Oracle's parallel `INSERT` capabilities to insert data in parallel. This can significantly speed up bulk `INSERT` operations.
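A minimal sketch of a direct-path parallel insert (the table names and degree of parallelism are placeholders):

```sql
-- Parallel DML must be enabled at the session level first
ALTER SESSION ENABLE PARALLEL DML;

-- APPEND requests a direct-path load; PARALLEL sets the degree
INSERT /*+ APPEND PARALLEL(target_table, 4) */ INTO target_table
SELECT * FROM source_table;

-- A direct-path insert must be committed before the table is queried again
COMMIT;
```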

When an `UPDATE` statement is running slowly in Oracle, several factors could be the root cause:

1. **Inefficient Query Plan**: The optimizer might be choosing a suboptimal execution plan, such as not using indexes effectively or performing unnecessary full table scans.

2. **Lack of Indexes**: If the table being updated lacks appropriate indexes, the update operation may take longer as Oracle has to scan the entire table.

3. **Locking and Blocking**: If other transactions are holding locks on the table or rows being updated, it can cause contention and slow down the update process.

4. **Insufficient Resources**: If the server running Oracle is under-resourced in terms of CPU, memory, or I/O, it can lead to slow query performance.

5. **Data Volume**: Updating a large number of rows or dealing with large amounts of data can naturally slow down the update operation.

To address these issues and improve the performance of the update statement:

1. **Optimize Query**: Review the execution plan using tools like Explain Plan or SQL Developer to identify any inefficiencies. Consider adding or modifying indexes to improve query performance.
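For example, the plan for an `UPDATE` can be inspected like this (the table, column, and predicate are placeholders):

```sql
-- Capture the optimizer's plan for the statement
EXPLAIN PLAN FOR
  UPDATE orders SET status = 'SHIPPED' WHERE order_date < SYSDATE - 30;

-- Display it; look for unexpected full table scans
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```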

2. **Avoid Full Table Scans**: Ensure that indexes are used appropriately in the update statement to avoid unnecessary full table scans.

3. **Monitor Locking**: Check for locking and blocking issues using Oracle's built-in monitoring tools (e.g., V$LOCK, DBA_BLOCKERS, DBA_WAITERS) and resolve any conflicts causing delays.

4. **Resource Allocation**: Allocate sufficient resources (CPU, memory, I/O) to the Oracle server to handle the workload efficiently.

5. **Partitioning**: If dealing with large volumes of data, consider partitioning the table to improve performance by limiting the amount of data processed during updates.
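As a sketch, a range/interval-partitioned table keeps date-based updates confined to the matching partition (all names are hypothetical):

```sql
-- Monthly interval partitioning: an UPDATE filtered on sale_date
-- is pruned to the relevant partition(s) only.
CREATE TABLE sales_part (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_initial VALUES LESS THAN (DATE '2024-01-01'));
```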

6. **Batch Processing**: If updating a large number of rows, consider breaking the update into smaller batches to reduce contention and optimize resource utilization.
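The batching idea can be sketched as a PL/SQL loop (the table, predicate, and batch size are placeholders; note the predicate must exclude already-updated rows):

```sql
-- Update at most 10,000 rows per pass, committing between passes
-- to keep undo usage and lock duration small.
BEGIN
  LOOP
    UPDATE orders
    SET    status = 'ARCHIVED'
    WHERE  status = 'CLOSED'
    AND    ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to update
    COMMIT;
  END LOOP;
END;
/
```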

The `DELETE` statement in Oracle removes one or more rows from a table. When a `DELETE` is running slowly, it can be due to several reasons. Here are some common root causes and potential solutions:

1. **Large Data Sets:** Deleting a large number of rows in a single statement can be slow. Consider deleting data in smaller batches, using `ROWNUM` to limit each batch and committing between batches, to reduce the impact on system resources and improve performance.

Example (repeat until no rows remain):

DELETE FROM table_name WHERE condition AND ROWNUM <= batch_size;

COMMIT;

2. **Missing Indexes:** If the table lacks appropriate indexes, the delete operation can be slow, especially when deleting based on non-indexed columns. Analyze the table structure and add indexes on columns frequently used in delete conditions.

Example (Adding an index):

CREATE INDEX index_name ON table_name(column_name);

3. **Foreign Key Constraints:** Deleting rows from a table that has foreign key constraints may involve cascading deletes or checks, which can slow down the delete operation. Review the foreign key constraints and their impact on delete performance.
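One common remedy is indexing the foreign key column in the child table, so that the parent-side `DELETE` does not scan the child table for every row it removes (the table and column names are placeholders):

```sql
-- Unindexed foreign keys force a child-table scan (and broader locking)
-- for each parent row deleted; an index on the FK column avoids both.
CREATE INDEX order_items_order_id_idx ON order_items (order_id);
```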

4. **Locking:** Concurrent transactions or locks held by other sessions on the table can cause delays in the delete operation. Check for locks and sessions that might be holding locks, and consider optimizing transaction management.

Example (Checking locks):

SELECT o.object_name, o.object_type, l.session_id, l.locked_mode
FROM v$locked_object l JOIN dba_objects o ON o.object_id = l.object_id;

5. **High Redo Generation:** Excessive redo generation during delete operations can impact performance. Review the database configuration, including the redo log settings and undo tablespace size, to ensure they are appropriately sized for the workload.

6. **Stale Statistics:** Outdated or stale statistics can lead to suboptimal execution plans for the delete statement. Refresh table and index statistics using the `DBMS_STATS` package to ensure the optimizer chooses efficient execution plans.

Example (Refreshing statistics):

EXEC DBMS_STATS.gather_table_stats('schema_name', 'table_name');

7. **Optimizing SQL:** Review the delete statement and its execution plan using tools like Explain Plan or SQL Tuning Advisor. Identify any inefficient query patterns, unnecessary joins, or suboptimal conditions that can be optimized for better performance.

Step 1 : Check the Request ID from the Find Concurrent Requests screen (in my case the Request ID is 2355)

Step 2 : Run the command below to find the SPID; provide the concurrent request ID (2355 in my case) when prompted

SELECT a.request_id, d.sid, d.serial#, d.osuser, d.process, c.spid
FROM apps.fnd_concurrent_requests a,
     apps.fnd_concurrent_processes b,
     v$process c,
     v$session d
WHERE a.controlling_manager = b.concurrent_process_id
  AND c.pid = b.oracle_process_id
  AND b.session_id = d.audsid
  AND a.request_id = &Request_ID
  AND a.phase_code = 'R';

REQUEST_ID  SID  SERIAL#  OSUSER   PROCESS  SPID
----------  ---  -------  -------  -------  ----
      2355  514       28  applmgr  17794    1633

Step 3.1 : Check and confirm SPID on Database Node

oraclevis11i@onlineappsdba> ps -ef | grep 1633

ovis11i 1633 1 0 13:30:43 ? 0:03 oraclevis11i (LOCAL=NO)

Step 3.2 : Set OSPID (1633 in my case) for ORADEBUG

SQL> oradebug setospid 1633

Oracle pid: 68, Unix process pid: 1633, image: oraclevis11i@onlineappsdba

Step 3.3 : Enable trace for 10046 event with level 12

SQL> oradebug event 10046 trace name context forever, level 12

Step 3.4 : Locate Trace file as

SQL> oradebug tracefile_name

/oracle/apps/vis11idb/11.2.0/admin/vis11i_onlineappsdba/udump/vis11i_ora_1633.trc

Wait for 15-20 minutes while the request runs, so the trace captures representative activity

Step 4: Disable trace

SQL> oradebug event 10046 trace name context off

Step 5: Generate the TKPROF report, for example:

tkprof /oracle/apps/vis11idb/11.2.0/admin/vis11i_onlineappsdba/udump/vis11i_ora_1633.trc /oracle/apps/vis11idb/11.2.0/admin/vis11i_onlineappsdba/udump/tkprof_1633.txt explain=apps/[apps_passwd] fchela …

Step 6: Check TKPROF file to find root cause of slow concurrent request

Cache Size: the number of requests the Concurrent Manager caches. The cache size should be twice the number of target processes.

For example, if a Concurrent Manager's target value is 2, the cache size should be 4: the CM reads 4 requests at a time and tries to run all 4 before reading any new requests.

i.e. Cache Size = 2 × number of Target Processes.

Sleep Seconds: the number of seconds the Concurrent Manager waits between checks of the list of pending concurrent requests.

For example, if Sleep Seconds is set to 2 minutes, the CM checks for pending concurrent requests every 2 minutes. During peak hours it can be lowered to 30 seconds.

Important:

We should not enable more Custom Managers than required; unnecessary managers degrade performance of the queue table.

We should keep a shorter sleep time for the CRM (Conflict Resolution Manager) during peak hours.

Report log files in $APPLCSF/$APPLLOG should be truncated regularly. Sometimes their size grows beyond 2 GB, which impacts performance, so we should truncate them manually.

A large number of records in the FND_CONCURRENT_REQUESTS table can slow application performance, so we should run the Purge Concurrent Request and/or Manager Data program (FNDCPPUR) at regular intervals.

We should always keep an eye on the tables below:

FND_CONCURRENT_REQUESTS

FND_CONCURRENT_PROCESSES

FND_CRM_HISTORY

FND_ENV_CONTEXT

FND_TEMP_FILES

FND_CONFLICTS_DOMAIN

These tables should be defragmented periodically, while the Concurrent Managers are down.

We should always keep FND statistics current for these tables using `FND_STATS`.

For example:

exec fnd_stats.gather_table_stats('APPLSYS','FND_CONCURRENT_REQUESTS',PERCENT=>99);
