Getting the database's TOP statements for each metric over the last 30 days
For a short, recent time window you can query v$sql or v$sqlarea, but for a week or a month back the statement may no longer be found in V$SQLAREA. The query below works from the historical view DBA_HIST_SQLSTAT instead, whose data is preserved by AWR snapshots.
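If only recent activity is needed, the same idea can be applied to the shared pool directly. A minimal sketch, assuming a TOP 10 by average elapsed time and the same PARSING_SCHEMA_NAME filter ('SHARK') used in the main query below; both are illustrative values to adjust for your environment:

-- Sketch: TOP 10 by average elapsed time from V$SQLAREA (recent activity only; limit and schema filter are assumptions)
SELECT * FROM
(
SELECT SQL_ID,
DBMS_LOB.SUBSTR(SQL_FULLTEXT, 4000, 1) AS SQL_TEXT,
EXECUTIONS,
ROUND(ELAPSED_TIME / 1000000 / GREATEST(EXECUTIONS, 1), 3) AS AVG_ELAPSED_TIME_SEC
FROM V$SQLAREA
WHERE PARSING_SCHEMA_NAME = UPPER('SHARK')
ORDER BY AVG_ELAPSED_TIME_SEC DESC
) WHERE ROWNUM <= 10;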
-- Longest-running statements (and the other per-metric TOPs)
WITH BASTABLE AS
(
SELECT DBMS_LOB.SUBSTR(SQL_TEXT,4000, 1 ) AS SQL_FULL_TEXT,
DHST.SQL_ID,
ROUND (X.ELAPSED_TIME / 1000000 / X.EXECUTIONS_DELTA, 3) AVG_ELAPSED_TIME_SEC,
ROUND (X.CPU_TIME / 1000000 / X.EXECUTIONS_DELTA, 3) AVG_CPU_TIME_SEC,
ROUND (X.BUFFER_GETS_DELTA / X.EXECUTIONS_DELTA, 3) AVG_BUFFER_GETS,
ROUND (X.PARSE_CALLS_DELTA/X.EXECUTIONS_DELTA*100, 3) EXEC_PARSE_RATE,
ROUND (X.PHYSICAL_READ_BYTES_DELTA/1024/X.EXECUTIONS_DELTA, 3) AVG_PHYSICAL_READ_KB,
ROUND (X.DISK_READS_DELTA / X.EXECUTIONS_DELTA, 3) AVG_DISK_READS,
EXECUTIONS_DELTA AS EXEC_TOTAL_NUM,DHST.COMMAND_TYPE,N.COMMAND_NAME
FROM DBA_HIST_SQLTEXT DHST, DBA_HIST_SQLCOMMAND_NAME N,
(
SELECT DHSS.SQL_ID SQL_ID,
SUM (DHSS.CPU_TIME_DELTA) CPU_TIME,
SUM (DHSS.ELAPSED_TIME_DELTA) ELAPSED_TIME,
CASE SUM (DHSS.EXECUTIONS_DELTA) WHEN 0 THEN 1 ELSE SUM (DHSS.EXECUTIONS_DELTA) END AS EXECUTIONS_DELTA,
CASE SUM (DHSS.SORTS_DELTA) WHEN 0 THEN 1 ELSE SUM (DHSS.SORTS_DELTA) END AS SORTS_DELTA,
CASE SUM (DHSS.FETCHES_DELTA) WHEN 0 THEN 1 ELSE SUM (DHSS.FETCHES_DELTA) END AS FETCHES_DELTA,
CASE SUM (DHSS.PARSE_CALLS_DELTA) WHEN 0 THEN 1 ELSE SUM (DHSS.PARSE_CALLS_DELTA) END AS PARSE_CALLS_DELTA,
CASE SUM (DHSS.DISK_READS_DELTA) WHEN 0 THEN 1 ELSE SUM (DHSS.DISK_READS_DELTA) END AS DISK_READS_DELTA,
CASE SUM (DHSS.BUFFER_GETS_DELTA) WHEN 0 THEN 1 ELSE SUM (DHSS.BUFFER_GETS_DELTA) END AS BUFFER_GETS_DELTA,
CASE SUM (DHSS.IOWAIT_DELTA) WHEN 0 THEN 1 ELSE SUM (DHSS.IOWAIT_DELTA) END AS IOWAIT_DELTA,
CASE SUM (DHSS.PHYSICAL_READ_BYTES_DELTA) WHEN 0 THEN 1 ELSE SUM (DHSS.PHYSICAL_READ_BYTES_DELTA) END AS PHYSICAL_READ_BYTES_DELTA
FROM DBA_HIST_SQLSTAT DHSS
WHERE DHSS.SNAP_ID IN
(SELECT SNAP_ID
FROM DBA_HIST_SNAPSHOT
WHERE BEGIN_INTERVAL_TIME >= TRUNC(SYSDATE) - 30
AND END_INTERVAL_TIME <= SYSDATE) -- upper bound on the snapshot window is an assumed value
AND DHSS.PARSING_SCHEMA_NAME = UPPER('SHARK')
GROUP BY DHSS.SQL_ID
) X
WHERE X.SQL_ID = DHST.SQL_ID
AND DHST.COMMAND_TYPE = N.COMMAND_TYPE
)
SELECT * FROM
(
SELECT SQL_FULL_TEXT,SQL_ID,EXEC_TOTAL_NUM, AVG_DISK_READS AS VALUE_S, 'AVG_DISK_READS' AS VALUES_TYPE
FROM BASTABLE WHERE COMMAND_TYPE <> 47 AND SQL_FULL_TEXT NOT LIKE '/* SQL A%' ORDER BY AVG_DISK_READS DESC ) WHERE ROWNUM <= 10 -- TOP-N limit of 10 rows per metric is an assumed value
UNION ALL
SELECT * FROM
(
SELECT SQL_FULL_TEXT,SQL_ID,EXEC_TOTAL_NUM, AVG_ELAPSED_TIME_SEC AS VALUE_S, 'AVG_ELAPSED_TIME_SEC' AS VALUES_TYPE
FROM BASTABLE WHERE COMMAND_TYPE <> 47 AND SQL_FULL_TEXT NOT LIKE '/* SQL A%' ORDER BY AVG_ELAPSED_TIME_SEC DESC ) WHERE ROWNUM <= 10
UNION ALL
SELECT * FROM
(
SELECT SQL_FULL_TEXT,SQL_ID,EXEC_TOTAL_NUM, AVG_CPU_TIME_SEC AS VALUE_S, 'AVG_CPU_TIME_SEC' AS VALUES_TYPE
FROM BASTABLE WHERE COMMAND_TYPE <> 47 AND SQL_FULL_TEXT NOT LIKE '/* SQL A%' ORDER BY AVG_CPU_TIME_SEC DESC ) WHERE ROWNUM <= 10
UNION ALL
SELECT * FROM
(
SELECT SQL_FULL_TEXT,SQL_ID,EXEC_TOTAL_NUM, AVG_BUFFER_GETS AS VALUE_S, 'AVG_BUFFER_GETS' AS VALUES_TYPE
FROM BASTABLE WHERE COMMAND_TYPE <> 47 AND SQL_FULL_TEXT NOT LIKE '/* SQL A%' ORDER BY AVG_BUFFER_GETS DESC ) WHERE ROWNUM <= 10
UNION ALL
SELECT * FROM
(
SELECT SQL_FULL_TEXT,SQL_ID,EXEC_TOTAL_NUM, EXEC_PARSE_RATE AS VALUE_S, 'EXEC_PARSE_RATE' AS VALUES_TYPE
FROM BASTABLE WHERE COMMAND_TYPE <> 47 AND SQL_FULL_TEXT NOT LIKE '/* SQL A%' ORDER BY EXEC_PARSE_RATE DESC ) WHERE ROWNUM <= 10
UNION ALL
SELECT * FROM
(
SELECT SQL_FULL_TEXT,SQL_ID,EXEC_TOTAL_NUM, AVG_PHYSICAL_READ_KB AS VALUE_S, 'AVG_PHYSICAL_READ_KB' AS VALUES_TYPE
FROM BASTABLE WHERE COMMAND_TYPE <> 47 ORDER BY AVG_PHYSICAL_READ_KB DESC ) WHERE ROWNUM <= 10
UNION ALL
SELECT * FROM
(
SELECT SQL_FULL_TEXT,SQL_ID,EXEC_TOTAL_NUM, EXEC_TOTAL_NUM AS VALUE_S, 'EXEC_TOTAL_NUM' AS VALUES_TYPE
FROM BASTABLE WHERE COMMAND_TYPE <> 47 AND SQL_FULL_TEXT NOT LIKE '/* SQL A%' ORDER BY EXEC_TOTAL_NUM DESC ) WHERE ROWNUM <= 10
UNION ALL
SELECT * FROM
(
SELECT SQL_FULL_TEXT,SQL_ID,EXEC_TOTAL_NUM, AVG_ELAPSED_TIME_SEC AS VALUE_S, 'PROCEDURES_EXEC_TIME' AS VALUES_TYPE
FROM BASTABLE WHERE COMMAND_TYPE = 47 AND SQL_FULL_TEXT NOT LIKE '/* SQL A%' ORDER BY AVG_ELAPSED_TIME_SEC DESC ) WHERE ROWNUM <= 10;
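
The query above can only see what AWR has retained, so it is worth confirming that snapshot retention actually covers 30 days before trusting a 30-day lookback. A minimal sketch; the retention and interval values passed to DBMS_WORKLOAD_REPOSITORY are only examples:

-- Current AWR snapshot interval and retention
SELECT SNAP_INTERVAL, RETENTION FROM DBA_HIST_WR_CONTROL;

-- Extend retention to 30 days (43200 minutes) with hourly snapshots -- example values only
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 43200, interval => 60);

For any SQL_ID returned by the TOP query, the historical execution plan can then be pulled from the same AWR data (the &sql_id substitution variable is a placeholder):

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&sql_id'));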
