While NoSQL databases have definitely made their mark and have an important role in applications, there's still a place for RDBMSes. The key advantage of an RDBMS is that, with a well-normalized schema, any arbitrary query is possible, and instead of optimizing the query, you optimize the database itself to hit your performance goals: indexes, statistics, materialized views, and so on.
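In Oracle terms, that database-side tuning looks roughly like this. The table, column, and object names here are invented purely for illustration:
create index orders_moddate_idx on orders (moddate);

-- keep optimizer statistics fresh so the planner estimates row counts accurately
begin
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'ORDERS');
end;
/

-- precompute an expensive aggregate as a materialized view
create materialized view orders_daily_counts as
select trunc(moddate) as order_day, count(*) as orders_per_day
from orders
group by trunc(moddate);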
The reality, of course, is wildly different. While the execution plan used by the database shouldn't depend on how we write the query, it frequently does; managing statistics and indexes is surprisingly hard; and when performance problems crop up, without the right monitoring it can be difficult to track down exactly which query is causing the problem.
Which brings us to this query, which TJ found while analyzing a performance problem.
select Min(the.moddate) "ModifiedDate"
From T_91CDDC57 what , T_91CDDC57 the , T_91CDDC57 f
where f.rdate > sysdate-1095;
First, let's just congratulate whoever named the table T_91CDDC57. I assume that's generated, and presumably so was this query. There's clearly a bug: there's no reason to have the same table in the FROM clause three times when we just want to find the earliest moddate.
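Assuming the duplication really is a bug, the intent was presumably a single-table query along these lines (a guess; the filter and the aggregate may not even have been meant to apply to the same rows):
select Min(moddate) "ModifiedDate"
from T_91CDDC57
where rdate > sysdate-1095;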
And that's the problem. T_91CDDC57 isn't a particularly large table. It's no pipsqueak: at 4.5M rows and 34MB of data, it's certainly got some heft, but it's no giant, either. But that's 4.5M rows which have to be joined to 4.5M rows with no join condition, and then the result gets joined to 4.5M rows again, still with no join condition.
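For reference, a plan like the one below can be generated with Oracle's EXPLAIN PLAN and DBMS_XPLAN, along these lines (a sketch, not necessarily the exact commands TJ used):
explain plan for
select Min(the.moddate) "ModifiedDate"
From T_91CDDC57 what , T_91CDDC57 the , T_91CDDC57 f
where f.rdate > sysdate-1095;

select * from table(DBMS_XPLAN.DISPLAY);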
Here's the explain plan of how this query executes:
--------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                     | Rows  | Bytes | Cost (%CPU)|
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                          |     1 |    16 |   19P   (1)|
|   1 |  SORT AGGREGATE            |                          |     1 |    16 |            |
|   2 |   MERGE JOIN CARTESIAN     |                          |   18E |   15E |   19P   (1)|
|   3 |    MERGE JOIN CARTESIAN    |                          | 4328G |   31T | 5321M   (1)|
|*  4 |     TABLE ACCESS FULL      | T_91CDDC57               |  959K | 7499K | 18967   (2)|
|   5 |     BUFFER SORT            |                          | 4509K |       | 5321M   (1)|
|   6 |      INDEX FAST FULL SCAN  | T_91CDDC57_TYPE_INDEX    | 4509K |       |  5544   (1)|
|   7 |    BUFFER SORT             |                          | 4509K |   34M |   19P   (1)|
|   8 |     INDEX FAST FULL SCAN   | T_91CDDC57_MODDATE_INDEX | 4509K |   34M |  4410   (1)|
--------------------------------------------------------------------------------------------
A few notable entries here. Line 4 does a TABLE ACCESS FULL. This is the f instance of our table, and you can see that it pulls in just 959K rows thanks to our where clause (rdate within the last 1,095 days, roughly three years). On line 8, you can see that it scans the T_91CDDC57_MODDATE_INDEX: it's using that index to sort so we can find the Min(the.moddate). You can also see that it touches 4509K rows. Line 6, also at 4509K rows, is our what access in the query.
Once we've accessed our three piles of data, we have to connect them. Since there's no ON or WHERE clause that links the tables, this connects each row from each table with each row in every other table. And you can see on line 3, where we join the first pair of tables, we suddenly have 4.3T rows and a total of 31 terabytes of data. And when we join in the third table on line 2, that bloats to 18,000,000,000,000,000,000 rows and 15 exabytes of data.
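Those estimates line up with simple multiplication of the plan's own rounded figures, so the products are approximate:
select 959000 * 4509000            as rows_after_first_join,   -- roughly 4.3 trillion; plan line 3 says 4328G
       959000 * 4509000 * 4509000  as rows_after_second_join   -- roughly 1.9 * 10^19; shown as 18E on line 2, seemingly the display maximum
from dual;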
TJ says:
Processing over 15 exabytes of information probably may have something to do with the performance…
Yeah, probably.