
SAP Performance 101: World War Z

I've been doing performance analysis on SAP systems for a while now. After all this time it's quite easy to do an initial check and quickly narrow down what could be causing a performance problem. In the case of custom code (also known as Z code), the issues are usually the same ones again and again.

This happens because a lot of programmers write their code without thinking about how it will perform later on, probably because of deadlines or simply because they don't know about program optimization. The result is users complaining about the performance of the system and a Basis consultant doing a performance analysis…

The problem

At the end of the month, users execute a transaction for the finance department. It is a Z transaction and its execution time is normally about 3 hours. This month the transaction had been running for more than 15 hours and still hadn't finished. This transaction is critical for getting important data for the monthly close.

Users state that the amount of data to be processed is the same as in a usual month. Developers state that the code has not changed recently; the last change was one year ago. The transaction is executed as a background job since it takes too long to complete. The system information is:

  • SAP ERP 6.00 EHP4.
  • Oracle 12.1.0.2.0.
  • Kernel 721 Patch 600.
  • Central Instance + DB on the same server, plus 2 more servers with Dialog Instances.

The investigation

I executed the transaction on the system and took a look at transaction SM50. The work process was stuck on the MKPF table, so the first thing I did was generate the statistics for this table with transaction DB20. In the meantime I took a look at the query that was being executed on the system and found this little monster.
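
For reference, the same check can be made directly on Oracle. A minimal sketch in SQL*Plus, assuming DBA access; apart from the MKPF table name, everything here is generic:

  -- Find what the active sessions (SAP work processes) are currently executing
  SELECT s.sid, s.program, q.sql_id, q.sql_text
    FROM v$session s
    JOIN v$sql q
      ON q.sql_id = s.sql_id
     AND q.child_number = s.sql_child_number
   WHERE s.status = 'ACTIVE'
     AND q.sql_text LIKE '%MKPF%';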

This kind of SQL code makes me cry. Something dies inside of me every time I see something like this. Anyway, I checked the tables that appear in the SQL query to see whether they had the proper indexes. They had some indexes, but they were not even close to the ones the query needed. I checked the execution plan for the query in transaction ST04:

Execution plan before solving the issue
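
The index check can also be done from the database side instead of SE11. A sketch, assuming the standard SAPSR3 schema owner (it depends on the installation); the table names are the ones involved in the query:

  -- List the existing indexes and their columns for the tables involved
  SELECT i.table_name, i.index_name, c.column_position, c.column_name
    FROM dba_indexes i
    JOIN dba_ind_columns c
      ON c.index_owner = i.owner
     AND c.index_name  = i.index_name
   WHERE i.owner = 'SAPSR3'
     AND i.table_name IN ('MKPF', 'MSEG', 'BSEG', 'MARA')
   ORDER BY i.table_name, i.index_name, c.column_position;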

The total cost was around 12,000 and the total CPU cost was about 32 million. At OS level, the filesystems containing the SAPDATA files were at about 100% utilization all the time:

iostat command on the database server
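
The same I/O pressure can be cross-checked from the database side in the wait statistics; a quick sketch:

  -- Cumulative waits on datafile reads since instance start (times in centiseconds)
  SELECT event, total_waits, time_waited, average_wait
    FROM v$system_event
   WHERE event LIKE 'db file%read'
   ORDER BY time_waited DESC;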

In transaction ST04 I also checked the logical reads for the query; the number was above 8 million and increasing. The ZAU index on the MSEG table had been created one week before, so I think the execution plan completely changed in the last executions because of it. I had found the problem, so the next step was to fix it!
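
The logical reads shown in ST04 come from the shared SQL area, so they can also be read straight from V$SQL. A sketch, filtering on part of the statement text:

  -- Logical reads (buffer gets) and disk reads per cached statement, worst first
  SELECT sql_id, executions, buffer_gets, disk_reads,
         ROUND(buffer_gets / GREATEST(executions, 1)) AS gets_per_exec
    FROM v$sql
   WHERE sql_text LIKE '%MKPF%'
   ORDER BY buffer_gets DESC;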

The… solution?

It was pretty clear that I had to optimize the query as much as I could, either by modifying the SQL code or by creating indexes and statistics. Unfortunately I didn't have enough time to optimize the SQL code, so I had to go with the second option. I checked the tables again and decided to create the indexes the query would need on the MKPF, BSEG and MARA tables. To create the indexes I used transactions SE11 and SE14; it's a pretty easy task, so I won't explain it here. Before transporting them to the QAS system I did a test execution of the report so I would have an estimate of the execution time before and after the index creation.
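
On an SAP system the index has to be defined in SE11 so the ABAP Dictionary knows about it, but at database level it ends up as a plain CREATE INDEX. A purely illustrative sketch: the index name follows the usual MKPF~Zxx convention, and the columns and tablespace are placeholders, not the actual fields from the query:

  -- Hypothetical secondary index on MKPF; the real columns depend on the query's WHERE clause
  CREATE INDEX "SAPSR3"."MKPF~Z01"
    ON "SAPSR3"."MKPF" (MANDT, BUDAT, BLART)
    TABLESPACE PSAPSR3;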

When you transport indexes and generate statistics on an SAP system, you have to keep some important points in mind:

  • The import of the transport request (TR) can take a long time depending on how many entries the table has. I recommend creating an individual TR for each index and transporting them one by one.
  • Delete the statistics for the table before generating new ones (see the sketch right after this list).
  • Generating statistics can cause locks at database level on that specific table, and it will take longer depending on the number of entries in the table. For critical tables such as Basis or FI tables, expect poor performance on the system and a lot of people complaining because their transactions or reports are not finishing.
  • The best approach is to perform these tasks outside of business hours.
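
For the statistics point above, the delete-then-regenerate sequence looks roughly like this on the Oracle side (on an SAP system it is normally driven through DB20 or brconnect; the SAPSR3 schema is an assumption):

  -- Drop the old optimizer statistics for the table first
  EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'SAPSR3', tabname => 'MKPF');

  -- Then gather fresh statistics, including the table's indexes
  EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPSR3', tabname => 'MKPF', cascade => TRUE);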

After transporting the indexes and generating the statistics I did 2 more executions of the program, because one of them ended with a dump. As you can see in the following screenshot, the execution time was 61,360 seconds before and 4,000 seconds after:

Job execution on the QAS system

The execution plan changed completely and the CPU cost was way lower than before:

Execution plan after solving the issue

Since the tests were good, I decided to transport the indexes to the PRD system and execute the transaction. It was a huge surprise to see that the execution time was the same as before. The execution plan hadn't changed at all on that system. I checked the execution plan by connecting directly to Oracle, and it was the same as before the indexes…
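
Checking the plan directly on Oracle means looking at the cursor that is actually cached in the shared pool, for example with DBMS_XPLAN. A sketch; the sql_id below is a placeholder that would come from the V$SQL query shown earlier:

  -- Show the execution plan of the cursor currently cached in the shared pool
  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0abc123def456', NULL, 'TYPICAL'));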

The solution!

I decided to delete the execution plan from Oracle so it would be created again when the query was executed. It didn't work at all, since the plan that was created was the same one as before. My last idea was to flush the shared pool at database level:
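
Roughly, the two steps look like this in SQL*Plus; the address and hash_value for the targeted purge are placeholders that would come from V$SQLAREA:

  -- Try to invalidate just the one cursor first (the gentler option)
  EXEC DBMS_SHARED_POOL.PURGE('00000000ABCDEF12, 1234567890', 'C');

  -- If the old plan keeps coming back, flush the whole shared pool
  -- (this drops ALL cached cursors, so expect extra hard parses right afterwards)
  ALTER SYSTEM FLUSH SHARED_POOL;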

It worked! Now when I executed the query, the execution plan was the same as on the QAS system. I checked with my coworkers and we decided to reorganize the BSEG table because of its fragmentation. After this we executed the transaction and it finished in a couple of hours.
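
At the Oracle level a reorganization is essentially a segment move plus index rebuilds; on a live SAP system this is usually done online with brspace rather than by hand, and for a cluster table the physical segment is the cluster, not the logical table. A rough sketch with assumed schema, tablespace and object names:

  -- Rebuild the table segment to remove the fragmentation
  ALTER TABLE "SAPSR3"."BSEG" MOVE TABLESPACE PSAPSR3;

  -- The move invalidates the table's indexes, so rebuild them afterwards
  ALTER INDEX "SAPSR3"."BSEG~0" REBUILD TABLESPACE PSAPSR3;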

I have seen this kind of behavior on SAP systems in the past, where the execution plan for a query differs between the database and the SAP system. I also remember a similar issue when I upgraded an Oracle database from 10g to 11g: the execution plan for a search in an FI transaction was completely different and the performance was horrible…

Conclusions

Developers of the world: please spend a few minutes optimizing the queries you write and creating proper indexes. You will save Basis consultants tons of time. Users will live longer and won't complain about performance issues. The world will be a happier and better place.

I will write a post about code performance and tuning in the future; in the meantime you can read about it in the ABAP Performance and Tuning Wiki.
