Profiling Your Applications with Eclipse Callisto
by John Ferguson Smart
The latest release of Eclipse (Eclipse 3.2) ships with Callisto, a rich set of optional plugins. Callisto includes a powerful profiling tool: the Eclipse Test & Performance Tools Platform, or TPTP. TPTP provides a comprehensive suite of open source performance-testing and profiling tools, including integrated application monitoring, testing, tracing, and profiling functionality, as well as static-code analysis tools. Profiling tools are invaluable aids for localizing and identifying performance issues in all sorts of Java applications. In this article, we will look at how you can use TPTP to guarantee high-quality, high-performance code, even during unit and integration testing.
The easiest way to install TPTP is to use the Remote Update site (see Figure 1). Open the Remote Update window (Help -> Software Updates -> Find and Install), and select the Callisto Discovery Site. Eclipse will propose the set of Callisto plugins. The TPTP tools are listed under "Testing and Performance." The easiest option, albeit the most time-consuming, is just to install all the proposed plugins. Even if you don't install the entire Callisto tool set, you will still need to install some other components needed by TPTP, such as "Charting and Reporting," "Enabling Features," and "Data Tool Performance."
Figure 1. Installing TPTP from the remote site
Profiling a Java Application
The Test & Performance Tools Platform is basically a set of profiling tools. Profiling an application typically involves observing how the application copes under stress. A common way of doing this is to run a set of load tests on a deployed application and use profiling tools to record the application's behavior. You can then study the results to investigate any performance issues. This is often done at the end of the project, once the application is almost ready for production.
TPTP is well suited to this type of task. A typical use case is to run load tests using a tool such as JMeter, and record and analyze the performance statistics using the TPTP tools.
However, this is not the only way you can profile an application with TPTP. As a rule, the earlier you test, the fewer problems you have later. TPTP lets you profile your code in a wide range of contexts, including JUnit test cases, Java applications, and web applications, and it is well integrated into the Eclipse IDE. So there is no reason not to start preliminary performance testing and profiling early on.
TPTP lets you examine several aspects of your application's behavior, including memory usage (how many objects are being created, and how big are they?), execution statistics (where did the application spend most of its time?), and test coverage (how much of the code was actually executed during the tests?). Each of these can provide invaluable information about your application's performance.
Despite all belief to the contrary, memory leaks can and do exist in Java. Creating (and keeping) unnecessary objects increases demands on memory and makes the garbage collector work harder, neither of which is good for your application's performance. And if your application runs on a server with long periods of continuous uptime, accumulated memory leaks can eventually cause the application to crash or the server to go down. These are all good reasons to keep a close eye on your application's memory usage.
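To make this concrete, here is a minimal (hypothetical) sketch of the kind of leak TPTP's memory analysis can reveal: a static cache that grows on every request and is never cleared, so its entries remain reachable and can never be garbage collected. The class and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example of a Java "memory leak": a cache that is filled
// but never emptied. Every object added here stays reachable via the
// static field, so the garbage collector can never reclaim it.
public class LeakyCache {
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    public static void handleRequest(int requestId) {
        // Each request adds 1 KB to the static list and never removes it.
        CACHE.add(new byte[1024]);
    }

    public static int cachedEntries() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10000; i++) {
            handleRequest(i);
        }
        // After 10,000 requests, roughly 10 MB is still reachable
        // and would show up in TPTP's memory statistics.
        System.out.println("Entries retained: " + cachedEntries());
    }
}
```

On a long-running server, a structure like this grows without bound; in a profiler it shows up as a steadily increasing count of live objects that are never collected.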
According to the 80-20 rule of thumb, 80% of performance issues will occur in 20% of the code. Or, in other words, you can obtain substantial performance improvements with relatively little effort by simply concentrating on the areas of the application that are executed most often. This is where execution statistics can be useful.
While it's at it, TPTP also gives you some basic test-coverage data. Although TPTP's coverage data is not as complete as that of a dedicated tool such as Cobertura or Clover, you can still use it to get a quick idea of which methods are actually being exercised by your performance tests.
The sort of testing I'm talking about in this article is not optimization as such. Optimization involves fine-tuning application performance by using techniques such as caching. It is a highly technical activity, and is best done at the very end of the project.
The preliminary performance testing and profiling discussed here simply involves making sure that the application performs correctly from the start, and that there are no coding errors or poor coding practices that will penalize performance later. Indeed, fixing memory leaks and avoiding unnecessary object creation is not optimization; it's debugging, and, as such, should be done as early as possible.
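A classic instance of unnecessary object creation, and a hedged sketch of the kind of coding practice meant here, is string concatenation in a loop: each iteration creates new intermediate objects, whereas reusing a single buffer produces the same result with far less allocation churn. Under a profiler's memory statistics, the difference is immediately visible.

```java
// Illustration of unnecessary object creation (a sketch, not taken
// from the article). naive() creates new intermediate String objects
// on every iteration; buffered() reuses one StringBuilder.
public class StringConcatenation {

    // Creates many short-lived intermediate objects.
    static String naive(int n) {
        String result = "";
        for (int i = 0; i < n; i++) {
            result = result + i + ",";
        }
        return result;
    }

    // Reuses a single buffer: same result, far fewer allocations.
    static String buffered(int n) {
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < n; i++) {
            result.append(i).append(',');
        }
        return result.toString();
    }

    public static void main(String[] args) {
        // Both approaches produce identical output.
        System.out.println(naive(1000).equals(buffered(1000)));
    }
}
```

Fixing this sort of code early costs little; finding it late, once it is buried in a hot path, costs much more.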
Let's start by profiling a single class through some unit tests. You can either profile your normal unit or integration tests, or write more specialized performance-oriented tests. As a rule, you should try to profile code that is as close as possible to the production code. Many people use mock objects to replace DAO objects in unit tests, which can be a powerful technique for speeding up the development life cycle. If you use this type of approach, by all means run your profiling with these tests; it can reveal useful information about memory usage and test coverage. However, such tests are of limited value for performance testing: in a database-backed application, performance is often dominated by the database, so any serious performance testing should be done in that context. In short, don't forget to profile the integration tests that run against a real database.
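A performance-oriented test of the kind described above can be as simple as exercising the code under profile in a tight loop, so that the profiler's execution statistics have enough samples to be meaningful. Here is a minimal plain-Java sketch; the class under test (TaxCalculator) and its tax rates are invented for illustration, and in practice you would run a test like this under the TPTP profiler rather than timing it by hand.

```java
// Hypothetical performance-oriented test: repeatedly invoke the method
// you want to profile. Under TPTP, this loop dominates the execution
// statistics, making hot spots in the called code easy to spot.
public class TaxCalculatorPerfTest {

    // Stand-in for the production class you would really profile.
    // The brackets and rates are invented for this example.
    static class TaxCalculator {
        double calculateTax(double income) {
            double tax = 0;
            if (income > 38000) {
                tax += (income - 38000) * 0.39;
                income = 38000;
            }
            tax += income * 0.195;
            return tax;
        }
    }

    public static void main(String[] args) {
        TaxCalculator calculator = new TaxCalculator();
        long start = System.currentTimeMillis();
        double total = 0;
        // Exercise the method under test many times.
        for (int i = 0; i < 100000; i++) {
            total += calculator.calculateTax(50000);
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("100000 calculations in " + elapsed + " ms");
    }
}
```

The same loop works equally well inside a JUnit test method; the point is simply to give the profiler a sustained, repeatable workload.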