Posted by: Ashish Kumar | July 10, 2009

Performance Tuning of Siebel Application

Although the Siebel application architecture is efficient by design, there are instances where custom configuration leads to potential performance pitfalls. Such performance issues can translate into screens that load slowly or database queries that fail to return the sought-after results. Other times, users must click through a puzzling series of screens – rather than just one or two – to get to the information they need.

In addition, these drawbacks are compounded in environments with large databases and data distributed widely across servers.

Causes of Poor Performance

The problems that technical architects and engineering consultants deal with while tuning a Siebel installation fall into two primary areas:

  • Business performance – organizational and functional problems, such as having too many screens, that lead to end-user issues.
  • Application/system performance – underlying configuration issues that adversely impact application performance.

Both of these problems can be addressed by tuning the Siebel solution.

 Performance Tuning 

It is often assumed that solving a performance problem requires some arcane talent. In fact, there is a well-defined process for approaching a performance problem. The figure below shows the methodology for attacking a performance problem, or simply for tuning the system for best performance.

Following this tuning methodology takes the mystery out of performance tuning. Several books and whitepapers have been written on the subject, so it is not covered in detail here.

The scope of this whitepaper is to list the common causes of poor performance and effective ways to overcome them through database tuning.

[Figure: Performance tuning methodology – Setting of Performance Baseline, Monitoring Application Performance against Baseline, Identification of Performance Bottleneck, Solution for Improvement, Application Tuning, System Testing, and Validating the Performance Improvement.]

  1. Optimal Database Configuration

In an ideal scenario, a database that is well planned to begin with requires little tuning or reorganizing at runtime, and a plethora of resources is available to facilitate the creation of a high-performance Oracle database. In practice, however, tuning engineers often attempt to modify a badly designed database consisting of thousands of tables and indexes piece by piece. This is not only time consuming but prone to mistakes, and it often ends in rebuilding the whole database from scratch. Here is an alternative approach to tuning a pre-existing, prepackaged database.

• Measure the exact space used by each object in the schema. The dbms_space package reports the space actually used by an index or a table, whereas dba_free_space only shows how much of the total allocated space is free, which is always more. Run the benchmark test and measure the space used again; the difference is an accurate report of how much each table and index grows during the test. Using this data, all of the tables can be right-sized, i.e., capacity-planned for growth. It also identifies the hot tables used by the test, so tuning effort can concentrate on those.
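The measurement step can be sketched in PL/SQL. dbms_space.unused_space reports total versus unused space for a single segment; the SIEBEL owner and S_CONTACT table below are illustrative placeholders, not part of the original text. Run the block before and after the benchmark and compare the results.

```sql
-- Sketch: space actually used by one segment (owner/table names assumed).
-- Run before and after the benchmark; the difference is the growth.
SET SERVEROUTPUT ON
DECLARE
  l_total_blocks  NUMBER;  l_total_bytes  NUMBER;
  l_unused_blocks NUMBER;  l_unused_bytes NUMBER;
  l_file_id NUMBER;  l_block_id NUMBER;  l_last_block NUMBER;
BEGIN
  dbms_space.unused_space(
    segment_owner             => 'SIEBEL',
    segment_name              => 'S_CONTACT',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  dbms_output.put_line('bytes used: ' || (l_total_bytes - l_unused_bytes));
END;
/
```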


• Create a new database with multiple index and data tablespaces. The idea is to place all tables with equal extent sizes in their own tablespace. Keeping data and index objects in separate tablespaces reduces contention and fragmentation, and also makes monitoring easier. Keeping tables with equal extent sizes together reduces fragmentation because old and new extent allocations are always the same size within a given tablespace, leaving no room for odd-sized empty pockets in between. This leads to compact data placement, which reduces the number of I/Os.


• Build a script to create all of the tables and indexes. The script should create each table in its appropriate tablespace with appropriate parameters such as freelists, freelist_groups, pctfree, and pctused. Use this script to place all of the tables in their tablespaces and then import the data. The result is a clean, defragmented, right-sized, optimized database.
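Putting the steps together, the creation script might look like the following sketch. The tablespace and object names, file paths, sizes, and storage values are illustrative assumptions, not recommendations; the point is one tablespace per extent size, data and indexes kept apart, and explicit storage parameters on every object.

```sql
-- One tablespace per extent size; uniform extents mean freed and new
-- allocations always match, so no odd-sized holes can form.
CREATE TABLESPACE sbl_data_1m
  DATAFILE '/u02/oradata/sbl/sbl_data_1m_01.dbf' SIZE 2000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

CREATE TABLESPACE sbl_idx_1m
  DATAFILE '/u03/oradata/sbl/sbl_idx_1m_01.dbf' SIZE 1000M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Each table goes into the tablespace matching its extent size, with
-- explicit block-level parameters; the data is imported afterwards.
CREATE TABLE s_example (
  row_id  VARCHAR2(15) NOT NULL,
  name    VARCHAR2(100)
)
TABLESPACE sbl_data_1m
PCTFREE 10 PCTUSED 80
STORAGE (FREELISTS 4 FREELIST GROUPS 2);

CREATE UNIQUE INDEX s_example_p1 ON s_example (row_id)
  TABLESPACE sbl_idx_1m
  PCTFREE 5
  STORAGE (FREELISTS 4);
```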

  2. Oracle Parameter Tuning

The following are the key Oracle init.ora tunables to consider.


The SGA-related parameters determine the size of Oracle's SGA (System Global Area). Database performance is highly dependent on available memory: in general, more memory increases caching, which reduces physical I/O to the disks. The SGA is a memory region that caches database tables and other data for processing. With 32-bit Oracle software on a 64-bit Solaris OE, the SGA is limited to 4 GB.

Oracle comes in two basic architectures: 64-bit and 32-bit. The number of address bits determines the maximum size of the virtual address space: 

32 bits: 2^32 = 4 GB maximum

64 bits: 2^64 = 16,777,216 TB maximum

For the Siebel 10,000-concurrent-user PSPP (Platform Sizing and Performance Program) workload, a 4 GB SGA is sufficient, so the 32-bit Oracle server version can be used.





Setting db_block_max_dirty_target to 0 from its default value of 4294967294 disables the writing of buffers for incremental checkpointing purposes.

Changing db_writer_processes from its default value of 1 to 4 starts four DBWR processes.

Together these parameters drastically reduce wait times in the database, thereby improving overall Siebel throughput.



The default value of db_block_size is 2K; an 8K value is optimal for Siebel.


DB_BLOCK_LRU_LATCHES specifies the upper bound on the number of LRU latch sets. Set this parameter to the desired number of LRU latch sets; Oracle decides whether to use that value or reduce it based on a number of internal checks. If the parameter is not set, Oracle calculates a value based on the number of CPUs, and the calculated value is usually adequate. Increase it only if the miss rate in V$LATCH is higher than 3%. For a Siebel 10,000-user run on a 4-CPU machine, a setting of 48 is optimal.
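The 3% check can be made directly against V$LATCH. The latch name used below, 'cache buffers lru chain', is the standard name for the LRU latch, but confirm it on your Oracle release:

```sql
-- LRU latch miss ratio; a value above ~3% suggests raising
-- db_block_lru_latches.
SELECT name, gets, misses,
       ROUND(100 * misses / DECODE(gets, 0, NULL, gets), 2) AS miss_pct
FROM   v$latch
WHERE  name = 'cache buffers lru chain';
```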


Setting distributed_transactions to 0 disables the Oracle background process called RECO. Siebel does not use distributed transactions, so CPU and bus cycles can be reclaimed by running one less Oracle background process. Its default value is 99.


Siebel does not use replication, so it is safe to turn replication_dependency_tracking off by setting it to FALSE.


Setting transaction_auditing to FALSE writes less redo for every commit. The nature of Siebel OLTP is plenty of small transactions with frequent commits, so this setting buys back CPU and bus cycles.
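Collected together, the init.ora changes discussed in this section would look roughly as follows. The parameter names for the entries whose headings were lost above (block size, RECO, replication, redo per commit) are my best reading of the text, not confirmed from the original; verify each against your Oracle release before applying.

```
db_block_size                   = 8192    # default 2K; 8K optimal for Siebel
db_writer_processes             = 4       # default 1
db_block_max_dirty_target       = 0       # default 4294967294
db_block_lru_latches            = 48      # for a 10,000-user run on 4 CPUs
distributed_transactions        = 0       # disables RECO; Siebel needs none
replication_dependency_tracking = FALSE   # Siebel does not use replication
transaction_auditing            = FALSE   # less redo written per commit
```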

 3. SQL Query Tuning 

If a performance problem is detected in testing or production, one needs to analyze the SQL statements spooled by the Siebel application. To generate a SQL trace file, add the /s parameter to the command line in the properties of the Start menu or desktop icon from which the Siebel application is invoked. The command line has the following syntax.

C:\Siebel\webclient\bin\twsiebel.exe /c config_file /s trace_file 

The /s trace_file expression in the command line instructs the system to generate an SQL trace file with the specified filename and path. This trace file records the text of all of the SQL queries issued by the Siebel application, and identifies the amount of time spent processing each one. The trace file may be opened in a text editor for examination after the session has ended. 

During the analysis of the SQL queries, keep in mind the following points. 

• Use of outer joins instead of inner joins. Inner joins may be used for joined tables, with a resulting saving in overhead, provided you are guaranteed that all foreign key references are valid. For example, when the join is from a detail business component to its master, you are guaranteed the existence of the master. You can configure the join as an inner join by setting the Outer Join Flag property of the Join object to FALSE. This improves the performance of queries that use the join.
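The difference in the generated SQL looks roughly like this, using illustrative table and column names and Oracle's pre-ANSI join syntax (not actual Siebel-generated SQL):

```sql
-- Outer Join Flag = TRUE: the (+) outer join returns the detail row even
-- when its foreign key resolves to no master row.
SELECT c.row_id, a.name
FROM   s_contact c, s_org_ext a
WHERE  c.pr_dept_ou_id = a.row_id (+);

-- Outer Join Flag = FALSE: a plain inner join, which is cheaper but valid
-- only when every foreign key is guaranteed to resolve.
SELECT c.row_id, a.name
FROM   s_contact c, s_org_ext a
WHERE  c.pr_dept_ou_id = a.row_id;
```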


• Cascade Delete set in a many-to-many link. The Cascade Delete property in a Link object definition must be correctly configured for use in a many-to-many link, or the first insertion or deletion in the association applet will be abnormally slow. A link object definition used in a many-to-many relationship is one that contains a non-NULL Inter Table setting. The Cascade Delete property in such a link must be set to NONE.


• Use of EXISTS, Max, or Count functions in a search. Using the EXISTS, Max, or Count functions in a search specification causes sub-queries within the main query, thus slowing performance.
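For illustration, an EXISTS search specification ends up as a correlated subquery nested inside the main query, roughly of this shape (table and column names assumed, not actual Siebel-generated SQL):

```sql
-- The inner query is evaluated against each outer row, which is why
-- EXISTS/Max/Count in a search spec slows the main query down.
SELECT o.row_id, o.name
FROM   s_opty o
WHERE  EXISTS (SELECT 1
               FROM   s_revn r
               WHERE  r.opty_id = o.row_id);
```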


• Number of joins, extension tables, and primary ID fields in a business component. Joins degrade performance by causing an extra row retrieval in the joined table for each row retrieval in the main table. Extension tables and primary ID fields also use joins, although implied rather than explicitly defined, adding a row retrieval for each. The more joins, extension tables, and primary ID fields defined in a business component, the higher the number of row retrievals required in tables other than the main table, with corresponding performance degradation.


• Never put a sum([MVfield]) in a list column. Doing so requires a separate query for each record in the list, which is a significant performance issue.

  4. Other Performance Tuning

The following are some common performance hindrances, which should be avoided wherever possible.

• Large number of records returned. To limit the number of records returned for a business component, you can add a search specification to the business component or applet, or you can define a default predefined query on the view.

• Search specification on a non-RDBMS-supported calculated field. Calculated fields use functions that may or may not be supported by the underlying relational database system. If the RDBMS supports the function, it has efficient algorithms for performing the calculation and returns the calculated values with the result set. If the function is not supported, the Siebel application client may have to rescan the entire result set to perform the desired calculation, considerably increasing the time it takes to obtain the results of the query. The difference is that in the one case the calculations can take place before the results are returned, and in the other they have to be performed in memory on the client.


• Too many business components in a view. An excessive number of different business components used in applets in a view can slow down the display of data upon entry into that view, because each of the applets must be populated with data.


• Number of fields in a business component. There is no set limit on the number of fields in a business component or list columns in a list applet. However, a business component with too many active fields will have degraded performance, and in some database systems it is possible to generate a query that is too large to be processed.



Comment:

Hi Ashish,
Under the section "Cascade Delete set in a many-to-many link" there is the statement "first insertion or deletion in the association applet will be abnormally slow." Can you please explain why the first insertion would be abnormally slow?
