This document was originally written for Adaptive Server Enterprise version 11.0.x and contains some information (particularly the discussion of deadlocks) that is largely no longer applicable to version 11.9.x, which provides row-level locking. The remaining information can still be useful to enterprise solutions developers writing database driver code for a Sybase environment.
 

Table of Contents

1. Overview
2. General
3. Server

  • 3.1 Deadlocking Relief
  • 3.2 Other Locking Issues
  • 3.3 New Adaptive Server Enterprise 11.x Features to Boost Performance
  • 3.4 Server Helpful Hints
  • 3.5 Trace Flags with Which to Boot the Server

4. Client

  • 4.1 TCP_NODELAY
  • 4.2 Setting Packet Size
  • 4.3 CS_NOAPI_CHK
  • 4.4 Using the Right Library
  • 4.5 ct_describe() and ct_param()
  • 4.6 Array Binding
  • 4.7 How to Choose the Right CT-Lib Command

    1. Overview

This guide is intended as a supplement to the set of publications available with the release of Sybase Adaptive Server Enterprise 11.x (ASE 11.x). Its purpose is to provide guidelines for developing and tuning applications that run on Adaptive Server Enterprise 11.x. This guide by no means replaces the need to attend the Performance & Tuning course given by Sybase Education; that course is a prerequisite to any serious performance and tuning analysis. This guide is meant to complement the Server publications and course material, and in many places great care has been taken not to be redundant with those materials.

A new document in the Adaptive Server Enterprise 11.x set is the Performance and Tuning Guide (P&T Guide). It is expected that the reader of this guide has already read that document and is familiar with its concepts. This guide focuses on some of the specific areas that developers of third-party applications run across.

We also strongly advise you to visit the Sybase World Wide Web pages periodically for the latest information, whitepapers, performance tips, sample programs, etc. Specifically, look at:
     

    http://www.sybase.com

    http://www.sybase.com/products/databases/sql_server/information/performance.html

    http://www.sybase.com/products/samples/
     

Our goal is to help you achieve what you want of your application: to be fast (as measured by response time and throughput) and to be able to support more concurrent users.

Being a developer of applications that will be deployed at your customers' sites presents you with three challenges that the typical end user is spared (not everyone will face all three):

  • writing your application to support more than one RDBMS, using a database-specific driver;

This document will address the first two items, which deal with performance issues.

    We realize that many of you have not chosen Sybase as your primary platform and accordingly have written your application with another architecture in mind. This can often lead to inefficient practices on Sybase which we will try to point out and present feasible solutions.

    We will concentrate on two areas, namely concurrency (locking and deadlocking issues) and performance (once we get it to run, making it run fast).

The table below summarizes the topics/solutions that are discussed in more depth in subsequent sections. For each, we include a rating (High/Medium/Low) of the benefit that might be gained from implementation and of the coding effort you might expect.
     
     
     
     

Topic                                        Section   Benefit   Coding Effort
coding to avoid 'true' deadlocks             3.1       H         H
retry-on-deadlock                            3.1       M         H
update only what needs to be updated         3.1       H         M
strive for updates in place                  3.1       H         L
tracking the cause of deadlocks              3.1       M         L
hold no locks during user 'think-time'       3.2       H         L
lock at the minimum isolation level          3.2       M         L
max-records-per-page: spreading the data     3.2       H         L
short transactions                           3.2       M         M
avoid table locks                            3.2       L         L
named caches                                 3.3       M         M
large I/Os                                   3.3       L         M
using data slices                            3.3       M         L
TCP_NODELAY                                  4.1       H         L
setting packet size                          4.2       H         L
CS_NOAPI_CHK                                 4.3       M         L
using the right library                      4.4       H         L
ct_describe() and ct_param()                 4.5       M         L
array binding                                4.6       M         M

     

    2. General

The architecture of many applications written by enterprise solutions developers for Sybase today follows an approach where the client and the Adaptive Server Enterprise reside on the same machine.

Adaptive Server Enterprise, however, was not designed to share resources with an application server.
     
     

Ideally, whether or not your application resides on the same machine as the server, you want to minimize the traffic between the application and Adaptive Server Enterprise. This means:

  • doing the work as close to the data as possible. Let the server do the work through stored procedures, triggers, joins, and other set-oriented processing wherever possible; avoid processing one row at a time;

  • sending multiple commands in a single batch instead of sending each command independently (see the sketch after this list);

  • filtering the result set before it is returned, eliminating unnecessary columns and rows.

You also want to build the application as optimally as possible. Keep the executable as small as possible, compile with optimization flags, and link only those libraries that are necessary.
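To illustrate the batching point, here is a minimal Client-Library sketch that sends two statements in one language batch, so the request costs a single network round trip instead of two. The table names (stock, audit) are hypothetical, and it assumes a CS_COMMAND already allocated on an open connection; connection setup and error handling are omitted for brevity.

    #include <ctpublic.h>

    /* Send two statements as one language batch: one network round
     * trip for the request instead of two.  'cmd' is assumed to be a
     * CS_COMMAND allocated on an open connection. */
    CS_RETCODE send_batch(CS_COMMAND *cmd)
    {
        CS_RETCODE ret;
        CS_INT     result_type;

        /* Both statements travel in a single command buffer. */
        ret = ct_command(cmd, CS_LANG_CMD,
                         "update stock set qty = qty - 1 where id = 42 "
                         "insert audit (id, action) values (42, 'dec')",
                         CS_NULLTERM, CS_UNUSED);
        if (ret != CS_SUCCEED)
            return ret;

        if (ct_send(cmd) != CS_SUCCEED)
            return CS_FAIL;

        /* Drain every result the batch produces; if a statement
         * returned rows, you would fetch (or cancel) them here. */
        while ((ret = ct_results(cmd, &result_type)) == CS_SUCCEED)
            ;

        return (ret == CS_END_RESULTS) ? CS_SUCCEED : CS_FAIL;
    }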
     
     

    Of course, the place where you can gain the most performance improvement is in your query design, transaction design, and database design. If you have the flexibility to modify the design of your database, pay particular attention to chapter 2 of the P&T Guide and the discussions of collapsing and splitting tables.
     

Tuning for Performance

    Step 1 - Knowing Your Application

There are two parts to knowing your application. The first is understanding how the application is to be used: which tables are hit most frequently, what the mix of transactions is, whether there are certain hours when the database is in an insert/update-intensive mode, other periods when reporting is the major activity, and times that are relatively quiet. Knowing how the data is to be used allows you to decide whether some tables should be denormalized (adding redundant columns to eliminate joins, or adding derived columns so that aggregate values are readily available), and whether some tables should be duplicated (replicated) or split.
     
     

The other part is the empirical knowledge you gain after collecting and examining statistics, either from a running system or, better yet, from a controlled, repeatable simulation environment. Here data from the profiled application is captured through a variety of tools: operating system monitoring tools, Sybase SQL monitoring tools (i.e. SQL Monitor and "dbcc monitor"), third-party monitoring tools, and traces and statistics collected through the application itself. Because you, the developer of a database-independent API, do not always have control over, or even knowledge of, what the application is sending, building tools into the application to capture important information (such as the flow and frequency of statements and how long they take to process) will put you way ahead of the game when it comes to tuning your application.

Step 2 - Knowing What's Tunable

     - packet size, frame size and other networking parameters

- server configuration parameters, e.g. cschedspins, memory, cache

    - application configurations

- number of cursor rows fetched at a time (a packet-size sketch follows this list)
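As an example of one such knob, the sketch below requests a larger network packet size for a Client-Library connection. This is a minimal sketch, assuming an allocated CS_CONNECTION and a server whose "max network packet size" configuration permits the requested size; otherwise the request is negotiated down.

    #include <ctpublic.h>

    /* Request an 8K network packet size for this connection.  Must
     * be called before ct_connect(), and the server's "max network
     * packet size" configuration must allow it.  'conn' is assumed
     * to be an allocated CS_CONNECTION. */
    CS_RETCODE set_packet_size(CS_CONNECTION *conn)
    {
        CS_INT packetsize = 8192;

        return ct_con_props(conn, CS_SET, CS_PACKETSIZE,
                            &packetsize, CS_UNUSED, NULL);
    }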

Step 3 - Having the Right Testing Environment

This includes being able to re-run the benchmarks at will (after first establishing a baseline). It means having a benchmark that approximates reality: a transaction mix, multiple users, and appropriate timing that simulate what the real world will offer. And it means having test results that are easily understood and compared; you can't stand there with a stopwatch - you must build the timings into the application.
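One way to build the timings in is to wrap each unit of work with a wall-clock timer and log the elapsed time. This is a minimal sketch using gettimeofday(); where you log and at what granularity are up to you.

    #include <stdio.h>
    #include <sys/time.h>

    /* Return elapsed wall-clock seconds between two timestamps. */
    static double elapsed_secs(struct timeval *t0, struct timeval *t1)
    {
        return (t1->tv_sec - t0->tv_sec) +
               (t1->tv_usec - t0->tv_usec) / 1e6;
    }

    /* Usage around any unit of work you want to time:
     *
     *     struct timeval t0, t1;
     *     gettimeofday(&t0, NULL);
     *     ... send query, fetch results ...
     *     gettimeofday(&t1, NULL);
     *     fprintf(stderr, "query X: %.3f s\n", elapsed_secs(&t0, &t1));
     */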
     
     

There are testing tools available that can help you build a multi-user testing environment. The following is by no means a conclusive list (nor an endorsement), but a starting place of vendors who have offerings: Performix, Mercury, and Neal Nelson.

Step 4 - What to Monitor

     At the OS level: (use tools such as vmstat, netstat, iostat)

    memory usage

    CPU usage (especially important when client and server, or multiple clients, are on the same machine)

    swapping activity

    paging activity

    network activity

    I/O activity per controller
     
     

       
    At the Adaptive Server Enterprise level: (use tools such as SQL Monitor, sp_lock, sp_monitor, dbcc)

    number of packets sent/received (use dbcc monitor)

    CPU usage

    cache hit rates

    I/O activity

if you see deadlocks, determine whether they are on the data pages or on the index pages of a non-clustered index

    average packet size

    look at the TDS packets

    number of queries/traffic to network
     

     And, of course, from the application:

how long did the job run, or what was the average response time?

gprof output can also be very useful
     

Step 5 - What Can Be Done

- if the application allows it, try parallelizing batch operations (beware of the increased possibility of deadlocks)

- use named caches to prevent data that is going to be re-used from being flushed

- look at hardware alternatives (if allowing write caching shows improvement in your development environment, then perhaps a solid state device might help at the production site)

- check showplan output; verify that the optimizer is doing for you what you'd expect

- look at the MRU/LRU strategy being used, and override it if necessary

- check whether there are table scans where an index should be used

- set TCP_NODELAY on both the client and the server

- test with various packet sizes to optimize network performance

- if dynamic SQL is being used, make sure the statements are being re-used

- do the work close to the data; use stored procedures

- use temp tables:
    - for non-shared data
    - for intermediate join results
    - to establish result sets without holding locks
    - they are cleaned up automatically

- use optimistic locking via a timestamp or version number (see the sketch after this list)

- use the minimal level of locking for each command

- use isolation level 0 (dirty reads) wherever you can
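To make the optimistic-locking point concrete, here is a minimal sketch. It assumes a hypothetical table accounts with a version column that every update increments; the affected-row count tells you whether someone else changed the row since you read it. The read step and error handling are omitted.

    #include <stdio.h>
    #include <ctpublic.h>

    /* Optimistic update: succeeds only if the row still carries the
     * version we read earlier.  Table 'accounts' and its 'version'
     * column are hypothetical.  Returns CS_SUCCEED if our update won,
     * CS_FAIL if the row changed underneath us (re-read and retry). */
    CS_RETCODE optimistic_update(CS_COMMAND *cmd, int id,
                                 int old_version, double new_balance)
    {
        char       sql[256];
        CS_RETCODE ret;
        CS_INT     result_type;
        CS_INT     rows = 0;

        sprintf(sql,
                "update accounts set balance = %f, version = version + 1 "
                "where id = %d and version = %d",
                new_balance, id, old_version);

        if (ct_command(cmd, CS_LANG_CMD, sql, CS_NULLTERM, CS_UNUSED)
                != CS_SUCCEED || ct_send(cmd) != CS_SUCCEED)
            return CS_FAIL;

        while ((ret = ct_results(cmd, &result_type)) == CS_SUCCEED) {
            if (result_type == CS_CMD_DONE)
                ct_res_info(cmd, CS_ROW_COUNT, &rows, CS_UNUSED, NULL);
        }

        /* 0 rows affected means another user updated the row first. */
        return (rows == 1) ? CS_SUCCEED : CS_FAIL;
    }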
     
     
     
     

The query analysis tools that you will use most often are:

Tool                     Use
set showplan on          Shows the query plan for a query, including the indexes selected, join order, and worktables.
set statistics io on     Shows how many logical and physical reads and writes are performed to process the query.
set statistics time on   Shows how long it takes to execute the query.
set noexec on            Usually used with set showplan on; suppresses execution of the query. You see the plan the optimizer would choose, but the query is not executed. Very useful when the query would return very long results or could cause performance problems on a production system. Note that output from statistics io is not shown when noexec is in effect (since the query performs no I/O).

     
     

    3. Server

      3.1 Deadlocking Relief
A deadlock occurs when two users each hold a lock on a separate object, and each wants to acquire an additional lock on the object the other user holds. When this happens, the first user is waiting for the second to let go of the lock, but the second user will not let it go until the lock on the first user's object is released.

It is possible to encounter deadlocks when many long-running transactions are executed at the same time in the same database. Deadlocks become more common as lock contention between those transactions increases (decreasing concurrency).

Well-designed applications can avoid deadlocks (though they may not eliminate them entirely) by always acquiring locks in the same order. Updates to multiple tables should always be performed in the same order, with no table updated twice.
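For instance, the sketch below pins one canonical statement order for every transaction that touches both tables (accounts and audit are hypothetical names). If some other code path updated audit first, two concurrent transactions could each end up holding the lock the other needs.

    /* One canonical statement order, used by every transaction that
     * touches both tables: 'accounts' is always locked before
     * 'audit'.  Transactions may queue behind one another, but they
     * cannot hold locks the other needs. */
    static const char *TRANSFER_BATCH =
        "begin transaction "
        "update accounts set balance = balance - 10 where id = 1 "
        "update accounts set balance = balance + 10 where id = 2 "
        "insert audit (id, action) values (1, 'xfer') "
        "commit transaction";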

You must code to avoid deadlocks, but they will happen anyhow; you can only influence the frequency. When a deadlock occurs, Adaptive Server Enterprise terminates one of the transactions so that the other(s) can complete. Whenever this occurs, the terminated transaction must be started again - automatically, if it occurs in an (unattended) batch process. A retry sketch follows.
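As an illustration of the retry-on-deadlock pattern, here is a minimal sketch. When ASE chooses a transaction as the deadlock victim, the client receives server message 1205 ("...encountered a deadlock situation. Please re-run your command."). The run_transaction() helper is hypothetical: it is assumed to execute the entire transaction and return the server message number captured by the application's server-message callback, or 0 on success. Since ASE rolls the victim back, the whole transaction must be re-run, not just the failing statement.

    #include <stdio.h>

    #define ASE_DEADLOCK_MSG 1205   /* deadlock-victim server message */
    #define MAX_RETRIES      3

    /* Hypothetical helper: runs the whole transaction (begin..commit)
     * and returns the server message number reported through the
     * application's server-message callback, or 0 on success. */
    extern int run_transaction(void);

    int run_with_deadlock_retry(void)
    {
        int attempt, msg;

        for (attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            msg = run_transaction();
            if (msg != ASE_DEADLOCK_MSG)
                return msg;      /* success (0) or a non-deadlock error */
            fprintf(stderr, "deadlock victim, retrying (%d/%d)\n",
                    attempt, MAX_RETRIES);
        }
        return ASE_DEADLOCK_MSG; /* gave up after repeated deadlocks */
    }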

       code to avoid 'true' deadlocks
       

