Best practices for creating an optimized database design

When creating SQL Server columns, improve database performance by following these best practices for choosing data types, keys, and NULL configurations.

Descriptive table names carry no performance benefit, but they make a database self-documenting and easier to code against. Table names should reflect their business meaning.

Create a clustered index on every table.
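As a minimal sketch (the table and column names here are hypothetical), a clustered index is typically created implicitly through a clustered primary key, or explicitly on an existing heap:

```sql
-- Clustered index created implicitly by the primary key
CREATE TABLE dbo.customer (
    customer_id   INT NOT NULL,
    customer_name VARCHAR(100) NOT NULL,
    CONSTRAINT pk_customer PRIMARY KEY CLUSTERED (customer_id)
);

-- Alternatively, create one explicitly on a table that is still a heap:
-- CREATE CLUSTERED INDEX ix_customer_id ON dbo.customer (customer_id);
```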

If you have numerous "lookup" tables with only a few rows each, consider combining them into one "master lookup" table. For example, you might have numerous "type_lookup" and "category_lookup" tables, each with a dozen rows. Instead of maintaining 30 different lookup tables, you can combine them into a single table with row_identifier, row_type, and row_value columns.
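A sketch of such a consolidated table, using the three columns named above (the row_type values shown are hypothetical examples):

```sql
-- One master lookup table replacing many small lookup tables
CREATE TABLE dbo.master_lookup (
    row_identifier INT NOT NULL,
    row_type       VARCHAR(50) NOT NULL,   -- e.g. 'order_status', 'shipping_method'
    row_value      VARCHAR(100) NOT NULL,
    CONSTRAINT pk_master_lookup
        PRIMARY KEY CLUSTERED (row_type, row_identifier)
);

-- Retrieving what used to be one standalone lookup table
SELECT row_identifier, row_value
FROM dbo.master_lookup
WHERE row_type = 'order_status';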

If a table contains millions of rows and you have multiple disks (or disk arrays) at your disposal, take advantage of table and index partitioning. Partitioning can provide considerable query performance improvements, and it can make loading and purging large data sets from a table nearly instantaneous, for example through partition switching.
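A minimal partitioning sketch, assuming a hypothetical sales table partitioned by year; `ALL TO ([PRIMARY])` keeps the example self-contained, but in practice you would map partitions to separate filegroups on different disks:

```sql
-- Partition function: yearly boundaries
CREATE PARTITION FUNCTION pf_sale_year (DATE)
    AS RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01', '2025-01-01');

-- Partition scheme: maps partitions to filegroups
CREATE PARTITION SCHEME ps_sale_year
    AS PARTITION pf_sale_year ALL TO ([PRIMARY]);

-- Table stored on the partition scheme, split on sale_date
CREATE TABLE dbo.sale (
    sale_id   BIGINT NOT NULL,
    sale_date DATE NOT NULL,
    amount    DECIMAL(10, 2) NOT NULL
) ON ps_sale_year (sale_date);
```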

Use the smallest data type necessary to implement the required functionality.
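For instance (a hypothetical order-line table), choose the narrowest type that covers each column's domain:

```sql
CREATE TABLE dbo.order_line (
    order_id    INT NOT NULL,           -- supports ~2.1 billion orders
    line_number TINYINT NOT NULL,       -- at most 255 lines per order: 1 byte, not 4
    quantity    SMALLINT NOT NULL,      -- fits in 2 bytes instead of 4
    unit_price  DECIMAL(9, 2) NOT NULL  -- 5 bytes, enough precision for prices
);
```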

Ensure that each column has a descriptive name. Do not add a column called "ID" to every table; instead use "customer_id", "supplier_id", "store_id", and so forth.

Each table allows up to 1,024 columns, but you rarely need anywhere near that many. For transactional systems, ensure the data model is highly normalized; the same data element (customer address, customer phone number, product description, etc.) should not be repeated in multiple tables. For reporting systems you can allow some redundancy, but only if thorough testing confirms that the redundant columns improve query performance.

Where possible and appropriate, use fixed-length rather than variable-length data types. For example, if you know a product code will always be 50 characters, use CHAR(50) rather than VARCHAR(50). Variable-length columns impose an overhead that isn't always necessary.

Use Unicode data types (NCHAR, NVARCHAR, NTEXT) only when necessary. If your database will only contain European characters, you should not need Unicode data types. Keep in mind that Unicode data types use twice as much storage as their non-Unicode counterparts.
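A sketch contrasting the two (the table and columns are hypothetical): a VARCHAR(100) column stores up to 100 bytes of data, while an NVARCHAR(100) column stores up to 200 bytes, two per character:

```sql
CREATE TABLE dbo.contact (
    contact_id   INT NOT NULL,
    email        VARCHAR(100) NOT NULL,  -- ASCII/Latin content: non-Unicode suffices
    display_name NVARCHAR(100) NOT NULL  -- may hold any script: Unicode required
);
```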

Configure column nullability correctly. If the column must always have a value, configure it as NOT NULL. Using default constraints is more efficient than allowing NULL values. A NULL value isn't equal to anything else - not an empty string, not zero, not even another NULL value. A NULL denotes that the value is unknown. With a character column, you can often use a default value of "unknown" rather than allowing NULLs.
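A sketch of this pattern, using a hypothetical supplier table: the column is NOT NULL, and a default constraint supplies "unknown" when no value is provided.

```sql
CREATE TABLE dbo.supplier (
    supplier_id   INT NOT NULL,
    supplier_name VARCHAR(100) NOT NULL,
    region        VARCHAR(50) NOT NULL
        CONSTRAINT df_supplier_region DEFAULT ('unknown')
);

-- A row inserted without a region gets 'unknown' rather than NULL
INSERT INTO dbo.supplier (supplier_id, supplier_name)
VALUES (1, 'Acme Ltd');
```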

Use large variable-length data types sparingly. Variable-length columns are normally stored on the same data pages as the rest of the record. However, when a record exceeds the 8,060-byte row-size limit, variable-length columns are moved to row-overflow data pages, which imposes additional overhead during data retrieval.

Avoid using TEXT, NTEXT and IMAGE data types for any newly created table columns. These data types are deprecated and might not be available in future versions of SQL Server. Use VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX) data types instead.
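For an existing column, the migration can be a single ALTER (the table and column names here are hypothetical):

```sql
-- Replace a deprecated TEXT column with its VARCHAR(MAX) successor
ALTER TABLE dbo.document
    ALTER COLUMN body VARCHAR(MAX) NOT NULL;
```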
