Improving your SQL Server’s performance – Part 3

In Part 1 of this blog series, we outlined the importance of creating a good starting point for performance tuning by establishing a baseline of your SQL Server's current performance.

In Part 2, we covered how to enable Instant File Initialization and how to add additional data files to your TempDB database.

As a conclusion to those articles, we will look at the effect of those changes by running a simple GP2010 Sales Order Processing batch posting. To run the test, I created a virtual server with SQL Server 2012 using the default settings and GP2010 SP2. I made one change to the defaults, limiting SQL Server to a maximum of 12 GB of RAM, since I was running GP2010 and SQL Server on the same image.
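If you need to apply the same memory cap, the snippet below is a minimal sketch using sp_configure; the 12288 MB value simply mirrors the 12 GB limit used in this test and should be adjusted for your own server.

  -- Cap SQL Server memory at 12 GB (value is in MB); adjust for your environment
  EXEC sp_configure 'show advanced options', 1;
  RECONFIGURE;
  EXEC sp_configure 'max server memory (MB)', 12288;
  RECONFIGURE;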

For testing, I created one Sales Order Processing batch with 10,524 invoices containing 31,323 transaction lines.

In test 1, no optional SQL settings were changed:

  • Instant File Initialization is disabled
  • One TempDB data file (the default; the query after this list confirms the file count)
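If you want to confirm how many TempDB files a server currently has, a simple query against sys.master_files will list them; on this default configuration it returns one data (ROWS) file and one log file.

  -- List TempDB files and their sizes (size is stored in 8 KB pages)
  SELECT name, physical_name, type_desc, size * 8 / 1024 AS size_mb
  FROM sys.master_files
  WHERE database_id = DB_ID('tempdb');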

The posting of the batch to the General Ledger took 7 hours to complete.

For test 2, I restored the company's database to its state from before test 1 and then made the following optional SQL settings changes:

  • Instant File Initialization is enabled
  • TempDB changed to have one data file per CPU core (four files); a sketch of this change follows the list
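The snippet below is a rough sketch of adding the three extra TempDB data files; the file names, path, and sizes are examples only and should be changed to match your own drive layout and sizing standards.

  -- Add three more TempDB data files so there is one per CPU core (four total)
  -- File names, path, and sizes are illustrative examples only
  ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
  ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
  ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);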

The posting of the batch to the General Ledger took 6 hours to complete.

This was a very simple test on a virtual server with only one hard drive, and these two changes alone cut the posting time from 7 hours to 6 (roughly a 14% improvement). Imagine the system performance gains when you apply all of the ideal SQL performance tuning recommendations!
