
Good Compression and Bad Partitioning

Storytime!

Over the last week I’ve been working on compressing a table in an ODS.  I initially set up compression on a bunch of tables in this database several months ago, but simply did not have the space to rebuild this one table.  By itself, it’s over a terabyte.

Now that I have the room, I was asked to revisit compressing this table.  In my analysis, I determined that writes to this table were low enough that I could use page compression and get some very nice space gains.  So I attempted a full table rebuild during the day (which I was allowed to do, for reasons I won’t dive into here), but found that the table would not finish rebuilding before the nightly load kicked off.  We’re talking about 12 hours here.  After a couple of attempts at this, I looked at the table and remembered it was partitioned, so I decided to compress each partition one at a time; there were 75 partitions.  Ok, I really did more like 10 at a time, but you get the idea.  Most of the partitions compressed pretty quickly, until I got to one particular set of partitions.  That batch took FOREVER.  HOURS by itself.  When it finally completed, I got about 600 GB back.  I decided to call it a day, as it was close to the nightly load’s kickoff time.
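(If you haven’t done per-partition compression before, the rebuild looks something like the following.  The table name here is made up, and you can hit either a single partition or a range of them in one statement.)

-- One partition at a time (hypothetical table name):
ALTER TABLE dbo.BigOdsTable
REBUILD PARTITION = 9
WITH (DATA_COMPRESSION = PAGE);

-- Or a batch of partitions in one shot:
ALTER TABLE dbo.BigOdsTable
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = PAGE ON PARTITIONS (1 TO 10));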

The next morning I logged in to find an ecstatic development team.  Their nightly load had finished in record time, and it has continued to finish quickly since then.  This is a great example of compression giving a huge performance benefit.  There had to be more to it, though, since I had only compressed a handful of partitions, so I went back to my analysis and looked at the compression estimates I had generated several months ago.

Well, here’s my 600 GB:

[Image: compression_estimates]
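(If you’ve never generated estimates like these, sp_estimate_data_compression_savings is the usual tool for it; here’s a quick sketch with made-up object names.)

EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'BigOdsTable',   -- hypothetical table name
     @index_id         = NULL,            -- all indexes
     @partition_number = NULL,            -- all partitions
     @data_compression = 'PAGE';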

I drank my little bottle of… Love Partition Number 9.  Sandra Bullock strikes again!  You and your Ole Miss ex-sorority girl roles.  I’ll always hold Speed 2 over your head.

This partitioning is like if you and a friend decided to split a Snickers, and he shaved a small piece of chocolate off the end, handed it to you, and said, “Ok, I think that’s even.”

The buffer cache on this instance is usually anywhere from 720-750 GB depending on the time of day.  You figure this one partition, when queried, was taking up nearly the entire buffer cache once it was pulled into memory, and everything else had to page out.  I know, I know, WHERE clauses wouldn’t actually bring the whole thing into memory, but you get my drift if someone came in and selected the whole thing.  Now when it’s queried, only about a quarter of that has to go into memory.  My CPU usage is up, but on this box I actually have CPU for days.  It’s not under load at all, so I feel good about moving the stress from memory to CPU.
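(If you want to see this for yourself, something along the lines of the query below, with made-up names, shows roughly how much of the buffer pool each partition of a table is currently occupying.)

SELECT p.partition_number,
       COUNT(*) * 8 / 1024 AS cached_mb          -- 8 KB pages converted to MB
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON au.allocation_unit_id = bd.allocation_unit_id
 AND au.type IN (1, 3)                           -- in-row and row-overflow data
JOIN sys.partitions AS p
  ON p.hobt_id = au.container_id
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID('dbo.BigOdsTable') -- hypothetical table name
GROUP BY p.partition_number
ORDER BY cached_mb DESC;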

Not Done Yet.

I decided to investigate further.  Why is this one partition so huge compared to the others?  I took a look at the partition scheme.  When this database was initially designed, it had one filegroup; it looks like the scheme hasn’t changed even though filegroups have been added since.  It’s all going to PRIMARY.  I’m not going to rant about that, but it’d be best if the partitions were split out across multiple filegroups on multiple drives or mount points in order to get better IOPS.
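(A quick way to see the partition-to-filegroup mapping, with a made-up scheme name:)

SELECT dds.destination_id AS partition_number,
       fg.name            AS filegroup_name
FROM sys.partition_schemes AS ps
JOIN sys.destination_data_spaces AS dds
  ON dds.partition_scheme_id = ps.data_space_id
JOIN sys.filegroups AS fg
  ON fg.data_space_id = dds.data_space_id
WHERE ps.name = 'ps_BigOdsTable'                 -- hypothetical scheme name
ORDER BY dds.destination_id;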

Next I checked out the partition function.  It splits the table by one of the primary key columns, which also happens to be a static ID.

It’s partitioned by a static ID.

A STATIC ID.

No wonder this one partition was so huge.  Most of the input into this table was using one particular key, which is what made it so lopsided.  Now, this would actually work well if the large partition were some sort of archived data that was rarely used and lookups to this table were always done by this key.  Maybe that was the original intention, but to me, if you’re going to split a table into sub-tables, it doesn’t make sense to have one gigantic sub-table.  You could accomplish the same thing with filtered indexes and not run into half of these issues.  Of course, filtered indexes weren’t available in 2005, and that’s the version this database was designed on.
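(A query along these lines, again with made-up names, is how you can spot the skew: row counts per partition sitting next to the boundary values.)

SELECT p.partition_number,
       prv.value AS upper_boundary,              -- NULL for the last partition;
                                                 -- inclusive or exclusive depends on RANGE LEFT/RIGHT
       p.rows    AS row_count
FROM sys.partitions AS p
JOIN sys.indexes AS i
  ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.partition_schemes AS ps
  ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions AS pf
  ON pf.function_id = ps.function_id
LEFT JOIN sys.partition_range_values AS prv
  ON prv.function_id = pf.function_id
 AND prv.boundary_id = p.partition_number
WHERE p.object_id = OBJECT_ID('dbo.BigOdsTable') -- hypothetical table name
  AND i.index_id IN (0, 1)                       -- heap or clustered index only
ORDER BY p.partition_number;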

I passed all of my findings on to the development team, and since I enjoy working with them and vice versa, we’ll continue to collaborate and make things even better.  We are already planning on repartitioning the table and adding filtered indexes.
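(For what it’s worth, the filtered-index version of “carve out the hot key” might look something like this; the table, column, and index names are all made up.)

CREATE NONCLUSTERED INDEX IX_BigOdsTable_HotKey
ON dbo.BigOdsTable (SomeDateColumn, SomeOtherColumn)  -- hypothetical key/include columns
WHERE StaticID = 9                                    -- the heavily used key
WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);          -- ONLINE requires Enterprise Edition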

Here are some links I used for my analysis.  Check out Jason’s SQLSaturday session on compression; it’s where I learned everything I know about compression!  He’ll be presenting this session in Atlanta on May 16th.

Compression:

Jason Brimhall

http://jasonbrimhall.info/2010/06/08/data-compression/

http://www.sqlsaturday.com/392/Sessions/Details.aspx?sid=10860

Partitioning:

Michelle Ufford

http://sqlfool.com/2008/11/partitioning-101/

Gail Shaw

https://www.simple-talk.com/sql/database-administration/gail-shaws-sql-server-howlers/

Brent Ozar’s resource link

http://www.brentozar.com/sql/table-partitioning-resources/