Table variables get a lot of bad press, and they deserve it. They are to query performance what the TSA is to air travel. No one’s sure what they’re doing, but they’ve been taking forever to do it.
One particular thing that irks me about them (table variables, now) is that they’ll spill their spaghetti all over your disks, and not warn you. Now, this gripe isn’t misplaced. SQL Server will warn you when Sort and Hash operations spill to disk. And they should! Because spilling to disk usually means you had to slow down to do it. Disks are slow. Memory is fast. Squatting in the Smith Machine is cheating.
Wouldn’t it be nice?

Why am I picking on table variables? Because people so frequently use them for the wrong reasons. They’re in memory! They made my query faster! No one harbors delusions about temp tables, except that one guy who told me they’re a security risk. Sure, we should get a warning if temp tables spill to disk, too. But I don’t think that would surprise most people as much.
So let’s see what hits the sneeze guard!
You’ve been here before

You know I’m going to use Stack Overflow. Here’s the gist: I’m going to set max memory to 1 GB, and stick the Votes table, which is about 2.4 GB, into a table variable. While that goes on, I’m going to run sp_BlitzFirst for 60 seconds in Expert Mode to see which files get read from and written to. I’m also going to get STATISTICS IO information, and the query plan.
SET STATISTICS IO ON;

DECLARE @Votes TABLE
(
    Id INT NOT NULL,
    PostId INT NOT NULL,
    UserId INT NULL,
    BountyAmount INT NULL,
    VoteTypeId INT NOT NULL,
    CreationDate DATETIME NOT NULL
);
INSERT @Votes
( Id, PostId, UserId, BountyAmount, VoteTypeId, CreationDate )
SELECT v.Id, v.PostId, v.UserId, v.BountyAmount, v.VoteTypeId, v.CreationDate
FROM dbo.Votes AS v
SELECT COUNT(*) FROM @Votes AS v
First, let’s look at stats IO output. The first section shows us hitting the Votes table to insert data. The second section shows us getting the COUNT from our table variable. Wouldn’t you know, we hit a temp object! Isn’t that funny? I’m laughing.
Table 'Votes'. Scan count 1, logical reads 309027, physical reads 0, read-ahead reads 308527, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

(67258370 row(s) affected)
Table '#ACD2BA14'. Scan count 1, logical reads 308525, physical reads 36, read-ahead reads 308077, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
The query plan doesn’t give us any warnings. No little yellow exclamation points. No red X. It’s all just kind of bland. Even Paste The Plan doesn’t make this any prettier.
[Image: Oatmeal]
Well, unless you really go looking at the plan…
[Image: Oh, that terrible estimate]
Okay, that sucks

Let’s look at sp_BlitzFirst. Only Young And Good Looking people contribute code to it, so it must be awesome.
[Image: Just too physical]
Boy oh boy. Boy howdy. Look at all those physical writes. We spilled everything to disk. That’s right at the 2.4 GB mark, which is the same size as the Votes table. We should probably know about that, right?
Is a temp table any better?

In short: kinda. There are some reasons! None of them are in stats IO. They’re nearly identical.
For the insert, the scan could go parallel, but doesn’t. Remember that modifying table variables forces query serialization, so that’s never an option for us.
In SQL Server 2016, some INSERT operations can be fully parallelized, which is really cool. If it works. Much like minimal logging, it’s a bit of a crapshoot.
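If you want to run the comparison yourself, the temp table version of the test looks roughly like this sketch (same columns as the table variable; only the object type changes):

```sql
-- Rough temp table equivalent of the table variable demo, for comparison.
CREATE TABLE #Votes
(
    Id INT NOT NULL,
    PostId INT NOT NULL,
    UserId INT NULL,
    BountyAmount INT NULL,
    VoteTypeId INT NOT NULL,
    CreationDate DATETIME NOT NULL
);

INSERT #Votes
    ( Id, PostId, UserId, BountyAmount, VoteTypeId, CreationDate )
SELECT v.Id, v.PostId, v.UserId, v.BountyAmount, v.VoteTypeId, v.CreationDate
FROM dbo.Votes AS v;

SELECT COUNT(*) FROM #Votes AS v;
```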
The COUNT(*) query gets an accurate estimate and does go parallel. Hooray. Put those pricey cores to use. Unless you recompile the query, you’re not going to get an accurate estimate out of your table variable. They’re just built that way. It doesn’t even matter if you put an index on them.
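For the curious, that recompile looks something like this: adding OPTION (RECOMPILE) lets the optimizer see the table variable’s actual row count at compile time, instead of guessing one row.

```sql
-- With a statement-level recompile, the optimizer can see how many rows
-- are actually in @Votes, so the estimate stops being a fixed guess.
SELECT COUNT(*)
FROM @Votes AS v
OPTION ( RECOMPILE );
```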
[Image: Promised Land]
Does sp_BlitzFirst tell us anything different?

[Image: SCANDAL]
Yeah! For some mystical magical reason, we only spilled out 1.8 GB, rather than the full 2.4 GB.
Party on, Us.
I still hate table variables

I mean, I guess they’re okay if the # key on your keyboard is broken? But you should probably just buy a new keyboard and stop using Twitter.
Anyway, I think the point was that we should have some information at the plan level about spills to disk for table variables and (possibly) temp tables. It wouldn’t help tools like sp_BlitzCache, because spill information isn’t in cached plans, but it might help if you’re doing live query tuning.
Thanks for reading!
Brent says OMG THESE ARE SPILLS TOO. I never thought of it that way.