I’ve talked a bit about my annoyance with hardware limits for Standard Edition here and here.
If you’re a bunch of SQL-savvy developers who understand the Enterprise Edition features that are now available in 2016 SP1, great. You’re a very small minority of shops. I’m talking about places like StackOverflow and kCura.
If you’re not, prepare to join the breadline of dba.se users asking why Partitioning didn’t speed up their query. Most shops are in that second camp: they’re just happy to get the right results back, and they mostly interface with SQL via EF or another ORM. Not your target audience for Change Data Capture and Hekaton.
Why these features?

It kind of feels like you’re getting leftovers. There’s a meatloaf edge, congealed creamed spinach, mashed potatoes that have become one with the Styrofoam, and fossilized green beans. You don’t get the full Monty: AGs, TDE, online index operations, etc.
Don’t get me wrong, ColumnStore and Hekaton are neat, but you’re capped at 32GB of RAM for each of them. If you’re at the point with your data warehouse or OLTP app where, respectively, those technologies would make sense, you’re not dealing with 32GB of problem data. You’re dealing with hundreds of gigs or more of it.
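If you’re wondering whether that Hekaton cap would even pinch you, here’s a rough sketch of the kind of query I’d run (my own, not anything Microsoft ships as an official check) to total up memory-optimized table and index memory in the current database and eyeball it against the 32GB-per-database limit Standard Edition gives you:

-- Rough sketch: how much memory are memory-optimized tables eating
-- in this database, versus Standard Edition's 32GB-per-database cap?
SELECT
    SUM(memory_allocated_for_table_kb + memory_allocated_for_indexes_kb) / 1024. / 1024. AS allocated_gb,
    SUM(memory_used_by_table_kb + memory_used_by_indexes_kb) / 1024. / 1024. AS used_gb,
    32 AS standard_edition_cap_gb
FROM sys.dm_db_xtp_table_memory_stats;

If used_gb is already creeping toward double digits, the cap stops being theoretical pretty quickly.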
What still doesn’t quite make sense to me is why Microsoft would open up all these edge case tools that they spent a lot of time and money developing and implementing, rather than saying “here’s all the hardware you can throw at a problem, but if you want the Enterprise features, you gotta pay”.
Enterprise to me means HA/DR (AGs), it means data management tools (Partitioning), it means crazy fast data ingestion (Hekaton), it means security and auditing (all that other stuff).
Facing some tough facts

The number of people who need all the functionality Microsoft opened up (Multiple Filestream Containers?!) pales in comparison to the number of people on Standard Edition who have more than 64/128GB of data. And I know something about pale. I used to date a lot of girls who had Bauhaus posters in their bedrooms. It’s just not a very compelling argument to me. People need more memory, and hardware gets cheaper and more powerful every year. They don’t need more features to shoot themselves in the foot with.
If they’re not stepping up to pay $5k more per core to go from Standard to Enterprise now, this stuff isn’t going to open any wallets. The amount of licensing money it takes to be allowed to use an extra thousand dollars of RAM in your server is absurd.
My proposal would be to raise the base RAM cap on Standard Edition for 2016 to 256GB, and either remove the cap entirely or double it to 512GB for shops that have Software Assurance.
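And if you’re curious how close your own Standard Edition box already sits to the current buffer pool cap, here’s another quick sketch (again, just my query, leaning on the memory manager counters) comparing target and total server memory:

-- Sketch: how much memory is SQL Server aiming for vs. actually using?
-- On 2016 Standard, target tops out around the 128GB buffer pool limit.
SELECT
    counter_name,
    cntr_value / 1024. / 1024. AS memory_gb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');

If total has been pinned at target for months, more features aren’t what that server is starving for.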
Thanks for reading!
Brent says: Microsoft says they’re trying to make life easier for ISVs to have a single code base and use the same programmability features across the board. If that were true, they’d also deploy these same capabilities across the other still-supported versions of SQL Server (2014, 2012, etc.), because after all, ISVs have to deal with clients on those older versions too. But they didn’t, and that’s your first clue that this isn’t really about making life easier for ISVs. It’s about increasing the adoption rate of SQL Server 2016. (Which is a perfectly cool goal, and I salute that, but it’s not the story you’re being told.)