Developing Matt

My Technical Journal

What’s new in T-SQL programmability in SQL Server 2008



I attended a class titled ‘What’s New in T-SQL Programmability in SQL Server 2008’ by Umachandar Jayachandran.  A lot of this is material we’ve read before, but it serves as a great reminder as we move towards SQL Server 2008.  Below are the notes.

Date Time

Why did they implement new datatypes?

  • To support historical data before 1753
  • Time zone offset awareness
  • Date/time values with larger fractional-second precision

The datatypes

  • Date: uses 3 bytes; large year range (0001-9999)
  • Time: has fractional seconds (up to seven digits of precision)
  • DateTimeOffset: lets you store time zone aware values
  • Datetime2: large year range and large optional precision (example: 0001/01/01 12:30:29.1117235)

(If you were there, and/or if I can find them, see the slide on how to use the different datatypes with different types of connections (ODBC, ADODB, etc.).)

New functions are also available, including switchoffset for the datetimeoffset type (select switchoffset(utcdatetime, ’-04:00’) from table).
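A quick sketch of switchoffset alongside one of the other new functions; the variable and values here are illustrative, not from the class:

```sql
-- Illustrative only: the value and offsets are made up.
declare @utc datetimeoffset(7) = '2009-03-23 18:58:00.1234567 +00:00';

-- switchoffset preserves the UTC instant but changes the stored offset
select switchoffset(@utc, '-04:00') as easterntime;  -- 2009-03-23 14:58:00.1234567 -04:00

-- sysdatetimeoffset() returns the current time including the server's offset
select sysdatetimeoffset() as currentwithoffset;
```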


Merge statement

  • Enables you to use one DML statement to insert, update, and delete
  • Much more readable
  • Syntax:

Merge into targettbl as tgt

Using sourcetable as src

On tgt.key = src.key

When matched and (tgt.qty + src.qty = 0) then delete

When matched then update set tgt.lasttradedate = src.tradedate, tgt.qty += src.qty

When not matched then insert values (src.stock, src.tradedate, src.qty)

  • For performance, make sure the join condition is in the on clause
  • Existing triggers/constraints do work
  • The target table cannot be remote
  • Table hints work
  • Query execution shows that it is a single operation as well.  His example was on a clustered index, and the execution plan showed a clustered index merge.
  • He also added an output clause, output $action, inserted.stock, which returned which row did what (insert, update, delete) via the $action column.
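Putting the bullets together, a hedged end-to-end version of the merge example might look like this (the key column and the use of stock/tradedate/qty beyond the syntax above are assumptions):

```sql
-- Sketch based on the syntax above; table and column names are assumed.
merge into targettbl as tgt
using sourcetable as src
    on tgt.stock = src.stock           -- assumed join key
when matched and (tgt.qty + src.qty = 0) then
    delete
when matched then
    update set tgt.lasttradedate = src.tradedate,
               tgt.qty += src.qty
when not matched then
    insert values (src.stock, src.tradedate, src.qty)
output $action, inserted.stock;        -- $action reports INSERT/UPDATE/DELETE per row
```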

New Openrowset(bulk) hints

  • Added hints to let the optimizer treat a bulk rowset like an index
  • The order hint can be given without the unique hint, but not the other way around
  • Syntax: openrowset(bulk ‘filename’, hints)……
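A hedged sketch of the hint syntax; the file paths, format file, and column names are all assumptions:

```sql
-- Sketch only: paths, format file, and columns are made up.
insert into dbo.trades with (tablock)
select t.stock, t.tradedate, t.qty
from openrowset(
        bulk 'c:\data\trades.dat',
        formatfile = 'c:\data\trades.fmt',
        order (stock) unique   -- order may appear alone; unique requires order
     ) as t;
```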

New grouping sets clause

  • Define multiple groupings in the same query
  • Produces a single result set that is equivalent to a union all of differently grouped rows
  • Syntax: select storeid, territoryid, max(modifieddate) from adventureworks.sales.customer group by grouping sets ((storeid), (territoryid))
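To make the “equivalent to a union all” point concrete, the grouping sets query above can be rewritten in the pre-2008 style (AdventureWorks sample database assumed):

```sql
-- grouping sets form
select storeid, territoryid, max(modifieddate) as lastmodified
from adventureworks.sales.customer
group by grouping sets ((storeid), (territoryid));

-- equivalent pre-2008 union all form
select storeid, null as territoryid, max(modifieddate) as lastmodified
from adventureworks.sales.customer
group by storeid
union all
select null, territoryid, max(modifieddate)
from adventureworks.sales.customer
group by territoryid;
```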

Table valued constructors through values clause:

  • You can insert or select multiple rows from a single values list
  • Syntax: select * from (values (1, ’cust 1’, 111, 222), (2, ’cust 2’, 222, 333)) as t (id, name, col1, col2)
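The same values list also works for multi-row inserts; a minimal sketch (the table is hypothetical):

```sql
-- Hypothetical table for illustration.
create table dbo.custs (id int, name varchar(20), col1 int, col2 int);

-- One insert statement, multiple rows (new in 2008)
insert into dbo.custs (id, name, col1, col2)
values (1, 'cust 1', 111, 222),
       (2, 'cust 2', 222, 333);
```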

New table types and TVPs (table-valued parameters)

Pre-2008 you would use temporary tables, which increased disk I/O, were prone to locking, and had to be manually dropped.  You could also use multiple parameters, string or blob data, or XML data.

Table types are optimized to scale and perform better for large data.  I’m still not convinced as to the true value of this.  Currently I implement this technology calling a stored procedure from an SSIS package, and at 24,000 rows it is taking 10 minutes.  The only statement inside the stored procedure is a merge statement (called alone, this merge statement takes a few seconds).
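For reference, a minimal sketch of a table type plus a table-valued parameter feeding a merge, reusing the trade columns from the earlier example (all object names are illustrative):

```sql
-- All object names here are assumptions for illustration.
create type dbo.tradetype as table
(
    stock     varchar(10),
    tradedate date,
    qty       int
);
go

create procedure dbo.mergetrades
    @trades dbo.tradetype readonly   -- tvps must be declared readonly
as
begin
    merge into dbo.targettbl as tgt
    using @trades as src
        on tgt.stock = src.stock
    when matched then
        update set tgt.lasttradedate = src.tradedate,
                   tgt.qty += src.qty
    when not matched then
        insert values (src.stock, src.tradedate, src.qty);
end;
```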

object dependencies

  • sys.sql_expression_dependencies
    • tracks both schema-bound and non-schema-bound dependencies
    • tracks cross-dbs and cross server references (by name)
  • sys.dm_sql_referenced_entities
    • returns a row for each entity referenced by a given entity
  • sys.dm_sql_referencing_entities
    • returns a row for each entity that references a given entity
    • for example, show me objects that depend on table t1
  • Example:

create procedure p1 @a int, @b myudt output

as declare @x int, @y int

select @x = max(b), @y = max(c) from t1 where a = @a

exec p2

set @b = cast(@x + @y as myudt)


select *

from sys.sql_expression_dependencies

where referencing_id = object_id(‘p1’)

Non-relational data

  • hierarchyid
  • Large UDTs (no more 8KB limit on user-defined types)
  • Sparse columns (optimized storage for sparsely populated columns)
  • Side notes from a great class that I attended for the last 20 minutes, taught by Goldie Chaudhuri:

    sparse is a storage attribute on a column
    0 bytes stored for a null value
    performance benefits
    null compression in the tds layer
    no change in query/dml behavior

  • Wide tables (support for hundreds of thousands of sparse columns)
  • Filtered indexes: define indexes over subsets of data in tables (have no idea what this means, but he talked about it)
  • Filtered indexes and stats
    indexing a portion of the data in a table
    can only be used with nonclustered indexes, and the predicates are limited
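A minimal sketch of a filtered index, i.e., a nonclustered index restricted by a where predicate (the table and column are hypothetical):

```sql
-- Hypothetical table/column: index only the rows that have a ship date.
create nonclustered index ix_orders_shipdate
on dbo.orders (shipdate)
where shipdate is not null;   -- the filter predicate limits the index to a subset
```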

Other items

New language collations

Compound assignment operators: +=, -=, etc.

Variable initialization during declaration: declare @v int = 56
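The last two items combine into a tiny example:

```sql
declare @v int = 56;   -- initialization at declaration (new in 2008)
set @v += 10;          -- compound assignment; also -=, *=, /=, %=
select @v as result;   -- returns 66
```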


Written by matt

March 23, 2009 at 2:58 pm

Posted in DevConnect2009, T-Sql

