Hurricane SQL Stuff
15/03/2006
Now, here's a cool new SQL 2005 Profiler event I used for the first time in anger yesterday.

What I love about this event is that it actually spits out significant & useful data from both the blocked process & the blocking process - locked resource, input buffer, time, spid etc - into a single (xml) report.

Very handy for digging into query timeout / blocking issues indeed.
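
For anyone wanting to try this at home - assuming the event in question is the blocked process report, which is what the description suggests - it only fires once a blocked process threshold (in seconds) has been configured, eg:

exec sp_configure 'show advanced options', 1
reconfigure
go
-- fire a report for anything blocked longer than 5 seconds
exec sp_configure 'blocked process threshold', 5
reconfigure
go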


I'm upgrading a script I posted a few weeks back which measures i/o at database file level. Included in the new script are:

(a) Physical O/S filenames
(b) Physical O/S file volumes
(c) A secondary, summary report of i/o, aggregated to volume level.

Cheers,
Greg

/**********
* fn_virtualfilestats() script
* Greg Linwood
* greg@SolidQualityLearning.com
* Mar 15th, 2006
*
* Collects fn_virtualfilestats() for all databases
* on either SQL 2000 or SQL 2005 instances.
**********/


set nocount on
go
if object_id('tempdb..#filestats') is not null
drop table #filestats
go

declare @ver int
set @ver = cast(serverproperty('Productversion') as char(1))

create table #filestats (
DbId int not null,
DBName varchar (255) not null,
fileid int not null,
FileName varchar (255) not null,
SampleTime datetime not null,
TS bigint null,
NumberReads bigint null,
BytesRead bigint null,
IoStallReadMS bigint null,
NumberWrites bigint null,
BytesWritten bigint null,
IOStallWriteMS bigint null,
IOStallMS bigint null,
BytesOnDisk bigint null,
FilePhysicalName varchar (1000) null,
FileVolume varchar (20) null
constraint pk primary key (DbId, fileid, SampleTime))


declare @database_name varchar(255)
declare @database_id int
declare @file_id int
declare @file_name varchar (1000)
declare @sql nvarchar(4000)


declare cr cursor for
select sd.name, sd.dbid, saf.fileid, saf.name
from master..sysaltfiles saf
join master..sysdatabases sd on saf.dbid = sd.dbid
order by sd.name, saf.fileid

open cr

fetch next from cr
into @database_name, @database_id, @file_id, @file_name

while @@fetch_status = 0

begin

if @ver = 8
begin

select @sql = N'
insert into #filestats (DBName, FileName, DbId, FileId, SampleTime,
TS, NumberReads, BytesRead, NumberWrites, BytesWritten, IOStallMS)
select @dn, @fn, DbId, FileId, getdate(), [Timestamp], NumberReads,
BytesRead, NumberWrites, BytesWritten, IOStallMS
from :: fn_virtualfilestats(@database_id, @file_id) '
execute sp_executesql @sql
, N'@dn varchar (255), @fn varchar (255), @database_id int, @file_id int'
, @dn = @database_name
, @fn = @file_name
, @database_id = @database_id
, @file_id = @file_id
end

if @ver = 9
begin
select @sql = N'
insert into #filestats (DBName, FileName, DbId, FileId, SampleTime,
TS, NumberReads, BytesRead, IoStallReadMS, NumberWrites, BytesWritten,
IOStallWriteMS, IOStallMS, BytesOnDisk)
select @dn, @fn, DbId, FileId, getdate(), [Timestamp], NumberReads,
BytesRead, IoStallReadMS, NumberWrites, BytesWritten,
IOStallWriteMS, IOStallMS, BytesOnDisk
from :: fn_virtualfilestats(@database_id, @file_id) '
execute sp_executesql @sql
, N'@dn varchar (255), @fn varchar (255), @database_id int, @file_id int'
, @dn = @database_name
, @fn = @file_name
, @database_id = @database_id
, @file_id = @file_id
end

set @sql = 'update #filestats set FilePhysicalName = convert(varchar(1000), '
set @sql = @sql + 'rtrim(sf.filename)), FileVolume = left(convert(varchar(1000), '
set @sql = @sql + 'rtrim(sf.filename)), 3) from #filestats fs join ' + db_name(@database_id)
set @sql = @sql + '..sysfiles sf on fs.fileid = sf.fileid where fs.DbId = '
set @sql = @sql + convert(varchar, @database_id) + ' and fs.fileid = '
set @sql = @sql + convert(varchar, @file_id)
exec (@sql)

fetch next from cr
into @database_name, @database_id, @file_id, @file_name

end
go
close cr
deallocate cr
go
select * from #filestats
go
select FileVolume
, sum(NumberReads) as NumberReads
, sum(BytesRead) as BytesRead
, sum(NumberWrites) as NumberWrites
, sum(BytesWritten) as BytesWritten
from #filestats
group by FileVolume
order by FileVolume

go
if object_id('tempdb..#filestats') is not null
drop table #filestats
go
7/03/2006
This morning I heard about a SQL Server Engine Team blog, where various members of the SQL product team blog about their specific areas of expertise. Worth a look at least:
http://blogs.msdn.com/sqltips/default.aspx

One of the contributors to this blog is Umachandar Jayachandran, who was an old-hand SQL Server MVP but got recruited by Microsoft a couple of years back to work in the SQL Server team.

Umachandar has now joined the "Microsoft SQL Server Performance Team" and is posting some good performance related content to the blog.

Cheers,
Greg

25/02/2006
I'm posting another admin script today which can be used on either SQL Server 2000 or SQL Server 2005 to measure i/o behaviour against specific database files, using the fn_virtualfilestats() system TSQL function.

This script is useful when you need to identify the source of disk bottlenecks on a database server.

For example, you might have identified that a disk bottleneck exists on a server via a Perfmon counter such as [\\YOURSERVERNAME\LogicalDisk(D:)\Avg. Disk Read Queue Length], but multiple database files exist on the D:\ volume and you're not sure which specific file is causing the i/o activity that is the source of the bottleneck.

The output from this script provides a break-down of specific i/o data on a per-database-file basis, allowing you to see how many bytes have been read from or written to each file (total, since the last restart of SQL Server) and also the total stall time for each file (ms).

More columns were added to the output of fn_virtualfilestats() for SQL Server 2005, providing even further detailed data than from the SQL Server 2000 version of fn_virtualfilestats(). Most significantly, stall time is only provided as a total in the SQL 2000 version but is broken down into read stalls & write stalls in the SQL 2005 version.

When this script runs, it determines which version of SQL Server it is running against & provides the appropriate output for the discovered version. So, when you run this against SQL 2000 you can expect the new SQL 2005 output columns to remain blank.

I've often found fn_virtualfilestats() very useful in demonstrating how much i/o the tempdb can generate compared to user databases. In fact, this is what prompted me to post this script today - I used it yesterday to show a customer that their tempdb was generating 5 x the write i/o activity compared to their main production database. This led to some further useful discussion about file placement.
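
As a rough sketch of the kind of comparison I mean, a query like this against the #filestats table the script below populates (run it before the script's final DROP, or comment that drop out) breaks the write activity down by database:

select DBName
, sum(NumberWrites) as NumberWrites
, sum(BytesWritten) as BytesWritten
from #filestats
group by DBName
order by sum(BytesWritten) desc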

Anyhow, without further ado, here's the script. Any corrections etc would be welcomed.

Cheers,
Greg

/**********
* fn_virtualfilestats() script
* Greg Linwood
* greg@SolidQualityLearning.com
* Feb 25th, 2006
*
* Collects fn_virtualfilestats() for all databases
* on either SQL 2000 or SQL 2005 instances.
**********/


set nocount on
go
if object_id('tempdb..#filestats') is not null
drop table #filestats
go

declare @ver int
set @ver = cast(serverproperty('Productversion') as char(1))

create table #filestats (
DbId int not null,
DBName varchar (255) not null,
fileid int not null,
FileName varchar (255) not null,
SampleTime datetime not null,
TS bigint null,
NumberReads bigint null,
BytesRead bigint null,
IoStallReadMS bigint null,
NumberWrites bigint null,
BytesWritten bigint null,
IOStallWriteMS bigint null,
IOStallMS bigint null,
BytesOnDisk bigint null,
constraint pk primary key (DbId, fileid, SampleTime))


declare @database_name varchar(255)
declare @database_id int
declare @file_id int
declare @file_name varchar (1000)
declare @sql nvarchar(4000)


declare cr cursor for
select sd.name, sd.dbid, saf.fileid, saf.name
from sysaltfiles saf
join sysdatabases sd on saf.dbid = sd.dbid
order by sd.name, saf.fileid

open cr

fetch next from cr
into @database_name, @database_id, @file_id, @file_name

while @@fetch_status = 0

begin

if @ver = 8
begin

select @sql = N'
insert into #filestats (DBName, FileName, DbId, FileId, SampleTime,
TS, NumberReads, BytesRead, NumberWrites, BytesWritten, IOStallMS)
select @dn, @fn, DbId, FileId, getdate(), [Timestamp], NumberReads,
BytesRead, NumberWrites, BytesWritten, IOStallMS
from :: fn_virtualfilestats(@database_id, @file_id) '
execute sp_executesql @sql
, N'@dn varchar (255), @fn varchar (255), @database_id int, @file_id int'
, @dn = @database_name
, @fn = @file_name
, @database_id = @database_id
, @file_id = @file_id
end

if @ver = 9
begin
select @sql = N'
insert into #filestats (DBName, FileName, DbId, FileId, SampleTime,
TS, NumberReads, BytesRead, IoStallReadMS, NumberWrites, BytesWritten,
IOStallWriteMS, IOStallMS, BytesOnDisk)
select @dn, @fn, DbId, FileId, getdate(), [Timestamp], NumberReads,
BytesRead, IoStallReadMS, NumberWrites, BytesWritten,
IOStallWriteMS, IOStallMS, BytesOnDisk
from :: fn_virtualfilestats(@database_id, @file_id) '
execute sp_executesql @sql
, N'@dn varchar (255), @fn varchar (255), @database_id int, @file_id int'
, @dn = @database_name
, @fn = @file_name
, @database_id = @database_id
, @file_id = @file_id
end

fetch next from cr
into @database_name, @database_id, @file_id, @file_name

end
go
close cr
deallocate cr
go
select * from #filestats
go
if object_id('tempdb..#filestats') is not null
drop table #filestats
go
11/02/2006
Analysing SQL Server deadlocks can be a tedious process. I'm currently working on a server that's experiencing a few & I wanted a script that could easily pull the deadlock events out of a sql error log, to save me from fishing through all of the backup records & other superfluous information in the log file.

This script assumes you're logging detailed deadlock graph reports to the sql error log first. If you suspect that you're experiencing deadlocks (or positively confirm this from a SQL Profiler trace, which I do by using SQLBenchmarkPro - http://www.GajSoftware.com), you run this command from the Query Analyser:

DBCC TRACEON (1204, 3605, -1)

1204 represents "record deadlock graph reports when deadlocks occur"
3605 represents "log these reports to the sql error log, rather than the console"
-1 represents "log events for ALL user connections"

Once you've set trace flag 1204 on as outlined above, deadlock graph reports appear in the SQL event log whenever a deadlock occurs. Hence, you need to have set this trace flag on in advance of the deadlock problem to give you much chance of solving the problem..
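
If you want to confirm the flags are actually on, a quick check is:

dbcc tracestatus (-1) -- lists all globally enabled trace flags; 1204 & 3605 should appear

Also bear in mind that flags set with DBCC TRACEON don't survive a service restart - add them as -T startup parameters if you need them in place permanently.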

You can then poll the SQL error log for deadlock information using the script below. Solving them, however, is a far lengthier topic for another day when I've got a bit more time (c:

create table #errorlog (
rowid int identity (1, 1),
errorlog varchar (8000),
ContinuationRow int)

insert into #errorlog (errorlog, ContinuationRow)
exec master.dbo.sp_readerrorlog

select distinct el3.*
from #errorlog el3
join (select errorlog
from #errorlog el1
join (select rowid
from #errorlog
where errorlog like 'Deadlock encountered%') el2 on el1.rowid = el2.rowid - 1) el4
on convert(varchar(27), el3.errorlog) = convert(varchar(27), el4.errorlog)
where (el3.errorlog like '%Node%'
or el3.errorlog like '%Input Buf%'
or el3.errorlog like '%Mode%'
or el3.errorlog like '%Statement%'
or el3.errorlog like '%ResType%')
--and convert(varchar(10), el3.errorlog) > '2006-02-08'
order by rowid
1/02/2006
One of my customers is currently converting to SQL Server 2005 Ent Edn 64bit & was disappointed today to observe a query which had been running well on SQL 2000 perform extremely badly on SQL 2005 in the test lab. So I asked for the 2000 / 2005 execution plans to be sent over & identified that SQL 2005 was parallelising the query where SQL 2000 had been running the query serially.

Given that this customer's system performs an exclusively OLTP workload, I recommended that they test turning off query parallelism (by setting 'Max degree of parallelism' via sp_configure). They're presently testing as I'm writing this, but I'll report on the outcome when they get back to me.

The main point of this blog is to point out that, in most cases, the rationale for adding additional CPU resources to an OLTP SQL Server is to provide more parallel processing power to service more concurrent user requests, rather than to provide for parallelism within individual queries.

Transactions within OLTP systems tend to have considerably smaller resource requirements than those of Decision Support Systems (DSS) & therefore they typically run fairly well serially. Very often, when SQL Server decides to parallelise queries that comprise OLTP transactions, it does so for the wrong reason - eg out of date index statistics influence the cost based optimiser to scan rather than seek into an index & hence the decision to try & parallelise the over-estimated work-load.

Decision Support Systems are different of course, because it's handy to divide the work of monster data warehouse queries across multiple CPUs, but I'm talking about OLTP systems specifically here.

So, consider turning off parallelism on OLTP systems if you're seeing parallel execution plans in poorly performing queries. It's usually possible to do this during periods of relatively lower activity & doesn't require a SQL Server restart or machine boot:

To turn parallelism off, you set the 'max degree of parallelism' configuration to 1, denoting that queries can only use 1 CPU, eg:

exec sp_configure 'max degree of parallelism', 1
go
reconfigure
go

The default setting is 0, which denotes that any query can use all available CPUs. If you decide that turning off parallelism didn't achieve a noticeable improvement in performance, you can simply turn it back on by setting it back to 0, eg:

exec sp_configure 'max degree of parallelism', 0
go
reconfigure
go
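
One small gotcha that's easy to trip over: 'max degree of parallelism' is an advanced option, so if sp_configure complains that the option doesn't exist or is advanced, turn on advanced options first:

exec sp_configure 'show advanced options', 1
go
reconfigure
go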

You can also set it to a specific number, based on however many CPUs you want to limit query parallelism to, but I've found this to be a relatively rare requirement.

Another option is to limit specific queries by adding the MAXDOP option, eg:

select * from pubs..authors
option (maxdop 1)

This technique has the advantage of limiting specific poorly performing queries without turning off parallelism for the entire server, but the downside is that you've got to add the extra OPTION hint to the SQL code, which often isn't possible with 3rd party vendor apps or internal apps with tightly managed source control.

Poor OLTP performance resulting from parallelism is a fairly common scenario I see in my performance tuning travels so today's episode prompted me to blog about it for posterity (c:

Cheers,
Greg Linwood
30/01/2006
I've been relogging (relog.exe) directories of perfmon counter files for years using a dusty old script when doing performance tuning assignments. Today I thought I'd pretty the script up for public consumption as my excuse for a reasonable blog entry.

When you undertake a SQL Server performance tuning engagement, you need to decide which & how much performance data you want to collect. If you haven't collected enough or the correct data, at some stage down the track you might have to repeat the exercise. Hence, I usually tend to suggest collecting a wider range of performance counters than you initially think you need, but this data can be difficult to manage if you don't have decent scripts handy to strip out the data you need later when performing your analysis.

Stripping .csv perfmon files is sometimes a tricky job because many text editors have limitations & even tools like DTS can be finicky at times.

I sometimes use the following script, which utilises Windows' RELOG.EXE to automate the process of separating a specific perfmon counter from a whole directory of .csv perfmon log files. For example, I'll often configure a perfmon log to collect all Physical / Logical disk counters, then strip out the specific counters (eg Avg Read Queue Length or Disk % Idle Time) against specific disks or volumes using this script to zero in on specific problems (eg TempDB TLOG volume or Data volume i/o queuing).
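
For reference, the command the script builds & shells out for each file boils down to something like this (the server, counter & paths here are placeholders only):

exec master..xp_cmdshell 'relog C:\PerfmonLogFiles\MyLog_000001.csv -c "\\PRODSQL1\PhysicalDisk(0 D:)\Avg. Disk Read Queue Length" -o C:\PerfmonLogFiles\tempout.csv -f CSV'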

With a little further scripting, you could automate things further by creating a scheduled SQL Agent task to poll a directory where you're logging perfmon counters and update a performance counter database. If you're super keen, you could even wrap a Reporting Services graph around the data & publish it to your intranet. Good luck! (c:

Cheers,
Greg Linwood

/**********
* Relog Perfmon .csv files script
* Greg Linwood
* greg@SolidQualityLearning.com
* Jan 30th, 2006
*
* Strips (Relogs) a specific performance counter from a directory
* of perfmon .csv files & loads them to a permanent table..
**********/

set nocount on

/*
* SETUP
*/

--create a permanent table to store your counters
--create table tempdb..mycounters (countertime datetime, counterval decimal (15, 3))
--truncate table tempdb..mycounters

--user configurable variables
declare
@perfdir varchar(255) -- directory containing .csv perfmon files
, @archivedir varchar(255) -- archive directory for processed files
, @counter varchar(1000) -- counter name to be re-logged

-- alter this to point to your perfmon .csv file directory
set @perfdir = 'C:\PerfmonLogFiles\'
-- alter this to point wherever you want processed files to be archived (you might need to add this dir)
set @archivedir = 'C:\PerfmonLogFiles\Archive\'
-- alter this to be the counter name you want re-logged
set @counter = '\\PRODSQL1\SQLServer:Buffer Manager\Page life expectancy'

/*
* RUN
*/

-- run-time variables (don't change)
declare
@filename varchar(200) -- filename variable for cursor
, @sql varchar (1000) -- dynamic sql variable

-- temp table to list the .csv perfmon files to work through
create table #files(filename varchar(200), d int, f int)
-- staging table within cursor to bulk load re-logged counter values
create table #counters (dt varchar(50), val varchar(50))

--populate #files via call to xp_dirtree
insert #files execute master.dbo.xp_dirtree @perfdir, 1, 1
delete from #files where f != 1

--for every file to be re-logged...
declare cr cursor for
select filename from #files order by filename
open cr
fetch next from cr into @filename
while @@fetch_status = 0
begin
--relog counters out to tempout.csv file
set @sql = 'exec master..xp_cmdshell ''relog '
set @sql = @sql + @perfdir + '' + @filename + ' -c "'
set @sql = @sql + @counter + '" -o ' + @perfdir + 'tempout.csv -f CSV'''
exec(@sql)

--bulk load counters from tempout.csv to #counters staging table
set @sql = 'bulk insert #counters from ''' + @perfdir + 'tempout.csv'' '
set @sql = @sql + 'with (FIELDTERMINATOR = '','', FIRSTROW = 2)'
exec(@sql)

--delete tempout.csv file
set @sql = 'exec master..xp_cmdshell ''del ' + @perfdir + 'tempout.csv'''
exec(@sql)

--move this perfmon .csv file to the archive dir
set @sql = 'exec master..xp_cmdshell ''move ' + @perfdir + ''
set @sql = @sql + @filename + ' ' + @archivedir + @filename + ''''
exec(@sql)

--strip any double quotes from counter datetimes / values
update #counters set dt = replace(dt, '"', ''), val = replace(val, '"', '')

--move re-logged counters to your permanent table
insert into tempdb..mycounters
select convert(datetime, dt) as dt
, convert(decimal(15, 3), val) as val
from #counters where isnumeric(val) = 1

--truncate #counters staging table for next cursor iteration
truncate table #counters

fetch next from cr into @filename
end
go
--cleanup cursor
close cr
deallocate cr
go
--cleanup temporary tables
if object_id('tempdb..#files') > 0 drop table #files;
if object_id('tempdb..#counters') > 0 drop table #counters;
go
--output some sample results
select top 20 * from tempdb..mycounters
17/01/2006
Helena & I just got back from a three week trip to China so I'm back at the desk & into it again. The main purpose of our trip was to meet my in-laws who are fantastic but we also spent a bit of time sight-seeing & I was totally blown away by the whole place. Beijing was amazing - its historical sights including the Great Wall, Forbidden City, Temple of Heaven & the Ming Emperor Tombs are a must-see. Helena's family live in an inland city named HuiZhou which has a population of ~1M & is to them rather like Bendigo is to Victorians - a big country town. It's a beautiful place - with ornamental lakes, temples etc as well as a vibrant local community. In the three weeks I was there I didn't see a single westerner the entire time & all the locals seemed to have something wrong with their necks because every single person craned around to check me out as we were walking around. Must be what it's like to be a super-model (c:

My new years' resolution is to keep this blog more up to date, so I'll start off with a few thoughts about what we can do about this website to spruce things up a bit. First, I knocked together a jobs system so that job advertisers (recruiters & employers) can list their SQL Server related jobs on our website for members to view. Like everything else associated with this user group, we're not charging for this - jobs are just another facet of our SQL Server community that's worth disseminating here. I've called a few recruiters & they're all interested in the idea so some jobs should start flowing through in the coming weeks.

We've been sending out meeting invitations via a very manual process until now & I'm working on a back-end system for our meeting organisers to use which should come out in a few weeks or so. Hopefully this will improve communications a little & let the local meeting admins keep more in touch with their local members.

Another thing I'd like to see on our website is a list of training opportunities as training is also a little part of our ecosystem. Nothing's been done in this area yet, but perhaps in the coming months we can get something organised.

Rob Farley's been doing an amazing job with the Adelaide group lately & I've been blown away by the way he's organised webcasts of his recent meetings. Charles Sterling's been helping a lot there too so thanks Chuck. I'm going to try & bring Melbourne into the webcasts as well this year & hopefully upload them to our site as well for off-line viewing for members who can't make it to the meetings.

There are lots more things we can do to spruce this place up - these are just a smattering of ideas & I'm sure there are plenty more out there. If only there were more hours in a day!

1/11/2005
I was working on some database documentation for a customer recently & noticed something very handy about having columns named uniquely in a database which might be worth taking into consideration for your naming standards documentation.

In the database I was working on, a table name indicator is prefixed to column names to "uniquefy" them. Eg, the first_name column in the customers table is named [cus_first_name]. In the employees table, the column would be named [emp_first_name]. This allows a simple table / column / procedure mapping query to be run which documents which table.columns are accessed by which procedures & vice versa, eg:

select distinct pname, tname, cname
from (select so.name 'pname', sp.text 'ptext'
from syscomments sp
join sysobjects so on sp.id = so.id
where so.xtype = 'P') procs
, (select so.name 'tname', sc.name 'cname'
from sysobjects so
join syscolumns sc on so.id = sc.id
where so.xtype = 'U') cols
where charindex(cols.cname, procs.ptext, 1) > 0

This provides a nice easy to read list in the format:
[ProcedureName], [TableName], [ColumnName]

Very handy for determining dependency & producing documentation. I put the output into Excel, threw on an auto-filter & now I can easily see which table / columns are accessed by which procedures & vice versa, which really helps with what I'm doing today.

If column names weren't unique in the database though, this wouldn't be possible without a TSQL parser. The uniqueness allows simple pattern matching rather than reliance on TSQL parsers (which aren't readily available)
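
One refinement worth considering - only a sketch, and it assumes column names contain nothing but letters, digits & underscores - is to require a non-identifier character on either side of the column name, which cuts down substring false positives:

select distinct pname, tname, cname
from (select so.name 'pname', sp.text 'ptext'
from syscomments sp
join sysobjects so on sp.id = so.id
where so.xtype = 'P') procs
, (select so.name 'tname', sc.name 'cname'
from sysobjects so
join syscolumns sc on so.id = sc.id
where so.xtype = 'U') cols
where patindex('%[^a-z0-9_]' + cols.cname + '[^a-z0-9_]%', procs.ptext) > 0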

Is this a good thing to include in a naming standards document? I guess there are arguments for & against doing this but this has been very helpful to me today so I thought I'd put it out there for others to comment on.

Personally, I like the idea as it also avoids the need for specifically aliasing columns in queries where multiple relations are joined - a discipline that's often neglected under the heat of development pressures.
2/10/2005
I'm always hunting around for one of these scripts, so I'm sticking it here for ease of reference & in case it's useful to anyone else..


select getdate()
go
if OBJECT_ID('tempdb..#spwho') > 0 drop table #spwho
go
create table #spwho (
SPID int not null
, Status varchar (255) not null
, Login varchar (255) not null
, HostName varchar (255) not null
, BlkBy varchar(10) not null
, DBName varchar (255) null
, Command varchar (255) not null
, CPUTime int not null
, DiskIO int not null
, LastBatch varchar (255) not null
, ProgramName varchar (255) null
, SPID2 int not null
)
go
insert #spwho
exec sp_who2
go
select *
from #spwho
where SPID > 50 and login != SUSER_SNAME()
order by LastBatch desc
go
if OBJECT_ID('tempdb..#spwho') > 0 drop table #spwho
25/09/2005
I sometimes need to do this when working on re-indexing databases. WHY identity columns are perfect for clustered indexes is a big topic, but here's a query to help identify tables where the clustered index is placed on OTHER columns...

/********************
* Identify where clustered indexes are created
* on columns other than identity columns.
*
* Greg Linwood
* September 2005
*
* Designed for SQL 2000 only
*
********************/
set nocount on

/* holds a list of the tables & indexes in the current database */
declare @ssvnts_table_indexes table (
tableid int not null
, indid smallint not null
, indname sysname not null
, colname sysname not null
, isprimarykey varchar (1) not null
, isunique varchar (1) not null)

/* populates with a list of the tables & indexes in the current database */
insert into @ssvnts_table_indexes
select so.id as tableid, si.indid, si.name as indname, sc.name as colname
, case when si.status & 2048 <> 0 then 'y' else 'n' end as 'IsPrimaryKey'
, case when (si.status & 2 <> 0) or (si.status & 4096 <> 0) then 'y' else 'n' end as 'IsUnique'
from sysindexes si
join sysobjects so on si.id = so.id
join sysindexkeys sik on si.id = sik.id and si.indid = sik.indid
join syscolumns sc on sik.id = sc.id and sik.colid = sc.colid
where si.status & 64 = 0
order by so.name, si.indid

/* holds a list of tables in the current database - used as a temp workspace */
declare @ssvnts_table_info table (
tableid int not null
, tablename varchar (255) null
, identity_column varchar (255) null
, Has_A_Clustered_Index int null
, Identity_Column_Is_In_CIX char (1) null
, No_Of_Columns_In_CIX int null)

/* populates a list of tables in the current database */
insert into @ssvnts_table_info
select tableid
, convert(varchar(255), name) as tablename
, convert(varchar(255), '') as 'Identity_Column'
, convert(integer, min(indid)) as 'HasClusteredIndex'
, convert(char(1), 'u') as 'Identity_Column_Is_In_CIX'
, convert(integer, -1) as 'No_Of_Columns_In_CIX'
from @ssvnts_table_indexes ti
join sysobjects so on ti.tableid = so.id
where so.xtype = 'U'
group by tableid, name

/* if min(indid) is 0, table is a heap */
update @ssvnts_table_info
set Has_A_Clustered_Index = 0
where Has_A_Clustered_Index > 1

/* syscolumns.colstat & 1 signifies the column is an identity */
update @ssvnts_table_info
set Identity_Column = tic.Identity_Column
from @ssvnts_table_info ti
join (select so.id as tableid
, sc.name as 'Identity_Column'
from sysobjects so
join syscolumns sc on so.id = sc.id
where (sc.colstat & 1 = 1)) tic on ti.tableid = tic.tableid

/* count the columns in the clustered index (indid = 1) */
update @ssvnts_table_info
set No_Of_Columns_In_CIX = colsincix
from @ssvnts_table_info ti
join (select tableid, count(indid) as colsincix from @ssvnts_table_indexes where indid = 1 group by tableid) dv on ti.tableid = dv.tableid

/* identify where the identity column is in the clustered index */
update @ssvnts_table_info
set Identity_Column_Is_In_CIX = 'y'
from @ssvnts_table_info ti
join (select tableid, colname, count(indid) as colsincix from @ssvnts_table_indexes where indid = 1 group by tableid, colname having count(indid) = 1) dv on ti.tableid = dv.tableid and ti.Identity_Column = colname
update @ssvnts_table_info
set Identity_Column_Is_In_CIX = ''
where Identity_Column_Is_In_CIX != 'y'

/* return output */
select *
from (select case when Has_A_Clustered_Index != 1
or Identity_Column_Is_In_CIX != 'y'
or No_Of_Columns_In_CIX != 1 then 'no' else 'yes' end as Clustered_Index_Choice
, ti.*
, ts.sizedec
from @ssvnts_table_info ti
join (select dv.id, upper(name) as [table_name]
, dv.sizemb as sizedec
from (select so.id, convert(varchar(255), so.name) as name
, convert(decimal(15, 1), round((sum(convert(decimal(15, 1), si.dpages) * convert(decimal(15, 1), 8192)) / 1048000), 1)) as sizemb
from sysindexes si join sysobjects so on si.id = so.id join sysfilegroups sf on si.groupid = sf.groupid
where objectproperty(so.id, 'IsUserTable') = 1
group by so.id, so.name) as dv ) ts on ti.tableid = ts.id) c
where Clustered_Index_Choice != 'yes'
order by sizedec desc
6/04/2005
I just read a CNN news release on Google's new mapping service whilst chewing on a very tasty sandwich. At first I wondered what the heck I'd actually do with such a service, then I thought I'd drop by Redmond & make sure they're hard at it on SQL Server 2005. Looks to me like they're all inside, tapping away at their keyboards on a nice sunny day (c: http://maps.google.com/maps?ll=47.643499,-122.124324&spn=0.008401,0.005922&t=k&hl=en (you might have to zoom in once further)
29/03/2005
I'm always hunting around my hard-drive for a query that lists indexes in a database, so here's one for everyone's (including my own) future reference (SQL2K only)

select so.name as 'tablename'
, si.indid
, si.name as indname
, sik.keyno as colorder
, sc.name as colname
, case when si.status & 2048 <> 0 then 'y' else 'n' end as 'IsPrimaryKey'
, case when (si.status & 2 <> 0) or (si.status & 4096 <> 0) then 'y' else 'n' end as 'IsUnique'
, case when objectproperty(so.id, 'IsMSShipped') = 1 then 'y' else 'n' end as 'IsMSShipped'
from sysindexes si join sysobjects so on si.id = so.id
join sysindexkeys sik on si.id = sik.id and si.indid = sik.indid
join syscolumns sc on sik.id = sc.id and sik.colid = sc.colid
where si.status & 64 = 0 -- optionally filter to one table: and so.name = 'YourTable'
and objectproperty(so.id, 'IsMSShipped') = 0
order by so.name, si.indid, sik.keyno
23/03/2005
Someone asked how to collect the multi-resultset output from DBCC SHOW_STATISTICS in the SQL MVP forum today. This is a tricky thing to do in TSQL because there are no features in TSQL to catch multi-resultset output from stored procs, DBCC etc.

You can use the INSERT / EXEC sp.. technique to collect output from procs that return ONE resultset, but that doesn't help if the proc returns MANY resultsets & you need info from the [n]th resultset.

You can use the WITH TABLERESULTS option with some DBCC commands, but again, this is no help if the DBCC statement returns multiple resultsets (such as DBCC SHOW_STATISTICS) & you need to collect all of the output.
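
For anyone who hasn't bumped into the INSERT / EXEC technique, here's a quick sketch of the single-resultset case it does handle - DBCC SQLPERF(LOGSPACE) is just a convenient one-resultset example:

create table #logspace (
dbname varchar (255)
, log_size_mb float
, log_space_used_pct float
, status int)

insert into #logspace
exec ('dbcc sqlperf(logspace)')

select * from #logspace
drop table #logspace

Try the same trick against DBCC SHOW_STATISTICS though & you run straight into the multi-resultset problem described above.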

I'd previously written component code to do something like this in VB & C#, but it struck me when I saw this question today that another option would be to use DTS's ActiveX script object.

So I flung together a few scripts & offered them up to the MVP forum as an option. I'm sure those guys will improve on these scripts, but I've loaded them up to the Resources section in case they're of any use to anyone.

These have been put together with DBCC SHOW_STATISTICS hard-coded, but it would only take a little more work to make the solution a little more generic. I'm just a little lazy right now. (c:

The scripts can be found here:
http://www.sqlserver.org.au/Resources/ViewResource.aspx?resourceId=34
17/02/2005
The Australian SQL Server User Group Melbourne meetings kicked off this year on Tuesday Feb 15th with an excellent presentation from Peter Myers (Tenix Solutions) on SQL Server 2005's BI features, including a drill down into the new Unified Dimensional Modelling (UDM) & One Click Cube technologies.

I've posted more in the events feedback forum here:
http://www.sqlserver.org.au/forums/ShowPost.aspx?PostID=241
9/02/2005
A customer asked me late last year to find any available literature on if / how the optimiser takes current resource usage - locks, cpu, memory, disk, # users logged on etc - into account when determining execution plans..

I was sure I had read in the past that the optimiser does in fact make at least some execution plan decisions based on server resources but couldn't remember exactly what.

I wasn't able to find anything published on this anywhere. Not much turned up in BOL, printed books or the KB so I posted into the private MVP group to see what I could fish up from the SQL team & ended up with an interesting response.

After a bit of discussion around the topic the SQL Server Query Optimizer Team Program Manager, Eric Hanson, added this detailed response to the discussion. I thought it would be worth sharing here in case anyone else finds this of use..

======================================================

I talked to some of our developers and have some more to add regarding the information considered by the optimizer about current system load.

* the query executor will strip out parallel operators from the plan and run the plan serially when the plan is executed if the load on the system is such that not enough threads can be allocated to run the plan with DOP 2 or more

* the query executor attempts to allocate a fairly large fraction of available threads (around half the remaining ones) to an incoming query with a parallel plan. As the system load increases, at some point there may not be enough threads to run a newly dispatched parallel plan with DOP 2 or more. This would cause the system to strip out parallel operators and run it serially. This can lead to some non-obvious behavior, e.g. I start 3 queries at the same time that would take one hour each if run in parallel alone on a parallel machine. The first one to start grabs a lot of processors and runs in about 1 hour. The second one grabs less processors and runs for about 2 hours. The third one is stripped of parallel operators and runs serially, taking, say, about 8 hours. Plans are not killed and restarted, or given more threads, even though the machine's processors may free up while the plan is executing.

* We can keep 2 plans in cache for the same query, one parallel plan and one serial plan (this is relevant if you change the system's DOP setting with sp_configure while some plans are in cache already)

* when a query comes in, if the number of idle processors is <=1, then "serial plan" is specified as part of the search key during plan cache lookup. If the number of idle processors is >1, then "parallel plan" is specified during cache lookup.

* the ability to keep two plans in cache for the same query (parallel & serial) is in both Shiloh and Yukon

* the cost estimator of the optimizer does consider available buffer space owned by the SQL Server process in cost calculations. This is used in some cases to estimate page access cost, where a cache hit ratio is predicted. Since available buffer space owned by SQL Server can go up and down with system load, and as a function of how long SQL Server has been running, and thus how much memory it has grabbed from the OS, this can affect plan selection.

6/02/2005
It's been a while since I've blogged, so here's an update on a few things that have been going on with the site lately.

Dec / Jan saw no meetings for the Melbourne user group but we're soon kicking off the 2005 meetings with an overview of SQL 2005 Business Intelligence features by Peter Myers on Feb 15th.

In the break, I've been quite busy making plans to run a SQL Server conference in Australia in approximately 12 months. I've always felt it's a shame that we have to travel around the globe to see conference material such as PASS US / Europe so I'm very keen to have something like that over here. I've been lining up speakers, venues, catering, sponsors, staging & theming. Man, there's a lot to organising these events! More on this soon..

WebCentral have also organised sponsorship of an email service to host our sqlserver.org.au email addresses over the break. This should help us solve problems with some of our email based processes such as meeting invitations, reminders, site signups etc. These processes previously suffered because we spoofed hotmail accounts when sending mail & many of your mail servers were rejecting the emails as spam. So hopefully more of our emails will reach you with less fuss now. WebCentral were already sponsoring us with a dedicated hosted server which is a very valuable sponsorship on its own, so thanks a lot for the extra email sponsorship! (c:

I met with a few of the other user group organisers in Sydney last monday to discuss website features & content. Unfortunately we didn't discuss features & content much but had a long discussion about changing our website framework from the current Hurricane (www.projecthurricane.com) framework to DotNetNuke instead. Anyway, we're evaluating DNN over the next month & we'll see how this goes.
22/12/2004
Swedish SQL MVP Tibor Karaszi announced the opening of the Swedish SQL Server User Group website yesterday at:

http://www.sqlug.se.

That's a nice looking website (c:

Congrats Tibor & hope the user group is a flying success.
21/12/2004
I regularly calculate database sizing details for various tasks & use a few scripts for different scenarios. For example, when investigating a performance problem on a database I've not seen before, I usually grab a snapshot of which are the biggest tables, broken down by individual index sizes. I then identify which tables / indexes are the largest & keep that list in the back of my mind when making other observations.

There are lots of scripts to do this, but a few people have asked me for the one I use, so I'll post it here in case it's useful to anyone. Heck, I'll be able to grab it occasionally myself when I need it as well! (c: It's a nasty little script & can probably be improved so grab the scissors & cut it up if you feel inclined.

You toggle the dbcc statement on / off depending on whether you want to refresh the dpages & rowcnt columns in sysindexes before measuring. I usually do this on a backup of a prod database so I'm not interrupting online users, but I've often found it runs fast enough to get away with in periods of low activity. If you're only after specific tables, you can add params to that dbcc command to focus its work on the tables you're interested in.
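
For example (the table name here is just a placeholder), to refresh the counts for a single large table rather than the whole database:

dbcc updateusage (0, 'dbo.YourBigTable') with count_rows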

The script's output format is fairly straight forward - it includes the table name, total table size (all indexes), index name, indexid & index size.

I'll often grab the first ten rows, throw them into a spreadsheet graph & use the output for a nice visual view of the biggest db object sizes. Often, the DBA won't ever have seen that info & is sometimes very surprised to see how big some tables in his / her db really are..

Anyway, enough blab - here's the nasty stuff.

set nocount on
--dbcc updateusage(0) with count_rows

declare @tables table (
sizerank int identity (1, 1) not null
, table_name varchar (255) null
, table_total_size_dec decimal(18, 1) null
, table_size_measure varchar(10) null
)

insert into @tables (table_name, table_total_size_dec, table_size_measure)
select upper(name) as [table_name]
, convert(decimal(15, 1), left(sizemb, len(sizemb)-2)) as sizedec
, right(sizemb, 2) as measure
from (select convert(varchar(255), so.name) as name
, case when sum(convert(decimal(15, 1), si.dpages) * convert(decimal(15, 1), 8192)) / 1024 > 1000
then convert(varchar(255), convert(decimal(15, 1), round((sum(convert(decimal(15, 1), si.dpages) * convert(decimal(15, 1), 8192)) / 1048000), 1))) + 'mb'
else convert(varchar(255), convert(int, round((sum(convert(decimal(15, 1), si.dpages) * convert(decimal(15, 1), 8192)) / 1024), 0))) + 'kb'
end as sizemb
from sysindexes si join sysobjects so on si.id = so.id join sysfilegroups sf on si.groupid = sf.groupid
where objectproperty(so.id, 'IsUserTable') = 1
group by so.name) as dv
order by right(sizemb, 2) desc, convert(decimal(15, 1), left(sizemb, len(sizemb)-2)) desc

select t.*, indname, indid, convert(decimal(15, 1), left(indsizemb, len(indsizemb)-2)) as indsizedec, right(indsizemb, 2) as indmeasure
from @tables t
left join (
select convert(varchar(255), so.name) as tblname
, convert(varchar(255), si.name) as indname
, si.indid
, case when sum(convert(decimal(15, 1), si.dpages) * convert(decimal(15, 1), 8192)) / 1024 > 1000
then convert(varchar(255), convert(decimal(15, 1), round((sum(convert(decimal(15, 1), si.dpages) * convert(decimal(15, 1), 8192)) / 1048000), 1))) + 'mb'
else convert(varchar(255), convert(int, round((sum(convert(decimal(15, 1), si.dpages) * convert(decimal(15, 1), 8192)) / 1024), 0))) + 'kb'
end as indsizemb
from sysindexes si join sysobjects so on si.id = so.id join sysfilegroups sf on si.groupid = sf.groupid
where si.dpages > 0
group by so.name, si.name, si.indid
) as dvtablesindexes on t.table_name = dvtablesindexes.tblname
where table_total_size_dec > 0
order by sizerank, right(indsizemb, 2) desc, convert(decimal(15, 1), left(indsizemb, len(indsizemb)-2)) desc
16/12/2004
One of the things I like most about the SQL Server MVP program is that MVPs are asked to submit a list of issues every month to our "MVP Lead" - Steve Dybing. Steve takes this list & publishes it to various people & even follows some of the better ideas up with the SQL Server development team members. Steve's a pretty good interpreter of good ideas on his own as he's been with Microsoft & the SQL team for 15 years and definitely knows the ropes over there.

Anyhow, I usually take a bit of time to think what issues I might submit each month because it's a great opportunity to get a message through to the SQL dev team about things I'd like to see improved with SQL Server. I'm going to start blogging these ideas here in case anyone's interested in either reading them or perhaps even contributing ideas. Of course there are no promises with this, but it's at least a channel for communication for anyone interested.

This month, I'm going to be submitting a request that the internal 208 errors thrown by the profiler be removed as these have really been getting up my snout lately. My constant error detection traces have been flooded with gazillions of these meaningless messages & it's making my already complicated life slightly more difficult. It's time to pipe up & see if something can be done. One of my constant error detection traces picked up ~13,000 error events last week alone, nearly ALL of which were these nasty little critters.

I've discussed this problem with other DBAs, MVPs & even some SQL dev team members over the past couple of years and it seems the source of the 208s is a design flaw in deferred name resolution. MVP Erland Sommarskog (Sweden) posted this to a recent MVP NG thread in October:

Unfortunately, a 208 in the Profiler does not have to be a real 208. This procedure will fire a 208 in Profiler:

CREATE PROCEDURE temptest AS
CREATE TABLE #abc (a int NOT NULL)
INSERT #abc ( a ) VALUES (23)

This is because when the procedure is parsed #abc does not exist. The exception is handled internally, all in the vein of the (mis)feature known as deferred name resolution. This happens both in SQL2000 and in the latest Yukon drop.
(used with permission)

It appears that the problem's continuing into SQL Server 2005, although I haven't tested this out on a recent build personally.

If there was some way of accurately filtering these blighters out of trace logs, the problem could be mitigated somewhat, but the danger with simply slapping a filter on the error data column is that you'd miss the REAL 208s which might be thrown by your app. Real 208s are serious problems because they can abort execution mid-batch, orphaning transactions and potentially clogging up the db with dependent lock blocking chains. (I've seen this happen on real systems) So IMO, it's not really a good option to simply stick a filter on such as error!=208.

So this is my entry to the MVP monthly product issues list. I'd like to see the internal deferred name resolution use some other dedicated error number for reporting its errors so that we can filter the real 208s from the internal 208s.
8/12/2004
I've just been introduced to Brian Madsen by Microsoft IT Pro Evangelist Michael Kleef this week which is exciting enough on its own, but perhaps a little more to the point, Brian will be organising the first SQL Server User Group meeting in Perth early next year.

Brian's now signed up as a website admin & we'll be looking forward to seeing SQL Server events popping up in the wild west through 2005. Good luck Brian & I hope that using this site makes getting up & running a little easier for you.
1/12/2004
#temp tables and @table variables are often used to optimise stored procedure performance by breaking down large queries (many joins) into smaller units of work or materialising resultsets to perform additional processing logic in isolation from other concurrent processes running in the database. These are effective & commonly used TSQL coding techniques but there's a dark side to over-using temp objects.

Developers have a tendency to think of temp objects, especially @table variables, as in-memory objects. This is a reasonable assumption given that most modern languages implement variables as in memory objects. However, TSQL is not one of those modern languages and the creation of / population of temp objects are operations which are always written to disk. Twice.

SQL Server uses similar mechanics to manage temp objects that it does for permanent objects. Any write operations against temp objects (inserts, updates & deletes) are first written to the tempdb's transaction log (the first disk hit) & then on a lazy basis to the tempdb's data volume (the second disk hit). SQL Server implements temp objects this way to guarantee that rowsets are fully written to temp objects or not at all in an environment where limited memory is available. This is natural for an RDBMS & shouldn't be that much of a surprise for DBAs who spend most of their time watching the inner RDBMS cogs ticking. Devs on the other hand often don't get down to the nitty-gritties of RDBMS internals and tend to miss this point, sometimes over-using temp objects as a result of thinking they operate in-memory (like arrays, right?).

And who can blame devs for thinking this when Microsoft publish confusing articles such as KB305977 which states (mid way through):

Q4: Are table variables memory-only structures that are assured better performance as compared to temporary or permanent tables, because they are maintained in a database that resides on the physical disk?

A4: A table variable is not a memory-only structure. Because a table variable might hold more data than can fit in memory, it has to have a place on disk to store data. Table variables are created in the tempdb database similar to temporary tables. If memory is available, both table variables and temporary tables are created and processed while in memory (data cache).

http://support.microsoft.com/default.aspx?scid=kb;en-us;305977

This article has been incrementally updated over the past few years & at least the current version indicates that temp objects aren't in-memory objects, but it's still a bit misleading in that it doesn't even mention hits against the TLOG. It also suggests at the end of the answer that temp tables are processed in memory where enough memory exists. This is quite misleading as all *updates* are actually processed on disk in the TLOG (you can check this yourself using a log reader or fn_dblog()). When it says that temporary tables are "created" in memory, it's just plain wrong. What *might* happen in memory is cache re-use if you're performing multiple reads against a single temp object & there's not much memory pressure on the tempdb.
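
If you want to see this for yourself, here's a rough sketch along the lines of the fn_dblog() check mentioned above - run it on a quiet test instance, because tempdb's log is shared with everything else on the server & is constantly being truncated:

use tempdb
go
declare @before int
select @before = count(*) from ::fn_dblog(null, null)

declare @t table (i int not null, filler char(100) not null)
insert into @t values (1, replicate('x', 100))

-- any increase here is log activity generated on behalf of the
-- "in-memory" table variable (plus whatever else the server was doing)
select count(*) - @before as new_tempdb_log_records
from ::fn_dblog(null, null)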

I'm not against the use of #temp tables or @table variables - they're useful objects & I use them regularly. There are some great advantages - with @table variables, stored procs are recompiled less regularly, tempdb tlog records are auto-truncated & you can perform non-atomic operations when used within a user transaction. They have their place & are very much a useful tool for developing stored procs.

Be aware that temp objects are not in-memory objects & they can hit the tempdb's TLOG very hard if you use them a lot & your database is subject to high levels of concurrent usage. Sometimes it's better to move the rows you're thinking of processing in a temp object up to the data tier & work with them in arrays, datasets, enumerable objects etc as then you'll really be working with in-memory objects. There are other flow on benefits from doing this such as improved cache page life-expectancy on the database server (the tempdb isn't using the buffer pool as heavily) as well as greater scalability from use of unlimited (farmed) data tier memory rather than the limited buffer pool cache on the database server.
27/11/2004
I picked up a good tip from Gert Drapers (MS) in the private MVP forum this week which was helpful to me so I'll post it here in case it's useful to anyone else.

I was trying to identify which specific indexes are suffering excessive page splits in a customer database but the monitoring toolset didn't offer quite the help I needed.

SQL Profiler's events didn't seem to cover it at all & Perfmon only has the SQL Server:Access Methods\Page Splits/sec counter. That counter is useful for telling you how many Page Splits are occurring on a specific server but offers no information on which database they're occurring in, let alone which index.

So I asked the question in the MVP private newsgroup & Gert posted this solution:


You can do this:

use YourDB
go

select [Object Name], [Index Name]
from ::fn_dblog(null, null)
where Operation = N'LOP_DELETE_SPLIT'

This operation indicates the deletion of the rows that are moved because of a page split.


This clever solution is reading the transaction log, looking for log records which reveal page split events, & includes the index name in the output. I altered it slightly to give me an aggregated view of how many such log records exist per index & have since wrapped it into a utility which gets run prior to transaction log backups & database backups & stuffs the results into a maintenance table. Very handy for identifying which tables are being hit hard by page split events.

My query looks more like this:

select [Object Name], [Index Name], count([Current LSN])
from ::fn_dblog(null, null)
where Operation = N'LOP_DELETE_SPLIT'
group by [Object Name], [Index Name]
23/11/2004
IBM posted a whopping TPC-C record a few days ago, more than triple Oracle's previous single server record & nearly triple Oracle's previous clustered server record.

They've achieved 3.2 million transactions per minute @ US$5.19 per tpmC. The server cost US$16M, ran AIX 5L 5.3, DB2 UDB 8.2, 64x 1.9Ghz Power5 CPUs, 2TB of RAM and an amazing number of disk spindles.

I checked out the IBM website for more info on Power5. It is a dual core chip, with technology they're branding "Hardware Multi-Threading" which I don't fully understand but it sure seems to fly. SQL Server once ran on Power architecture (c:

It's been a while since a large-scale SQL Server TPC benchmark was released. This new monster TPC benchmark seems like it will give IBM bragging rights for a while, but it'll be interesting to see what SQL 2005 can do once it gets closer to the "6 month prior to availability" TPC benchmarking criteria.
22/11/2004
SQL Server's current 3 High Availability features - Clustering, Replication & Log Shipping - are all good for different purposes but each has its own draw-backs & none are good for all scenarios. Eg, clustering can be expensive to implement & you don't get geographically dispersed disk level fault tolerance. Some services (eg Analysis Services) won't even run on clustered servers. Replication has some advantages but is complicated & has far too many moving parts to be considered simple or self managing. And Log Shipping requires a little too much manual intervention for many non-specialist SQL Server IT Pros.

SQL Server 2005's Database Mirroring technology looks like a great middle ground technology, with the best of Clustering & Log Shipping built in. It's conceptually similar to Log Shipping in that transactions are "synchronised" from a primary server to a stand-by server, but health is monitored by a "Witness" server and automatic fail-over can be configured (like Clustering). Database Mirroring doesn't require certified hardware & can work over geographically dispersed locations like Log Shipping.
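
For the curious, configuring a mirroring session boils down to a couple of ALTER DATABASE statements. This is only a rough sketch of the SQL 2005 syntax - the server names & port are made up, and it assumes the mirroring endpoints & a NORECOVERY restore of the database already exist on the mirror:

-- run on the mirror first (pointing at the principal), then on the principal (pointing at the mirror)
alter database YourDB set partner = 'TCP://PartnerServer.yourdomain.com:5022'
-- optionally nominate a witness from the principal to enable automatic failover
alter database YourDB set witness = 'TCP://WitnessServer.yourdomain.com:5022'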

Secure Data Group's Craig Ryan provided an exceptional presentation at the Melbourne User Group meeting last week, including a live presentation of all three current High Availability technologies & SQL 05's Database Mirroring. I was really impressed with the ease of setup & also Craig's advanced planning - sure enough, whenever you present Beta software, it's gonna let you down but Craig was prepared with a backup plan which he promptly put into action to the relief of everyone there. Great job Craig!
29/10/2004
It's been a month or so since I've blogged, but here's a quick update / rant on what's been happening with the site from an org / admin point of view in case you seriously aint got anything better to read right now (c:

(a) We kicked off our website's online registration system this month. This is a huge improvement for us because the previous arrangement (Microsoft taking registrations for us on their website) wasn't working due to privacy problems. Privacy constraints meant they couldn't give us a list of who had attended our events so we couldn't effectively promote future events. I chucked a serious wobbly with MS staff over this issue & nearly blew a gasket before coming back to earth & getting back on with the job a few weeks back. I probably owe an apology to a few MS people I went off at, who probably didn't realise the damage we suffered through July / August.

(b) A number of expert developers from around Australia have generously volunteered their time to help us develop features for the website. Thanks for the offers & work so far from:

* Greg Obleshchuk (Mel)
* Mary Bray (Can)
* Rowan Burke (Syd)
* Thomas Williams (Mel)
* Vijay Vijayaratnam (Syd)

* Phillip Carter's (Mel) also helped a great deal with testing, ideas etc

* Greg Low's (Bris) joined the fray today, which is a special blessing given his incredible talent.

We're controlling our source code with much appreciated sponsorship from SourceGear, whose Vault product lets us all work easily together whilst at the same time being thousands of kilometers apart. The best thing with Vault is that it uses SQL Server as a repository so backups are reliable & a cinch for SQL guys like us. fogBUGz have also helped out with their bug / feature tracking software which is helping us keep organised.

(c) User Group meetings are consistently happening in most states, with growing attendances. In Melb, our surveyed satisfaction rates are growing as we're learning more about what our local audience wants to hear. We've had two spanking meetings in the last two months (Greg Low on SQL05 CLR coding & Dave Lean on SQL05 DTS (SSIS)) which have received very high satisfaction feedback ratings. Greg Low's presentation was a sell-out, with 140 registering for the night. Our list of presenters is also growing which is great because getting enough good presenters has been the hardest part in all of this. I'm hoping that if we can build on this & encourage more people to try out presenting, we will be able to run an annual SQL conference in Australia sometime soon using a starting list of local presenters, augmented by a few o/s ring-ins.

(d) We've had two more Australian SQL Server MVPs awarded recently - each participating in the User Group activities. Congratulations to Grant Paisley (Syd) & Mary Bray (Canb)!

(e) I've received an amazing amount of partnership & help with this site from Rowan Burke (Syd). We're either on the email or phone nearly every day & sometimes late into the night working through little issues & ideas. This partnership effort is the backbone of the site's early progress & success.

(f) The website's membership is growing every day which is really encouraging. Thanks to everyone who's signed up & apologies to anyone who's been mucked about by our curly registration process. We're working on it at present & hopefully we'll have a few improvements up soon.



19/09/2004
The SQL Profiler tool has evolved in every major SQL Server release. Earliest versions of SQL Server came with no trace capability, then someone produced a filter utility (SQLEye) which was later the basis for MS's 6.5 SQL Trace filter utility. SQL 7.0 introduced a server event driven model but its API was way too complex for day to day scripting use. SQL 2000 separated the server / GUI components cleanly, simplified access to the tsql system proc based API and introduced a function that allowed us to load .trc log files at run time.

Each of these steps improved our access to server trace activity, adding value for developers and admins alike. However we've never been able to get at this information directly at run-time, without having to open up a GUI, click on icons etc.

Until now.

SQL 2005 now includes a trace library which provides you with access to run-time trace information, from within your program. It does have one big caveat though - you need to create a Profiler trace definition file manually first and pass the file system location of the .tdf when using the new trace library. This is a limitation if you want to DEFINE your traces at run time, but aside from this limitation, it's still a big step forward for admins to get at run time trace info. Another bit of good news is that the new trace library is backward compatible with SQL 2000.

Here's a very brief code demo in VB.Net - you'll need VS2005 to run this. Note that this code is vastly simplified for simple demonstration.

Imports Microsoft.SqlServer.Management.Trace
.
.
.
Dim ts As TraceServer = New TraceServer
Dim c As SqlConnectionInfo = New SqlConnectionInfo("localhost")

c.UseIntegratedSecurity = True
ts.InitializeAsReader(c, "c:\Standard.tdf")

Do While ts.Read() = True
'read trace info off the TraceServer's
'GetName() & GetValue() methods
Loop

ts.Close()
ts.Dispose()
16/09/2004
It's amazing how things can turn around so quickly for the best. Last month we cancelled the Melbourne meeting after our invite / mail out process didn't happen & we had next to nobody register. This month we've run the invites ourselves for the first time (MS used to do it). I asked for some help from our friends over at the Melbourne .Net User Group (thanks Bill Chesnut!) to cross-promote and tonight I found out that next week's meeting is full. I think this means we've got 135 coming. We better up the pizza order.

Greg Low's heading down to Melbourne next Wednesday to demo how this CLR thing we've been hearing about for so long actually works. I for one am going to be there to learn as I've not used the CLR yet for anything other than cleaning carpet so far. Should be a great night.
10/06/2004
I've been doing a Biztalk 2004 training course in Melbourne with Biztalk MVP Bill Chesnut this week. Very interesting learning how it works - it heavily uses SQL Server as a message store & rules repository. I like the way that business rules processing is diagrammed in a nice GUI that anyone (even BAs) could use to participate in the development process & get the benefit of Biztalk's self-documenting features.

Someone on the course pointed out that SQLXML is based on older .xdr technology & so won't integrate easily with Biztalk 2004. Got to look into that..
7/06/2004
Finally, after months of mucking around, the Australian SQL Server User Group's website is up & running. This is my first blog entry, mainly to test that this thing actually works before inflicting its use upon other poor souls!