Category Archives: ADO.NET 2

VS2005 November: UpdateBatch

UpdateBatch has been dramatically changed with the current bits (41129) of VS2005. Here are the details on that.

Now that we can exceed the 2100 parameter limit, I have been doing some experiments with a 10,000 record query from a 104,000 record table, doing batch updates in big chunks. It will be important to tune your batch updates based on the system: location of the SQL Server relative to the application, latency, etc. Here is what I hit today when working against a hardwired remote SQL Server 2000.

Interesting….

Modifying the new CommandTimeout property (from the default of 10 to 60) on the DataAdapter’s SelectCommand fixed the timeout problem. Then I had to change it again for a batch of 10,000: I just popped it up to 120 and that did the trick.

What’s going on, of course, is that ADO.NET is building some big goo up front. Prior to this version, it built one big fat query to accommodate the entire batch, which is where it quickly (very quickly) hit the 2100 parameter limit. Now, watching Profiler, the updates are getting run one update (one row) at a time. I wish I could watch what happens in between, but, really, that’s okay! Can’t geek out too much.
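Here is a minimal sketch of the batch-update tuning described above. The table name, connection string, and exact numbers are my own illustrative assumptions, and this is beta-era API, so details may shift:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

// Sketch only: dbo.BigTable and the sizes here are assumptions.
static void TunedBatchUpdate(string connectionString)
{
    SqlDataAdapter da = new SqlDataAdapter(
        "SELECT id, name FROM dbo.BigTable", connectionString);
    SqlCommandBuilder cb = new SqlCommandBuilder(da);

    // Push updates to the server in chunks rather than one row at a time.
    da.UpdateBatchSize = 100;

    // Big batches can outlive the default command timeout, so raise it.
    da.SelectCommand.CommandTimeout = 120;

    DataTable table = new DataTable();
    da.Fill(table);
    // ... modify some rows here ...
    da.Update(table);
}
```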

This is the kind of fun you can have on a beautiful winter day with a broken foot. 🙂



Posted from BLInk!

GetDataReader already obsolete?

(Aha! It’s just getting a new name: CreateDataReader. Thanks for the info, Jackie and Kawarjit (from the ADO.NET team)!)

whoa. I just noticed this in the online docs that come up through the msdn library with vs2005 (which points to this link)

The same holds true (as per the above set of docs) for DataTable.GetDataReader.

However, if you browse to a separate source of docs at http://msdn2.microsoft.com/library/0xbt1065.aspx there is no indication of it being obsolete.

It works just fine in the October CTP bits I am using. I’m confused.
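Either way, usage is trivial. A tiny sketch using the newer name (the DataTable passed in is assumed to already have rows):

```csharp
using System;
using System.Data;

// GetDataReader in the October CTP bits; CreateDataReader going forward.
static void DumpFirstColumn(DataTable table)
{
    DataTableReader rdr = table.CreateDataReader();
    while (rdr.Read())
        Console.WriteLine(rdr[0]);
    rdr.Close();
}
```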




It’s WORKING! SqlDependency!! Here’s how I did it.

read this and then this post to understand the problem.

Well … I started out trying this against pubs and it didn’t work, and then I tried it with AdventureWorks and it didn’t work. I focused on pubs and made a lot of changes without it working. Then I finally went back to AdventureWorks and it did work!! So I’m not sure exactly which thing I changed previously, but to get pubs to work with query notification, I needed to set the database’s Properties/Options/Miscellaneous: “Database Compatibility Level” to Version90. It had been at Version80.

Note that all of the other Miscellaneous options for pubs were set to false and I did not have to change them for it to work.
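If you’d rather do this in TSQL than through the properties dialog, the 2005-era stored procedure should do the same thing (hedging: I set it through the dialog, not this way):

```sql
-- Compatibility level 90 corresponds to "Version90" in the dialog.
EXEC sp_dbcmptlevel 'pubs', 90
```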

Wally McClure and I talked about some of the issues around when to make particular calls. I found that the samples in the docs and Bob Beauchemin’s article were fine: creating the connection, creating the command, opening the connection, creating the dependency, creating an event handler, etc. all worked as shown.

Here are some things that I was unsure about.

All of the samples I saw showed doing a read on the SqlDataReader that was returned by the command that I attached the dependency to. So I started out with this code (which works):

            SqlDataReader rdr=cmd.ExecuteReader();
            while (rdr.Read())
                Console.WriteLine(rdr[0]);
            rdr.Close();

Once I had that working, I did what I really wanted to do which was return the SqlDataReader to my form (in real life, maybe a middle tier) which then loaded it into a DataTable and did something with the DataTable. So I changed the code to:

           SqlDataReader rdr=cmd.ExecuteReader();
           return rdr;

and that worked just fine. So according to this test it is NOT necessary to do a read on the SqlDataReader for the notification to fire.
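Putting those pieces together, here is a minimal sketch of the pattern from the docs and Bob’s article. The query, table, and handler body are my own assumptions, and the event/argument names may shift between builds:

```csharp
using System;
using System.Data.SqlClient;

static SqlDataReader GetReaderWithNotification(SqlConnection conn)
{
    // The query must follow the notification rules:
    // explicit columns and a two-part table name.
    SqlCommand cmd = new SqlCommand(
        "SELECT ContactID, LastName FROM Person.Contact", conn);

    // Attach the dependency BEFORE executing the command.
    SqlDependency dep = new SqlDependency(cmd);
    dep.OnChange += delegate(object sender, SqlNotificationEventArgs e)
    {
        Console.WriteLine("Notification fired: " + e.Info);
    };

    conn.Open();
    // Per the test above, the notification registers even if the
    // caller, not this method, ends up doing the reading.
    return cmd.ExecuteReader();
}
```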

The placement of the connection’s Open call did not seem to affect the notification query, just so long as it comes before you execute the command. I tried it before creating the dependency and I tried it after the dependency and event handlers were already created. This makes perfect sense to me, but it was something I was worrying about when things weren’t working, so it eases my mind to know that my instincts are okay.

So there were NOT a gazillion other things I had to do in SQL Server besides the frequently advertised ones.

1. Be sure the CLR is enabled in SQL Server
The TSQL for that is:

EXEC sp_configure 'show advanced options', 1
go
RECONFIGURE
go
EXEC sp_configure 'clr enabled', 1
go
RECONFIGURE

2. Make sure that Enable_Broker is ON in your database
Wally explains how here

3. Make sure that SQLNotification will send messages with this TSQL statement
GRANT SEND ON SERVICE::SqlQueryNotificationService TO guest
a) note that the service name SqlQueryNotificationService is case sensitive
b) you have to have admin privileges in order to successfully run this command

4. The database user needs permission to subscribe to notification
GRANT SUBSCRIBE QUERY NOTIFICATIONS TO [the user]
 – that is, grant it to whatever user the calling application connects as. I haven’t tried ASP.NET yet, but I’m guessing that will be the Network Service account if IIS6 is your web server.

5. Follow the query rules.
You have to write a query that can be parsed and that needs to follow some very specific rules. The three most notable are
a) The projected columns in the SELECT statement must be explicitly stated (i.e. select id, name, lastname from mytable, not select * from mytable)
b) Table names must be qualified with two-part names (e.g. dbo.authors or person.contact)
c) As a consequence of (b), all tables referenced in the statement must be in the same database.
I posted a full list here
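For convenience, here are steps 1 through 4 collected into one TSQL script. This is a sketch: run it with admin privileges, substitute your own database and user, and note that the ALTER DATABASE line is the TSQL equivalent of the Enable_Broker setting Wally describes:

```sql
-- 1. Enable the CLR.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'clr enabled', 1
RECONFIGURE

-- 2. Turn on the broker in your database (pubs is just an example).
ALTER DATABASE pubs SET ENABLE_BROKER

-- 3. Let the notification service send messages (case sensitive!).
GRANT SEND ON SERVICE::SqlQueryNotificationService TO guest

-- 4. Let the calling user subscribe ([theUser] is a placeholder).
GRANT SUBSCRIBE QUERY NOTIFICATIONS TO [theUser]
```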

Even though I fiddled for 2 days with this, I think the above represents the right combination of things to attend to.

The last caveat is that this is still a beta. From reading Bob’s article, we can rest assured that MS is going to simplify this stuff. As for limiting your queries to “hello world” applications (I’m exaggerating… but see Roman’s comment), maybe they’ll make it smarter, or maybe the more full-blown notification services will be the way to go. I don’t really know – that’s just a completely wild guess.


Query Notification SELECT QUERY rules

There are SO many caveats to using query notification with SQL Server. Things like the fact that it is not based on transactions (e.g. you will get a notification even if something is rolled back). You have to do a LOT of trolling around in the Books Online to find all of this stuff and frankly, I still do NOT have my notifications firing yet.

So here is a list, from the Books Online for the October CTP of SQL Server 2005, of rules to follow when you are building a query with which you want to subscribe to the service.

Query notifications are supported for SELECT statements that meet the following requirements:

  • The projected columns in the SELECT statement must be explicitly stated, and table names must be qualified with two-part names. Notice that this means that all tables referenced in the statement must be in the same database.
  • The statement may not use the asterisk (*) or table_name.* syntax to specify columns.
  • The statement may not use unnamed columns or duplicate column names.
  • The statement must reference a base table.
  • The projected columns in the SELECT statement may not contain aggregate expressions unless the statement uses a GROUP BY expression. When a GROUP BY expression is provided, the select list may contain the aggregate functions COUNT_BIG() or SUM(). However, SUM() may not be specified for a nullable column. The statement may not specify HAVING, CUBE, or ROLLUP.
  • A projected column in the SELECT statement that is used as a simple expression must not appear more than once.
  • The statement must not include PIVOT or UNPIVOT operators.
  • The statement must not include the INTERSECT or EXCEPT operators.
  • The statement must not reference a view.
  • The statement must not contain any of the following: DISTINCT, COMPUTE or COMPUTE BY, or INTO.
  • The statement must not reference server global variables (@@variable_name).
  • The statement must not reference derived tables, temporary tables, or table variables.
  • The statement must not reference tables or views from other databases or servers.
  • The statement must not contain subqueries, outer joins, or self-joins.
  • The statement must not reference the large object types: text, ntext, and image.
  • The statement must not use the CONTAINS or FREETEXT full-text predicates.
  • The statement must not use rowset functions, including OPENROWSET and OPENQUERY.
  • The statement must not use any of the following aggregate functions: AVG, COUNT(*), MAX, MIN, STDEV, STDEVP, VAR, or VARP.
  • The statement must not use any nondeterministic functions, including ranking and windowing functions.
  • The statement must not contain user-defined aggregates.
  • The statement must not reference system tables or views, including catalog views and dynamic management views.
  • The statement must not include FOR BROWSE information.
  • The statement must not reference a queue.
  • The statement must not contain conditional statements that cannot change and cannot return results (for example, WHERE 1=0).
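To make the most common rules concrete, here is a query shape that qualifies next to one that doesn’t (the table and column names are illustrative):

```sql
-- OK: explicit column list, two-part base-table name.
SELECT ContactID, FirstName, LastName
FROM Person.Contact
WHERE LastName = 'Smith'

-- NOT OK: SELECT * and a one-part table name both disqualify the query.
SELECT * FROM Contact
```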

That’s just the query rules. There are also rules for the connection and a lot of setup to do on the database. I will eventually succeed at this! I swear I will. Of course, by then, I will probably have been fired from all of the other contract work I am supposed to be doing. (Not really…)


SQLDependency and Notification Services – I give UP! (not really)

UNCLE!

I have the right code from the documentation, from Bob B’s awesome article, from the whidbey.adonet newsgroup, and from these posts by Wally McClure (rules, enabling CLR in SQL) – and he was kind enough to spend a good chunk of time with me on IM, too. But I cannot freaking get a notification from my SQL Server (Oct CTP bits). I have set up everything I know of in SQL, even tried this with my firewall temporarily turned off. Apparently besides the code, besides doing all of these things:

  • GRANT SEND ON SERVICE::SqlQueryNotificationService TO guest in msdb
  • GRANT SUBSCRIBE QUERY NOTIFICATIONS TO [the user] in the database
  • ensuring that Enable_Broker is true in the database
  • getting your calls in your code in the correct order
  • making a proper query statement

there seem to be some 400+ other rules to follow. I have been changing the order of events in my code, reading through the online books, investigating the SQL logs, learning more about SQL Server 2005 than I really care to/want to/have time for right now, and on it goes…

Okay it’s a beta. But really. Eight hours. I freakin’ give up. But of course, that’s a lie. I will post this and then think of 24 other things to try and just keep at it until …..who knows…?




ADO.NET 2.0 Hypothesizing vs. testing

One of the new features in ADO.NET 2.0 is the DataTableReader. I read about this and invented all kinds of great uses for it in my head. The DataTableReader is a firehose from a DataTable or DataSet, much like a SQLDataReader is from a SQL Database.

I dreamt of the ability to disconnect the DataTableReader from its source and move it around, thus having a really lightweight version of the data, much in the same way many of us will stream a SqlDataReader (or other) into an array or some other object to move it around without any extra overhead. But I was so wrong – really misunderstanding this class because I hadn’t CODED anything with it, just read the documentation and hypothesized.

Actually DataTableReader is pretty cool. Not only do you have a new great way to iterate simply through a datatable or dataset, but it will really stream the data live.

Example – you create a SqlDataReader and halfway through reading it, someone deletes a record in the database that is ahead of where you are in the stream. The SqlDataReader won’t reflect this change. And if someone tries to delete a record that is before that point, they can’t (I have only tested this by literally opening up the database).

With the DataTableReader, if you have some asynchronous work being done, someone can possibly add or delete rows and the DataTableReader handles this really nicely. If you add or delete a row that is before the iterator’s row pointer, the row pointer will not be affected (I tested this) and if you add or delete a row after the row pointer, that change will be reflected as you continue to iterate through the rows. Suh-weet!

I also checked to see what would happen if you created a DataTableReader from a DataSet and then mid-stream, added a new DataTable to the DataSet. This does NOT get picked up by the DataTableReader.
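A quick sketch of the live-streaming test described above (the table and values are mine, and I’m using the CreateDataReader name the method is being renamed to):

```csharp
using System;
using System.Data;

static void LiveStreamDemo()
{
    DataTable t = new DataTable("Numbers");
    t.Columns.Add("n", typeof(int));
    for (int i = 1; i <= 5; i++)
        t.Rows.Add(i);

    DataTableReader rdr = t.CreateDataReader();
    rdr.Read();              // iterator positioned on the first row
    t.Rows[4].Delete();      // delete a row AHEAD of the row pointer

    while (rdr.Read())       // the deleted row never shows up
        Console.WriteLine(rdr[0]);
}
```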


Bulk Copy in ADO.NET 2.0 – pop quiz

BulkCopy sounds great on paper, but have you actually tried it?

I did. I took a 104,000 record table from SQL2000 on one server and used BulkCopy to import those records into a pre-existing table in SQL2005 on another server.

But first I tried the same operation with DTS as a benchmark, and that took 21 seconds. (Note that I’m doing this on an older computer that is 700 MHz with, I think, 512 MB RAM. The remote server is an old dog, too.)

Then I wanted to see what it was like in code without bulk copy. I thought I would at least leverage some of the new features in ADO.NET 2.0 for this. I grabbed the same 104,000 records into a DataReader and used DataTable.Load (new feature in ADO.NET 2.0) to pull it into a DataTable. Then rather than let a DataAdapter.Update insert these one at a time (puh-lease!), I leveraged the new batch capabilities. SQL Server did not like 1,000 records at a time (too many parameters in one execute command), so I set UpdateBatchSize to 100. I’m also using the beautifully simple little System.Diagnostics.Stopwatch class in this test. It ran a while; when it finished I came back and inserted the time here: 41 minutes. If the lights start dimming in Burlington, you’ll know why!

Before this test, though, I did the same operation using BulkCopy. Again, I pulled the 104,000 records from the remote server (on the same hardwired network) into a DataReader and then passed that DataReader into a SqlBulkCopy object and called its WriteToServer method.
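The bulk-copy path, sketched out (connection strings and table names are placeholders of my own):

```csharp
using System.Data.SqlClient;

// Sketch: stream rows from one server straight into another.
static void BulkCopyDemo(string sourceConnString, string destConnString)
{
    using (SqlConnection src = new SqlConnection(sourceConnString))
    using (SqlConnection dest = new SqlConnection(destConnString))
    {
        src.Open();
        dest.Open();

        SqlCommand cmd = new SqlCommand("SELECT * FROM dbo.BigTable", src);
        SqlDataReader rdr = cmd.ExecuteReader();

        SqlBulkCopy bcp = new SqlBulkCopy(dest);
        bcp.DestinationTableName = "dbo.BigTable";
        bcp.WriteToServer(rdr);   // stream the reader straight in
        rdr.Close();
    }
}
```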

So remember the DTS took 21 seconds. How long did the operation take in ADO.NET 2.0?

Let’s have a few guesses and then I’ll tell you. Heh heh heh.

OKAY – it was 23 seconds! Basically the same as the DTS, and roughly 100 times faster than the 41-minute batch update. And remember, even that batch update would have been impossible in ADO.NET 1.1, where there is no batch updating at all – one row at a time would have been far worse still.



MARS and Transactions – the bigger picture

My eyes are getting bigger and bigger as I’m digging into transactions in ADO.NET 2.0, or I should really just say in .NET 2.0.

First, I like that it solves a problem I currently have with a .NET 1.1 application.

I have to insert data into two tables. The first gets one new record; the second gets many that are related to the first. If the first insert works, I go on to the second. If one of the inserts into the second table fails, I have to roll back all that were done before it, and then I have to delete the new record out of the first table. There may be a better way in 1.x, but that’s the best I could figure out.

Now with MARS, I can use one connection and therefore one transaction on all of these executions. A failure in the 2nd set will also rollback the first. That is super sweet.

In fact, Angel Saenz-Badillos recommends this as one of the two reasons *he* would use MARS.

Now I find this great 15 Seconds article by Bill Ryan, who has also found a wonderful muse in the aptly named Angel. Though I was experimenting with transactions the same old way (using the SqlTransaction class), Bill shows a much simpler implementation using the new System.Transactions.TransactionScope class. (The namespace is new and I cannot believe I don’t have this (yet) in my Whidbey BCL presentation!!!) Oh boy! First of all, if you can remember the first time you tried to implement a transaction in ADO.NET, it was really confusing (and remains so). This implementation, wrapping the whole thing inside of a transaction scope (very new stuff for me), is really simple!

Think about what’s happening here: I’m using System.Transactions.TransactionScope – nothing to do with SQL, but it’s rolling back or committing my SQL transactions! I have coded this up with two commands on one connection. Here’s some VB:

    Public Sub MARSUpdateTestTrans()
        Dim sqlbuilder As New SqlConnectionStringBuilder
        With sqlbuilder
            .ConnectionString = GetSqlString()
            .MultipleActiveResultSets = True
        End With
        Using ts As New System.Transactions.TransactionScope
            Dim IsConsistent As Boolean = False
            Dim conn As New SqlConnection(sqlbuilder.ToString)
            Dim cmd1 As New SqlCommand("update employee set fname='Julie' where emp_id='KJJ92907F'", conn)
            Dim cmd2 As New SqlCommand("update employee set fname='Annabelle' where emp_id='AMD15433F'", conn)
            conn.Open()
            Try
                cmd1.ExecuteNonQuery() 'oops, this had started out as a query... forgot to change the method
                cmd2.ExecuteNonQuery()
                Throw New Exception 'force a failure so I can watch both updates roll back
                IsConsistent = True
            Catch ex As Exception
                'whatever else you want to do
            Finally
                ts.Consistent = IsConsistent
                conn.Close()
            End Try
        End Using
    End Sub

If that’s not enough, Bill explains that you can wrap calls to totally different databases inside this one transaction and it handles all of the goo for you.

As if that isn’t enough, Bill goes on (with some encouragement from Angel) to explain something that is beyond the scope of work I have done, but this is what thrills him even more. Not only can TransactionScope handle the different databases, it also knows how to deal with local and distributed transactions across SQL Server, Oracle, and MSMQ.

Do not ignore this class. No, it won’t solve world hunger. But maybe you can go do some community service work with all of your free time it just gave you!

So remember, the SqlTransaction class is still there, and there may be lots of cases where it’s the better choice (one case is when you need more control over how the commits and rollbacks work). There is still more to explore!


