Friday, December 12, 2008
The blog entry How do you decide when to pair program? by Gojko Adzic is a great read. I agree with the group's final decision. Please go check it out.
Thursday, December 11, 2008
Help me make this Regular Expression better...
I recently wanted to update a SQL script file that populates seed (test) data into a bunch of database tables. I had scripted out the data in the tables using a tool, and all of the datetime columns were being populated with string values similar to the one below.
'20081208 13:54:31.236'
I wanted to update all of these to midnight of the current date, so that the data would be current every time the seed data is loaded. To do this, I wrote the following T-SQL to put the current date into that format in a variable:
DECLARE @CurrentDateString varchar(30)
SET @CurrentDateString = REPLACE(CONVERT(varchar(10), GETDATE(), 102),'.','') + ' 00:00:00'
PRINT 'Current Date: ' + @CurrentDateString
Now I wanted to replace all of the existing datetime string values with my new variable. I had about 75 strings like this in the seed data script file. I fired up Notepad++ and started to work on a regular expression to find and replace all of these entries. Since writing regular expressions is not a common thing for me, after a little googling and some hacking I came up with the following:
('20081208)\s[0-9][0-9]:[0-9][0-9]:[0-9][0-9].[0-9][0-9][0-9]'
This is probably not the most efficient expression that I could have written, but it got the job done. Anyone have any suggestions for making this better? Thanks!
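One possible improvement (a sketch, not necessarily the best answer) is to generalize the pattern so it matches any timestamp in that format rather than a hard-coded date, and to escape the literal dot. Depending on the Notepad++ version, its find/replace engine may not support the shorter \d and {n} forms, which is one reason to script the replacement instead. The hypothetical C# snippet below shows the idea with .NET's regex engine; the file name is made up.

using System.IO;
using System.Text.RegularExpressions;

class SeedDateFixer
{
    static void Main()
    {
        // Hypothetical seed data script file name, for illustration only.
        string path = "SeedData.sql";
        string script = File.ReadAllText(path);

        // Match any quoted 'yyyyMMdd hh:mm:ss.fff' literal, not just '20081208 ...',
        // and escape the dot so it does not match arbitrary characters.
        string pattern = @"'\d{8}\s\d{2}:\d{2}:\d{2}\.\d{3}'";

        // Replace each literal with a reference to the @CurrentDateString variable.
        File.WriteAllText(path, Regex.Replace(script, pattern, "@CurrentDateString"));
    }
}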
Is your ASP.NET AJAX Web Site Slow?
If your ASP.NET AJAX web site is slow, it may be because you are using UpdatePanels. Please read the article Have you ever wondered why CodePlex is so slow? by Dave Ward. It points out the inefficiencies of using ASP.NET AJAX Update Panels and suggests a better approach with an example. A very good read for any ASP.NET developer who is concerned with performance.
Tuesday, November 18, 2008
Agile Reading List
Last week I attended an Agile Mini-Conference (hosted by my company) featuring Martin Fowler and Neal Ford. During the course of the day, there were many books and papers mentioned by both of these Agile luminaries. Below is the list that I compiled.
Books
Smalltalk Best Practice Patterns
Kent Beck, ISBN 013476904X
The Pragmatic Programmer: From Journeyman to Master
Andy Hunt & Dave Thomas, ISBN 020161622X
The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition
Frederick P. Brooks, ISBN 0201835959
The Productive Programmer
Neal Ford, ISBN 0596519788
Pragmatic Thinking and Learning
Andy Hunt, ISBN 1934356050
Essays
No Silver Bullet: Essence and Accidents of Software Engineering
Frederick P. Brooks, Jr.
Code as Design: Three Essays by Jack W. Reeves
Jack W. Reeves
Thursday, September 18, 2008
Developer Basics: Good Source Control Practice
Jeff Atwood has a great article, Check In Early, Check In Often, that covers what he considers the golden rule of using source control. After working on a large project with 5+ developers, I would have to agree with his views concerning the "giant dollop of code" and that when you check in smaller bits of functionality, each check-in makes that code "infinitesimally more functional".
Thursday, August 28, 2008
Using CTRL + F5 in your browser
I recently found the CTRL+F5 shortcut key that you can use in your browser to refresh a page that you are viewing. Below is an explanation of the difference between that and using F5, from the Microsoft Knowledge Base article: http://support.microsoft.com/kb/306832.
F5 or CTRL+R — Refresh the current Web page
CTRL+F5 — Refresh the current Web page, even if the time stamp for the Web version and your locally stored version are the same
Basically, CTRL+F5 will force the page and your cache for that page to be reloaded from the web server, while F5 will just refresh the page and display the cached version if one exists.
CTRL+F5 behaves this way in all Internet Explorer versions as well as Firefox; I am not sure about other browsers.
Tuesday, August 26, 2008
IE7 & Fiddler
This is probably old news for most folks, but since I recently upgraded my browser to IE7 in support of a project that I am working on, this was new to me. Fiddler will not pick up HTTP requests from http://localhost if you are using IE7. The Configuring Clients section on the Fiddler site explains why and how to work around the issue.
I also found this post showing that you can use http://localhost./ (note the extra dot at the end).
Wednesday, June 4, 2008
SQL Server 2005 & Ordered Views Gotcha
I recently ran into an issue where we were relying on an "undocumented feature" in SQL Server 2000 that allowed us to sort a view, so that selecting all items from the view returned them in the order defined by the view. This does not work in SQL Server 2005, because the undocumented behavior has been corrected. Let's say that we have the following table and data.
tblSortedUsers
Id | UserName | Sortorder |
1 | Bob | 4 |
2 | Tim | 3 |
3 | Sally | 2 |
4 | John | 1 |
Now we create the view vwSortedUsers as follows:
CREATE VIEW vwSortedUsers AS
SELECT TOP 100 PERCENT
Id, UserName
FROM tblSortedUsers
ORDER BY SortOrder
Now when I execute the following query:
SELECT * FROM vwSortedUsers
I will get the following results:
SQL Server 2000
Id | UserName |
4 | John |
3 | Sally |
2 | Tim |
1 | Bob |
SQL Server 2005
Id | UserName |
1 | Bob |
2 | Tim |
3 | Sally |
4 | John |
The reasoning for this is best explained by the excerpt below from Microsoft KB Article 926292:
SYMPTOMS
You have a view in a database in SQL Server 2005. In the definition of the view, the SELECT statement meets the following requirements:
• The SELECT statement uses the TOP (100) PERCENT expression.
• The SELECT statement uses the ORDER BY clause.
When you query through the view, the result is returned in random order.
However, this behavior is different in Microsoft SQL Server 2000. In SQL Server 2000, the result is returned in the order that is specified in the ORDER BY clause.
This discussion talks about this being an undocumented feature that was corrected in SQL Server 2005 and also references the above MS KB article. While the KB article mentions a hotfix that can be applied to SQL Server 2005, the best (and recommended) solution is to put the ORDER BY clause in your query rather than in your view. Note that for the query below to compile, the view also needs to expose the SortOrder column in its SELECT list. The query then becomes:
SELECT * FROM vwSortedUsers ORDER BY SortOrder
So if you are in the process of upgrading your database from SQL Server 2000 to SQL Server 2005, please be aware of this issue.
Tuesday, April 1, 2008
Dependency Injection/Inversion Overview
James Kovacs' article, Tame Your Software Dependencies for More Flexible Apps, in the March 2008 issue of MSDN Magazine is a great read. If you have heard of either Dependency Injection (DI) or Inversion of Control (IoC) and wanted to know more about what they are and how they work, then I would recommend that you check out the article, as James gives a very good explanation of both.
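As a taste of what the article covers, here is a minimal, hypothetical sketch of constructor injection (all of the type names below are made up for illustration): the consuming class depends on an abstraction, and the caller supplies the concrete implementation, which also makes the class easy to test with a fake.

public interface IOrderRepository
{
    void Save(string orderId);
}

public class SqlOrderRepository : IOrderRepository
{
    public void Save(string orderId)
    {
        // Persist the order to the database here.
    }
}

public class OrderService
{
    private readonly IOrderRepository repository;

    // The dependency is injected rather than created internally,
    // so a test can pass in a fake repository.
    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }

    public void PlaceOrder(string orderId)
    {
        repository.Save(orderId);
    }
}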
Wednesday, March 26, 2008
Drop All Objects in a SQL Server Database
Here is a script that I created that will drop all the objects in a SQL Server database. It was created using a lot of other scripts out on the Internet as inspiration. I like this one because it does the following:
- Does not use cursors (which a lot of the ones I saw did)
- Will drop the foreign key and primary key constraints on the tables prior to dropping the tables.
/* Drop all non-system stored procs */
DECLARE @name VARCHAR(128)
DECLARE @SQL VARCHAR(254)
SELECT @name = (SELECT TOP 1 [name] FROM sysobjects WHERE [type] = 'P' AND category = 0 ORDER BY [name])
WHILE @name is not null
BEGIN
SELECT @SQL = 'DROP PROCEDURE [dbo].[' + RTRIM(@name) +']'
EXEC (@SQL)
PRINT 'Dropped Procedure: ' + @name
SELECT @name = (SELECT TOP 1 [name] FROM sysobjects WHERE [type] = 'P' AND category = 0 AND [name] > @name ORDER BY [name])
END
GO
/* Drop all views */
DECLARE @name VARCHAR(128)
DECLARE @SQL VARCHAR(254)
SELECT @name = (SELECT TOP 1 [name] FROM sysobjects WHERE [type] = 'V' AND category = 0 ORDER BY [name])
WHILE @name IS NOT NULL
BEGIN
SELECT @SQL = 'DROP VIEW [dbo].[' + RTRIM(@name) +']'
EXEC (@SQL)
PRINT 'Dropped View: ' + @name
SELECT @name = (SELECT TOP 1 [name] FROM sysobjects WHERE [type] = 'V' AND category = 0 AND [name] > @name ORDER BY [name])
END
GO
/* Drop all functions */
DECLARE @name VARCHAR(128)
DECLARE @SQL VARCHAR(254)
SELECT @name = (SELECT TOP 1 [name] FROM sysobjects WHERE [type] IN (N'FN', N'IF', N'TF', N'FS', N'FT') AND category = 0 ORDER BY [name])
WHILE @name IS NOT NULL
BEGIN
SELECT @SQL = 'DROP FUNCTION [dbo].[' + RTRIM(@name) +']'
EXEC (@SQL)
PRINT 'Dropped Function: ' + @name
SELECT @name = (SELECT TOP 1 [name] FROM sysobjects WHERE [type] IN (N'FN', N'IF', N'TF', N'FS', N'FT') AND category = 0 AND [name] > @name ORDER BY [name])
END
GO
/* Drop all Foreign Key constraints */
DECLARE @name VARCHAR(128)
DECLARE @constraint VARCHAR(254)
DECLARE @SQL VARCHAR(254)
SELECT @name = (SELECT TOP 1 TABLE_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE constraint_catalog=DB_NAME() AND CONSTRAINT_TYPE = 'FOREIGN KEY' ORDER BY TABLE_NAME)
WHILE @name is not null
BEGIN
SELECT @constraint = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE constraint_catalog=DB_NAME() AND CONSTRAINT_TYPE = 'FOREIGN KEY' AND TABLE_NAME = @name ORDER BY CONSTRAINT_NAME)
WHILE @constraint IS NOT NULL
BEGIN
SELECT @SQL = 'ALTER TABLE [dbo].[' + RTRIM(@name) +'] DROP CONSTRAINT ' + RTRIM(@constraint)
EXEC (@SQL)
PRINT 'Dropped FK Constraint: ' + @constraint + ' on ' + @name
SELECT @constraint = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE constraint_catalog=DB_NAME() AND CONSTRAINT_TYPE = 'FOREIGN KEY' AND CONSTRAINT_NAME <> @constraint AND TABLE_NAME = @name ORDER BY CONSTRAINT_NAME)
END
SELECT @name = (SELECT TOP 1 TABLE_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE constraint_catalog=DB_NAME() AND CONSTRAINT_TYPE = 'FOREIGN KEY' ORDER BY TABLE_NAME)
END
GO
/* Drop all Primary Key constraints */
DECLARE @name VARCHAR(128)
DECLARE @constraint VARCHAR(254)
DECLARE @SQL VARCHAR(254)
SELECT @name = (SELECT TOP 1 TABLE_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE constraint_catalog=DB_NAME() AND CONSTRAINT_TYPE = 'PRIMARY KEY' ORDER BY TABLE_NAME)
WHILE @name IS NOT NULL
BEGIN
SELECT @constraint = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE constraint_catalog=DB_NAME() AND CONSTRAINT_TYPE = 'PRIMARY KEY' AND TABLE_NAME = @name ORDER BY CONSTRAINT_NAME)
WHILE @constraint is not null
BEGIN
SELECT @SQL = 'ALTER TABLE [dbo].[' + RTRIM(@name) +'] DROP CONSTRAINT ' + RTRIM(@constraint)
EXEC (@SQL)
PRINT 'Dropped PK Constraint: ' + @constraint + ' on ' + @name
SELECT @constraint = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE constraint_catalog=DB_NAME() AND CONSTRAINT_TYPE = 'PRIMARY KEY' AND CONSTRAINT_NAME <> @constraint AND TABLE_NAME = @name ORDER BY CONSTRAINT_NAME)
END
SELECT @name = (SELECT TOP 1 TABLE_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE constraint_catalog=DB_NAME() AND CONSTRAINT_TYPE = 'PRIMARY KEY' ORDER BY TABLE_NAME)
END
GO
/* Drop all tables */
DECLARE @name VARCHAR(128)
DECLARE @SQL VARCHAR(254)
SELECT @name = (SELECT TOP 1 [name] FROM sysobjects WHERE [type] = 'U' AND category = 0 ORDER BY [name])
WHILE @name IS NOT NULL
BEGIN
SELECT @SQL = 'DROP TABLE [dbo].[' + RTRIM(@name) +']'
EXEC (@SQL)
PRINT 'Dropped Table: ' + @name
SELECT @name = (SELECT TOP 1 [name] FROM sysobjects WHERE [type] = 'U' AND category = 0 AND [name] > @name ORDER BY [name])
END
GO
SQL Server: Restore Your Master
I have been playing around with scripting the drop of all objects within a database, and today I accidentally ran it against the master database on my local SQL Express 2005 instance (cough, cough). I have had better days. :-( I originally thought I was going to have to reinstall SQL Server Express to restore the database, but after a little digging on Google I found this article showing that you can do it with just the following command line:
C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Install>sqlcmd -E -S .\SQLEXPRESS -i instmsdb.sql
Whew, that saved me a lot of time. Hopefully, you won't make this same mistake, but if you do, this is a quick way to fix it.
Friday, March 21, 2008
Thou Shalt Not Mock Thyself
We recently had some unit tests in our application that were failing. We are using TypeMock.Net (now TypeMock Isolator) to mock out some of the objects in the tests. In tracking down the issue, I learned that you should not mock the class you are testing. To better explain this, please consider the following trivial class.
UPDATE: Please read the comment (and see the code) from Eli Lopian (founder of TypeMock) on how to mock the class without mocking the constructor. Thanks Eli!
using System.Configuration;
namespace SelfMocking
{
public class MyClass
{
public readonly string Text1;
public readonly bool IsValid;
public string SettingValue1()
{
return System.Configuration.ConfigurationManager.AppSettings["Value1"];
}
public MyClass(string text1)
{
Text1 = text1;
IsValid = !string.IsNullOrEmpty(Text1);
}
}
}
Then we will write the following test fixture and test to mock the call to the SettingValue1 method and return a mocked string value. To do that, I am going to mock the MyClass object and return the mocked value for all calls to MyClass.SettingValue1. See the code below:
using System.Collections.Specialized;
using NUnit.Framework;
using NUnit.Framework.SyntaxHelpers;
using TypeMock;
namespace SelfMocking
{
[TestFixture]
public class MyClassTest
{
[TestFixtureSetUp]
public void FixtureSetup()
{
MockManager.Init();
}
[TestFixtureTearDown]
public void FixtureTearDown()
{
MockManager.Verify();
}
[Test]
public void Test_MyClass()
{
MockMyClass();
MyClass testMyClass = new MyClass("test1");
Assert.That(testMyClass.SettingValue1(), Is.EqualTo("MockedValue1"));
Assert.That(testMyClass.Text1, Is.Not.Null);
Assert.That(testMyClass.IsValid, Is.True);
}
private void MockMyClass()
{
Mock mockMyClass = MockManager.Mock(typeof (MyClass));
mockMyClass.AlwaysReturn("SettingValue1", "MockedValue1");
}
}
}
When I attempt to execute the unit test Test_MyClass, it fails with the following message:
NUnit.Framework.AssertionException: Expected: not null But was: null
So it is correctly mocking the SettingValue1() call, but the field Text1 is not being set. If I debug the test and step through the code as it executes, I see that the constructor for MyClass is never run, even though my test is calling it. The reason is that the first thing I do is create a mock of MyClass, but I do not instruct the mock object to mock the constructor, so when I call the constructor it is simply ignored. Therefore, Text1 is never set and is in fact still null.
In order to properly mock this, we need to go a level deeper and actually mock the AppSettings call on the System.Configuration.ConfigurationManager class. Below is the revised code:
using System.Collections.Specialized;
using NUnit.Framework;
using NUnit.Framework.SyntaxHelpers;
using TypeMock;
namespace SelfMocking
{
[TestFixture]
public class MyClassTest
{
private NameValueCollection nameValueCollection = new NameValueCollection();
[TestFixtureSetUp]
public void FixtureSetup()
{
MockManager.Init();
nameValueCollection["Value1"] = "MockedValue1";
}
[TestFixtureTearDown]
public void FixtureTearDown()
{
MockManager.Verify();
}
[Test]
public void Test_MyClass()
{
MockAppSettings();
MyClass testMyClass = new MyClass("test1");
Assert.That(testMyClass.SettingValue1(), Is.EqualTo("MockedValue1"));
Assert.That(testMyClass.Text1, Is.Not.Null);
Assert.That(testMyClass.IsValid, Is.True);
}
private void MockAppSettings()
{
Mock mockConfigurationManager = MockManager.Mock(typeof (System.Configuration.ConfigurationManager));
mockConfigurationManager.ExpectGetAlways("AppSettings", nameValueCollection);
}
}
}
As you can see in this new version, I am mocking ConfigurationManager and have created a new NameValueCollection in which I have set the key Value1 = "MockedValue1". Now, when the class calls AppSettings, the mock will return my NameValueCollection instead.
So, keep in mind that if you are mocking objects, it is never a good idea to mock the class that you are actually trying to test, as it could have unexpected results.
Wednesday, March 19, 2008
Silverlight Cross-Domain Calls
I have been playing around with the latest Silverlight 2.0 Beta, specifically working through the excellent tutorial on Scott Guthrie's blog. (By the way, Matt Berseth has provided a working example of this tutorial available here.) After doing a few sections of this, I decided that I wanted to pull some of my own data instead of using the Digg service, so I wrote a small REST service using WCF with .NET 3.5. I was able to quickly get this up and running and then create a new Silverlight app to consume it. However, whenever I tried to connect to the REST service from my Silverlight app, I got a "Download Failure" error.
I did some digging into this and found that in order for a Silverlight (or Flash) app coming from one domain to consume data from services on another domain, there must be a policy mechanism on the service domain that grants access to the domain of the application. This policy must be available from the root of the domain. Unfortunately, http://localhost:8001/ and http://localhost:8002/ are considered different domains. So in order for my Silverlight app on port 8001 to talk to the REST service on port 8002, I needed to provide this policy access. I found this post by Carlos Figueria that covers this scenario and how to set up your service to host the policy files for Silverlight and Flash, both when your service is self-hosted and when it is hosted in a web site.
Working with DataTables
I recently had to write a method that would take a single row from a DataTable, expand that entry based on its start and end dates, and then determine which of the expanded rows intersected with a range of dates.
The process to accomplish this seemed fairly straightforward:
- Clone the original table.
- Populate the new table with the correct number of entries.
- Add the new rows back that fall within the range of dates.
Original Item:
ItemId | StartDate | EndDate | Time |
12345 | 3/5/2008 | 3/8/2008 | 8:00 AM |
Expanded Items:
ItemId | StartDate | EndDate | Time |
12345 | 3/5/2008 | 3/8/2008 | 8:00 AM |
12345 | 3/6/2008 | 3/8/2008 | 8:00 AM |
12345 | 3/7/2008 | 3/8/2008 | 8:00 AM |
12345 | 3/8/2008 | 3/8/2008 | 8:00 AM |
Expected Results for Range of 3/6/2008 to 3/8/2008
ItemId | StartDate | EndDate | Time |
12345 | 3/6/2008 | 3/8/2008 | 8:00 AM |
12345 | 3/7/2008 | 3/8/2008 | 8:00 AM |
12345 | 3/8/2008 | 3/8/2008 | 8:00 AM |
When I originally started creating this method, my thought was to create only the rows that would be needed. So I began writing a bunch of branching logic comparing the start date to the range start, the range end to the end date, and so on, trying to determine how many days needed to be created and which day to start with. It quickly became very complicated and hard to follow. So I went back to the drawing board and realized that, with the help of the DataTable and DataRow classes in .NET, I could simply expand the entire original item and then select the rows that intersected with my date range.
Code
Here is the version of the method that I ended up with; I basically just removed the business logic that was specific to our application.
public DataTable ExpandRows(DataTable original, DateTime rangeStart, DateTime rangeEnd)
{
DataTable expanded = original.Clone();
//Assumes only one row for demo purposes
DateTime itemStart = DateTime.Parse(original.Rows[0]["StartDate"].ToString());
DateTime itemEnd = DateTime.Parse(original.Rows[0]["EndDate"].ToString());
int numDays = itemEnd.Subtract(itemStart).Days;
for (int i = 0; i <= numDays; i++)
{
DataRow newRow = expanded.NewRow();
newRow.ItemArray = original.Rows[0].ItemArray;
newRow["StartDate"] = itemStart.AddDays(i);
expanded.Rows.Add(newRow);
}
//select the rows that fall within the specified range of dates.
DataRow[] addRows =
expanded.Select(
string.Format("StartDate >= '{0}' AND StartDate <= '{1}'", rangeStart.ToShortDateString(),
rangeEnd.ToShortDateString()));
//Remove the original row, since we do not know if it falls within the range.
original.Rows.Remove(original.Rows[0]);
//add expanded rows that fall within the range
int numRowsToAdd = addRows.Length;
for (int j = 0; j < numRowsToAdd; j++)
{
original.ImportRow(addRows[j]);
}
return original;
}
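To make the behavior concrete, here is a small, hypothetical usage sketch (assuming the method above lives in the same class and that the System and System.Data namespaces are in scope) that builds the sample item from the tables above and expands it over the 3/6/2008 to 3/8/2008 range:

DataTable items = new DataTable("Items");
items.Columns.Add("ItemId", typeof(int));
items.Columns.Add("StartDate", typeof(DateTime));
items.Columns.Add("EndDate", typeof(DateTime));
items.Columns.Add("Time", typeof(string));
items.Rows.Add(12345, new DateTime(2008, 3, 5), new DateTime(2008, 3, 8), "8:00 AM");

// Expands the single item into one row per day, then keeps only the
// rows whose StartDate falls between 3/6/2008 and 3/8/2008.
DataTable result = ExpandRows(items, new DateTime(2008, 3, 6), new DateTime(2008, 3, 8));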
Limiting Results
I was using this method on a search page and wanted to limit the results, because if an item has a large start/end date span as well as a large range start/end span, there could be hundreds or thousands of rows in the table. To limit the results I used a technique that I found in the ASP.NET Forums that shows how to add a unique id to the table and return only a specified number of rows. Below is the final code:
public DataTable ExpandRows(DataTable original, DateTime rangeStart, DateTime rangeEnd, int maxRowCount)
{
DataTable expanded = original.Clone();
//Assumes only one row for demo purposes
DateTime itemStart = DateTime.Parse(original.Rows[0]["StartDate"].ToString());
DateTime itemEnd = DateTime.Parse(original.Rows[0]["EndDate"].ToString());
int numDays = itemEnd.Subtract(itemStart).Days;
for (int i = 0; i <= numDays; i++)
{
DataRow newRow = expanded.NewRow();
newRow.ItemArray = original.Rows[0].ItemArray;
newRow["StartDate"] = itemStart.AddDays(i);
expanded.Rows.Add(newRow);
}
//select the rows that fall within the specified range of dates.
DataRow[] addRows =
expanded.Select(
string.Format("StartDate >= '{0}' AND StartDate <= '{1}'", rangeStart.ToShortDateString(),
rangeEnd.ToShortDateString()));
//Remove the original row, since we do not know if it falls within the range.
original.Rows.Remove(original.Rows[0]);
//add expanded rows that fall within the range
int numRowsToAdd = addRows.Length;
for (int j = 0; j < numRowsToAdd; j++)
{
original.ImportRow(addRows[j]);
}
if (maxRowCount > 0)
{
if (original.Rows.Count > maxRowCount)
{
DataTable autoIds = new DataTable();
DataColumn column = new DataColumn();
column.DataType = typeof(Int32);
column.AutoIncrement = true;
column.AutoIncrementSeed = 1;
column.AutoIncrementStep = 1;
column.ColumnName = "AutoRowId";
autoIds.Columns.Add(column);
autoIds.Merge(original, true, MissingSchemaAction.AddWithKey);
DataTable truncated = original.Clone();
DataRow[] limitedRows = autoIds.Select(string.Format("AutoRowId<={0}", maxRowCount));
for (int z = 0; z < maxRowCount; z++)
{
truncated.ImportRow(limitedRows[z]);
}
return truncated;
}
}
return original;
}
Lessons Learned
Here are some of the things that I learned along the way.
- Sometimes you need to step back and reassess the path you initially chose; it may not always be the best one. In this case I came up with something that ended up being much simpler and more efficient.
- The importance of unit tests. I have been experimenting with Test Driven Development (TDD) recently, so when I decided to switch my approach I was able to feel confident that the code was still working as expected, because I had immediate feedback as I made changes.
- You cannot directly insert a DataRow from one table into another. If you try to do this you will get the error: "This row already belongs to another table". Instead, use ImportRow to copy the row into the other DataTable.
- I was using a DataView to do some sorting on the DataTable, like the following:
DataView dataView = new DataView(original, string.Empty, "StartDate ASC", DataViewRowState.OriginalRows);
However, because I was removing the original rows and adding new ones, the view was not showing what I expected: as the name states, OriginalRows only looks at the original rows in the table, so the newly added rows never appeared. I needed to use DataViewRowState.CurrentRows (shown below) to display the rows as they currently exist in the table.
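For reference, the corrected call only changes the row-state flag:

DataView dataView = new DataView(original, string.Empty, "StartDate ASC", DataViewRowState.CurrentRows);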
Thursday, March 6, 2008
Silverlight 2 & ASP.NET MVC Preview 2
Today at Mix08, Microsoft officially announced and released the latest updates to Silverlight and the ASP.NET MVC Framework. You can download them via the following links:
Silverlight 2 - Includes the following:
- Silverlight 2 Beta 1
- Silverlight 2 SDK Beta 1
- Silverlight Tools Beta 1 for VS 2008
- KB949325 for VS 2008.
ASP.NET MVC Framework - Preview 2
Make sure you also check out the ReadMe that describes some of the known issues, etc. related to both Silverlight 2 and ASP.NET MVC Preview 2.
Monday, February 11, 2008
ASP.NET Wiki Beta
Scott Hanselman recently posted about a project he is working on, the ASP.NET Wiki Beta. The goal is "To provide a targeted, categorized, human-hand-edited, and living Wiki for finding answers to ASP.NET questions." With the right input and moderation, this sounds like it could become a great resource. Please check it out and add any relevant content that you have found over your time working with ASP.NET.
Tuesday, January 22, 2008
Excluding Certain NUnit Tests
Another developer in my group wrote a set of automation tests that verify the conversion of some data within our database. He added these tests to our existing testing assembly. We are also using NAnt to run and validate the tests in that assembly as part of our build process. However, we did not want these tests to be included in the normal test run during the build. So to accommodate this, we added a category of DatabaseConversion to the test fixture:
[TestFixture]
[Category("DatabaseConversion")]
Then I modified the exclude list for the NAnt task as follows:
<exec program="nunit-console.exe" basedir="${nunit.dir}" workingdir="${working.dir}">
<arg value="${testassembly}" />
<arg value="/xml:UnitTests.xml" />
<arg value="/exclude:DatabaseConversion" />
</exec>
As I thought about this, I was worried that if someone forgets to put the appropriate category on a test, or we decide that another category of tests needs to be excluded, we would need to update the NAnt task again. So I did some digging in the NUnit documentation and found that there is an Explicit attribute that you can add at the TestFixture or Test level. The ExplicitAttribute documentation describes this as follows:
"The Explicit attribute causes a test or test fixture to be ignored unless it is explicitly selected for running. The test or fixture will be run if it is selected in the gui, if its name is specified on the console runner command line as the fixture to run or if it is included by use of a Category filter.
An optional string argument may be used to give the reason for marking the test Explicit.
If a test or fixture with the Explicit attribute is encountered in the course of running tests, it is skipped unless it has been specifically selected by one of the above means. The test does not affect the outcome of the run at all: it is not considered as ignored and is not even counted in the total number of tests. In the gui, the tree node for the test remains gray and the status bar color is not affected."
So now the code looks like the following:
[TestFixture, Explicit]
[Category("DatabaseConversion")]
This is perfect for what we are trying to accomplish, as it will exclude the tests unless they are explicitly selected to be run. And I no longer need to worry about keeping the exclude list for the NAnt task up to date.
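For completeness, here is a sketch of how the two attributes sit together on a fixture (the fixture and test names below are hypothetical):

using NUnit.Framework;
using NUnit.Framework.SyntaxHelpers;

[TestFixture, Explicit]
[Category("DatabaseConversion")]
public class DatabaseConversionTests
{
    [Test]
    public void Converted_Data_Is_Present()
    {
        // Long-running conversion checks would go here. This fixture is skipped
        // unless it is explicitly selected or included with a category filter.
        Assert.That(true, Is.True);
    }
}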
Composition over Inheritance
Recently, when walking through the design of some new features for an application, another developer asked: "Why should I use composition over inheritance?" I attempted to answer that question, and felt that I did not do as good a job answering it as I would have liked. I was recently looking for something else on Phil Haack's blog and ran across this good article, Composition over Inheritance and other Pithy Catch Phrases.
I think that Phil does a very nice job of explaining the argument and the key point in why this question should be asked: "The goal is not to bend developers to the will of some specific patterns, but to get them to think about their work and what they are doing." Please give this article a read for more details.
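To give a flavor of the distinction, here is a minimal, hypothetical example (the class names are made up): inheriting from a logger would expose everything the base class offers and tie you to it, while composing with one keeps the public surface small and lets the wrapped behavior be swapped later.

public class FileLogger
{
    public void Write(string message)
    {
        System.IO.File.AppendAllText("audit.log", message + System.Environment.NewLine);
    }
}

// Composition: AuditLog has a FileLogger rather than being one, so callers only
// see RecordAction, and the underlying logger can change without affecting them.
public class AuditLog
{
    private readonly FileLogger logger = new FileLogger();

    public void RecordAction(string user, string action)
    {
        logger.Write(string.Format("{0}: {1} by {2}", System.DateTime.Now, action, user));
    }
}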
Wednesday, January 16, 2008
Getting Assembly Public Key
I recently needed to get the public key of an assembly, so I went straight to the GAC (Global Assembly Cache) located at %WinDir%\assembly, knowing that the public key token is one of the columns shown when you browse it in Windows Explorer. However, the assembly that I wanted was not registered in the GAC. So I navigated to the location of the assembly on my hard drive and started poking around in the file properties, thinking the public key value might be in there; no luck. Then I remembered Lutz Roeder's Reflector. I opened the assembly using Reflector and there it was, down in the details for that assembly. Another reason to keep this very handy utility around.
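If Reflector is not handy, the public key token can also be read programmatically; below is a minimal sketch, assuming you know the path to the assembly (the file name is hypothetical). The sn.exe tool that ships with the .NET SDK can show the same information from the command line via sn -Tp <assembly>.

using System;
using System.Reflection;

class ShowPublicKeyToken
{
    static void Main()
    {
        // Read the assembly's identity without loading it into the current AppDomain.
        AssemblyName name = AssemblyName.GetAssemblyName(@"C:\Temp\MyLibrary.dll");

        // Print the public key token as lowercase hex (empty if the assembly is not signed).
        byte[] token = name.GetPublicKeyToken();
        Console.WriteLine(BitConverter.ToString(token).Replace("-", "").ToLowerInvariant());
    }
}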