Marc's Musings News Feed 
Sunday, March 9, 2008  |  From Marc's Musings

So, hash has been on my mind lately.  No, not that kind of hash, or that kind either.  First, there was last week, when I installed Internet Explorer 8 beta 1.  I was reading the release notes and was amazed to find that # (you know, octothorpe, pound sign) was not considered part of the URL by this version.  Thus you can't link directly to a named element on a page. Eeew!

Then today, Hugh Brown dropped a comment on my diatribe post about value-types, reference-types, Equals and GetHashCode. The post has been live for many months now, and has quite a bit of Google juice. Until now, nobody has ever quibbled with the stuff I wrote, but Hugh had some interesting observations.

First, the little stuff

In a minor part of his comment, he was surprised by the many overloads of GetHashCode that I suggest, wondering why I didn't just always expect callers to use the params int[] version. Quite simply, by providing several overloads for a small number of arguments (5 in my example), I avoid paying the cost of allocating the array of integers and copying the values on each call to CombineHashCodes. While this may seem like a trivial savings, remember that GetHashCode is called many times when dealing with Hashtable collections, so it is worth providing expedited code paths for the most common usages. Additional savings inside the CombineHashCodes method come from avoiding the loop setup/iteration overhead. Finally, in optimized builds, these simpler method calls will be inlined by the compiler and/or JIT, whereas methods with loops in the body are never inlined (in CLR releases thus far). It is worth noting that the .Net runtime implementation does the same thing for System.Web.Util.HashCodeCombiner and System.String.Format.
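The rationale translates to any language with varargs. Here's a Python sketch (hypothetical names; masking to 32 bits stands in for C#'s unchecked Int32 arithmetic): the fixed-arity fast paths skip the argument-array allocation and the loop, but every fast path must return exactly what the general varargs path would.

```python
MASK = 0xFFFFFFFF  # emulate C#'s unchecked 32-bit Int32 wraparound

def combine_many(*hashes):
    # general path: allocates a varargs tuple and loops over it
    h = 0
    for v in hashes:
        h = (((h << 5) + h) ^ v) & MASK
    return h

def combine2(h1, h2):
    # fixed-arity fast path: no tuple allocation, no loop setup
    return (((h1 << 5) + h1) ^ h2) & MASK

def combine3(h1, h2, h3):
    # chains the two-value fast path left to right
    return combine2(combine2(h1, h2), h3)

# the fast paths must agree with the general path for the same inputs
print(combine2(1, 2), combine_many(1, 2))        # 35 35
print(combine3(1, 2, 3), combine_many(1, 2, 3))  # 1152 1152
```

The agreement between paths is the whole point: a caller must get the same hash regardless of which overload handled the call.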

To the meat of the comment

The main body of his comment was that my code didn't actually return useful values. That concerned me a lot. Given that his test was written in Python with an inlined implementation, I had to write my own test jig. Unfortunately, it confirmed his complaint. To be fair, the values he was using to test were not the values you would normally expect from GetHashCode, which are well-distributed across the entire 32-bit range of an Int32; he was using sequential, smallish numbers, which skewed the results oddly. That said, the outputs SHOULD have been different for very similar inputs. I delved a little into the code I originally wrote and found that what's on the web page does NOT match what is now in use in the BCL's internal code to combine hash codes (which is where I got the idea of left-shifting by 5 bits before XORing). I think my code was originally based on the 1.1 BCL, but I'm not really sure.

In the .Net 2.0 version, there's a class called System.Web.Util.HashCodeCombiner that actually reflects essentially the same technique as my code, with one huge and very significant difference. Where I simply left-shift the running hash code by 5 bits and then XOR in the next value, they are doing the left-shift and also adding in the running hash, then doing the XOR.

Why so shifty, anyway?

You might be wondering why we do the left shift in the first place. The simple answer is that left-shifting the running hash by some number of bits partially preserves its low-order bits. This prevents the incoming value from XORing away all the significance of the bits accumulated thus far, and also ensures that low-byte-only intermediate hash codes don't simply cancel each other out. By shifting left 5 bits, we're simply multiplying by 32. Then the original running hash value is added in one more time, making the effective multiplier 33. This isn't far off from Hugh's suggestion of multiplying by 37, while being significantly faster in the binary world of computers. Once the shift and add (i.e. multiplication by 33) is complete, XORing in the new value results in much better distribution of the final result.
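To see why the add matters, here's a small Python sketch (hypothetical names; masking to 32 bits mimics C#'s unchecked Int32 arithmetic): the shift-only combiner lets two different small inputs collide outright, while the shift-and-add (×33) version keeps them apart.

```python
MASK = 0xFFFFFFFF  # emulate 32-bit wraparound

def combine_shift_only(values):
    # my original (buggy) rule: hash = (hash << 5) ^ value
    h = 0
    for v in values:
        h = ((h << 5) ^ v) & MASK
    return h

def combine_shift_add(values):
    # the corrected rule: hash = ((hash << 5) + hash) ^ value, i.e. hash * 33 ^ value
    h = 0
    for v in values:
        h = (((h << 5) + h) ^ v) & MASK
    return h

# shift-only: two different input sequences, one hash
print(combine_shift_only([1, 2]), combine_shift_only([0, 34]))  # 34 34
# shift-and-add keeps them distinct
print(combine_shift_add([1, 2]), combine_shift_add([0, 34]))    # 35 34
```

With shift-only, [1, 2] becomes (1 << 5) ^ 2 = 34, exactly the same as [0, 34]; adding the running hash back in breaks that symmetry.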

I've updated my code in the Utilities library, and I'm going back to the original post to point to this post and the new code. So, I owe you one, Hugh...and maybe Microsoft does too because while I was reviewing their code in the newly released BCL source code, I found a very unexpected implementation. This is the snippet in question:

    internal static int CombineHashCodes(int h1, int h2) {
        return ((h1 << 5) + h1) ^ h2; 
    }
 
    internal static int CombineHashCodes(int h1, int h2, int h3) { 
        return CombineHashCodes(CombineHashCodes(h1, h2), h3);
    } 

    internal static int CombineHashCodes(int h1, int h2, int h3, int h4) {
        return CombineHashCodes(CombineHashCodes(h1, h2), CombineHashCodes(h3, h4));
    } 

    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5) { 
        return CombineHashCodes(CombineHashCodes(h1, h2, h3, h4), h5); 
    }
Did you see the oddity? The implementation taking 4 values does its work by calling the two-value one three times: once to combine the first pair of arguments (h1 and h2), once to combine the second pair (h3 and h4), then once more to combine the two intermediate values. That's different from the left-to-right chaining that the 3-value and 5-value overloads use. I personally think it should have called the 2-value overload against the output of the 3-value overload to fold in the 4th value (h4), matching what the 3-value and 5-value overloads do. In other words, the method should be:
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4) {
        return CombineHashCodes(CombineHashCodes(h1, h2, h3), h4);
    }
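A quick Python check (a sketch using the same (h1 * 33) ^ h2 rule, masked to 32 bits) confirms that the two groupings really do produce different values for the same four inputs:

```python
MASK = 0xFFFFFFFF  # emulate C#'s unchecked 32-bit Int32 arithmetic

def combine2(h1, h2):
    # the BCL's two-value combiner: ((h1 << 5) + h1) ^ h2
    return (((h1 << 5) + h1) ^ h2) & MASK

# the BCL's 4-value overload: combine pairwise, then combine the intermediates
bcl_style = combine2(combine2(1, 2), combine2(3, 4))

# left-to-right folding, the way the 3-value and 5-value overloads chain
fold_style = combine2(combine2(combine2(1, 2), 3), 4)

print(hex(bcl_style), hex(fold_style))  # 0x4e4 0x9484
```

Same inputs, two different hashes — which is exactly the gotcha if you mix the two styles in one library.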

Perhaps they don't care that the values are inconsistent, especially since they don't provide a params int[] overload, but imagine if I had blindly copied that code and you got two different values from this:

   Console.WriteLine("Testing gotcha:");
   Console.WriteLine(String.Format("1,2: {0:x}", Utilities.CombineHashCodes(1, 2)));
   Console.WriteLine(String.Format("1,2,3: {0:x}", Utilities.CombineHashCodes(1, 2, 3)));
   Console.WriteLine(String.Format("1,2,3,4: {0:x}", Utilities.CombineHashCodes(1, 2, 3, 4)));
   Console.WriteLine(String.Format("1,2,3,4,5: {0:x}", Utilities.CombineHashCodes(1, 2, 3, 4, 5)));
   Console.WriteLine(String.Format("[1,2]: {0:x}", Utilities.CombineHashCodes(new int[] { 1, 2 })));
   Console.WriteLine(String.Format("[1,2,3]: {0:x}", Utilities.CombineHashCodes(new int[] { 1, 2, 3 })));
   Console.WriteLine(String.Format("[1,2,3,4]: {0:x}", Utilities.CombineHashCodes(new int[] { 1, 2, 3, 4 })));
   Console.WriteLine(String.Format("[1,2,3,4,5]: {0:x}", Utilities.CombineHashCodes(new int[] { 1, 2, 3, 4, 5 })));

Where we are at now


Here is the revised version of the CombineHashCodes methods from my Utilities library:

    public static partial class Utilities
    {
        public static int CombineHashCodes(params int[] hashes)
        {
            int hash = 0;

            for (int index = 0; index < hashes.Length; index++)
            {
                hash = (hash << 5) + hash;
                hash ^= hashes[index];
            }

            return hash;
        }

        private static int GetEntryHash(object entry)
        {
            int entryHash = 0x61E04917; // slurped from .Net runtime internals...

            if (entry != null)
            {
                object[] subObjects = entry as object[];

                if (subObjects != null)
                {
                    entryHash = Utilities.CombineHashCodes(subObjects);
                }
                else
                {
                    entryHash = entry.GetHashCode();
                }
            }

            return entryHash;
        }

        public static int CombineHashCodes(params object[] objects)
        {
            int hash = 0;

            for (int index = 0; index < objects.Length; index++)
            {
                hash = (hash << 5) + hash;
                hash ^= GetEntryHash(objects[index]);
            }

            return hash;
        }

        public static int CombineHashCodes(int hash1, int hash2)
        {
            return ((hash1 << 5) + hash1) ^ hash2;
        }

        public static int CombineHashCodes(int hash1, int hash2, int hash3)
        {
            int hash = CombineHashCodes(hash1, hash2);
            return ((hash << 5) + hash) ^ hash3;
        }

        public static int CombineHashCodes(int hash1, int hash2, int hash3, int hash4)
        {
            int hash = CombineHashCodes(hash1, hash2, hash3);
            return ((hash << 5) + hash) ^ hash4;
        }

        public static int CombineHashCodes(int hash1, int hash2, int hash3, int hash4, int hash5)
        {
            int hash = CombineHashCodes(hash1, hash2, hash3, hash4);
            return ((hash << 5) + hash) ^ hash5;
        }

        public static int CombineHashCodes(object obj1, object obj2)
        {
            return CombineHashCodes(obj1.GetHashCode(), obj2.GetHashCode());
        }

        public static int CombineHashCodes(object obj1, object obj2, object obj3)
        {
            return CombineHashCodes(obj1.GetHashCode(), obj2.GetHashCode(), obj3.GetHashCode());
        }

        public static int CombineHashCodes(object obj1, object obj2, object obj3, object obj4)
        {
            return CombineHashCodes(obj1.GetHashCode(), obj2.GetHashCode(), obj3.GetHashCode(), obj4.GetHashCode());
        }

        public static int CombineHashCodes(object obj1, object obj2, object obj3, object obj4, object obj5)
        {
            return CombineHashCodes(obj1.GetHashCode(), obj2.GetHashCode(), obj3.GetHashCode(), obj4.GetHashCode(), obj5.GetHashCode());
        }
    }
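As a sanity check, the object-array path can be transliterated to Python (a sketch with hypothetical names; Python's hash() of small ints stands in for GetHashCode, and masking to 32 bits mimics the unchecked Int32 arithmetic): nested arrays are combined recursively, and null entries get the fixed sentinel.

```python
MASK = 0xFFFFFFFF       # emulate C#'s unchecked 32-bit Int32 arithmetic
NULL_HASH = 0x61E04917  # the sentinel the library uses for null entries

def combine_hashes(values):
    # the params int[] path: hash = ((hash << 5) + hash) ^ value
    h = 0
    for v in values:
        h = (((h << 5) + h) ^ v) & MASK
    return h

def entry_hash(entry):
    # None gets the fixed sentinel; nested lists are combined recursively
    if entry is None:
        return NULL_HASH
    if isinstance(entry, list):
        return combine_objects(entry)
    return hash(entry) & MASK

def combine_objects(objects):
    # the params object[] path, folding each entry's hash the same way
    h = 0
    for obj in objects:
        h = (((h << 5) + h) ^ entry_hash(obj)) & MASK
    return h

print(hex(combine_objects([1, [2, 3], None])))
```

The two paths stay consistent: feeding the object version plain integers gives the same result as feeding their hash codes to the integer version.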

Wednesday, January 2, 2008  |  From Marc's Musings

Little bunny FoFo, hopping through the forest, scooping up the field mice and bopping them on the heads.

Down came the good fairy, "Little bunny FoFo, I don't want to see you scooping up the field mice and bopping them on the heads. I'll give you three chances and then I'll turn you into a goon".

  1. The Blues deserve a good bounce. I just finished watching the Blues v. Stars game and there's no question that they outplayed the Stars all but the first couple minutes. That crappy bounce off Tkachuk's skate beat us. That's all.
  2. Xen's bout with RSV teaches a couple lessons. First, no matter how hard the nights are, you will miss your boy's loud cry when it is replaced by a weak wail of despair. My soul can't handle that kind of trial very often and I thank God that he knows what I can bear. It's simple: when you can't do anything to help, you feel useless... when everything you do (sucking his nose clear, pounding the phlegm out of his lungs) actually makes your child cry more, it's HARD. Second, there are things that make you remember where you are in life... I'm a senior developer, the old-wise-guy at church, and a BABY as a dad. I don't know squat.
  3. When you need something that isn't .Net all the way, expect it to be hard. In this case, Subversion to Team Foundation tools SUCK. And the Subversion paradigm expects you to Alt.Net it and NOT use TFS, even though it is several dozen times better a tool.  Expect me to be announcing something based on the TFS Migration Toolkit soon, cause damn sure Microsoft isn't going to bother.
  4. Be VERY careful what skills you teach your children. No matter if they are good, or bad, they will be used against you.
    The other day, at a Blues home game; my 4 year old daughter, Arianna, was squirming a bit in the seat. I told her that if she didn't stop I was going to turn her into a goon. The next whistle (she's a GOOD hockey fan) she asks, "Dad, are fairies real?".
    My spidey sense being dull, I answered, "No, they're just like Santa Claus, just a character."
    She replies with no delay, "Then you can't turn me into a goon."

Monday, November 19, 2007  |  From Marc's Musings

It seems there are several not-very-overlapping audiences for this blog. There are people reading for the SQL stuff, especially the datetime-related stuff. There are people reading for the Lightweight Code Generation stuff, especially the DynamicMethod/DynamicSorter library. Then there are the people hunting down information about the RSSToolkit library. Finally, there are the people following the recent URITemplate library.

Since many of you visitors seem to have specific interests, I've added the ability to subscribe to the individual labels applied to the posts, via the excellent tip given by Daniel Cazzulino in his instructional posting.

Just check out the labels listing on the right-side navigation. Oh, if you only read via a feed, this might be worth a read of the actual page.

Tuesday, November 13, 2007  |  From Marc's Musings

I'm in...

  • Say Everything As If Speaking To Everyone

    (because you are)

  • If You Must Be A Jerk, Don't Be An Anonymous One

    (because that's cowardly)
  • Encourage Others To Abide By This Code

    (because it's neighborly, plus recursive rules are fun)
  • When Others Don't Care To Abide, Ignore Them

    (because they're not worthy of your time)

A Simple Code - Web Karma, Distilled

Tuesday, October 30, 2007  |  From Marc's Musings

I can't tell you how happy the last few days have made my inner geek. Last week the Chumby started shipping, and today the by-far-coolest idea ever is available for order.

Do you have a digital camera? Snap a lot of shots? Forget to get around to uploading them to your PC and your online site of choice? Have we got a solution for you: just get an Eye-Fi SD memory card, configure it from your PC/Mac, and then install it in your camera. It'll store 2GB of pictures, and every time it gets near a wi-fi network that you have configured it to use, poof, instant uploads to your online site. This baby supports all the players (except WinkFlash, what's up with THAT?).

For those of you with CF cards instead... PFFFTT!

Eye-Fi

Saturday, October 27, 2007  |  From Marc's Musings

Today I released a new version of the UriPattern and UriTemplate library on CodePlex (previously announced here). There are two changes in this release:

  1. A bug reported by Darrel Miller where meta characters that have special meaning in Regex expressions were not properly escaped. I inherited this bug from the original implementation I based the library on, but no excuses, this was stupid. Sorry to anyone bit by this.
  2. I've added the ability to specify that a UriPattern should be compiled. This should speed up patterns that are used very frequently.

Pick up Release 1.1 on CodePlex


Tuesday, October 9, 2007  |  From Marc's Musings

With a new baby around, you can imagine that our family's sleep patterns are changing. To say that we are tired misses the point entirely... we're all a "bit slow" round the house. Arianna doesn't want to get up for the Montessori school that she dearly loves, Beth is stressed and struggling with emotion... and mellow me is actually not catching those "snaps of testosterone". That's just the emotional impact... the cognitive impact is much worse. I've found it difficult to grok code-review changes that occurred in the last 5 days at work... I couldn't even recognize a bad web.config connection-string issue (something that would have jumped out before the problem description was finished a mere week ago). It's getting better, though... today is better than yesterday by far... and the biggest difference is in how much sleep we've gotten. I can easily see the pattern in myself--I might even generalize to Beth--but did I extend this to a general behavior pattern for Arianna, or kids in general? I am not that smart (today?).

Today, I read an article by Po Bronson, who authored an article a while back that really resonated with me. I wrote about it here back in March. This new article shows astonishing evidence for the direct link between how much sleep a child gets and their cognitive ability the next day (and following days). In one study of 77 kids (half asked to stay up a little later and half asked to go to bed a little earlier), the resulting one-hour difference in the amount of sleep produced, after three days, the same cognitive gap as that between an average 4th and 6th grader. In other words, three hours of sleep difference cost two years' worth of cognitive ability.

So let, no MAKE, your kids (and you) get that extra sleep. Read more at:

Can a Lack of Sleep Set Back Your Child's Cognitive Abilities?

Friday, October 5, 2007  |  From Marc's Musings

I am happy to announce the birth of Xavier Eli Brooks at 1322 of October 4th.

After faking us out by turning himself around the night before the inversion, he resumed his (dad mirroring) ways and refused to turn the crown fully upside down. After 12 hours of Cervidil and 18 hours of contractions standing on his ear, he wasn't coming any closer to finding the stage door so we opened a new one just to his right.

He emerged warping space-time at a mass of 7 pounds 6 ounces, and a length of 19 3/4 inches, not that those numbers actually tell you anything about him.

Beth and baby are both fine, thanks for asking.

Brooks... Xen Brooks

Friday, September 7, 2007  |  From Marc's Musings

 I will always remember the feeling of wonder that overtook me as I read "A Wrinkle in Time" for the first time in 1971... a book born of a fertile mind the same year I was born has shaped me ever since.  We've lost a wonderful person today.

Madeleine L’Engle, Children’s Writer, Is Dead - New York Times

Friday, August 17, 2007  |  From Marc's Musings

Introduction

Checking out a new blog today [Davy Brion's Blog] I stumbled across a very nice entry about Implementing A Value Object. Go read that now if you don't know what a value object is, what immutable means or why it's good.

Identity is who you are

What I want to talk about is GetHashCode() as used with value-type objects (e.g. struct in C#) but to do that, I really need to talk about the difference between reference-type objects (RTOs from here out) vs. value-type objects (VTOs from here out). Feel free to skip down if this is old hat to you.

What's important to realize is that if you are a reference-type object, your identity revolves around "where you are". This is expressed, in terms of .Net, by the fact that you have the same reference handle/memory address. The problem with this is that you might have a Person object that currently represents me and thus has the FirstName property == "Marc" and LastName property == "Brooks". If I give you a reference to that Person object and you change the FirstName property to "Charles", you're suddenly talking about my father. What's dangerous about this is that you have changed the underlying object to which I gave you a reference, so my reference now also seems to be my father.

On the other hand, if I gave you a copy of the original Person object (perhaps via a Clone() operation), then you can change any property you wish and I will never know. This is good, if that's what you intend. Your personal copy of the object is not my copy of the object, they have different physical identities, even though they might initially share the same logical identity. To me, it's much like the difference between giving you a money order, or simply a copy of a money order. In the former case you are free to set the payee name to be whatever you want and cash/spend that money order.  In the latter, you can do whatever you want to your copy, but it doesn't affect mine.

VTOs automatically enforce the making of copies; you simply cannot change the original, no matter what... though you might change the property values on your copy, this does nothing to my original's properties. What this means is that comparing value-type objects cannot meaningfully compare physical identity (e.g. the reference handle/memory address) between two value-type objects, because it will always be different.

So, how do you meaningfully compare VTOs? By their logical identity. In the example of a money order, the logical identity is actually the money order number, not the physical piece of paper. Some less-sophisticated verifications of the money order's validity might hinge on the appearance of the piece of paper, but a much better authoritative verification comes from calling in the money order number to the issuer and seeing if that number is still valid and for what amount. Even modern sporting event venues operate similarly, checking not the physical appearance of a ticket; rather, they scan the barcode and match that against a database to ensure the ticket is valid and hasn't been used yet.

Thus, a VTO's identity must be defined in terms of one or more of the property values. To check logical equality of two VTOs, you compare the equality of the identifying properties. In the case of a money order, the money order number.

Collections and hash-codes

When you drop an object in a collection, you expect to be able to retrieve that object (or, in the case of a VTO, a copy of the object) later. The simplest way is enumeration, but that's not very quick. More commonly, you stick the object in some sort of dictionary keyed by some value. In the case of an Array, the key is simply the integer index of where you stuck the item, but for large numbers of potential objects you really need an identifying property on the object itself. In the event ticket example, the key used to store and retrieve the ticket information in a collection (perhaps a Dictionary<TicketNumber, TicketStatus>) is the barcode value. To make storage and lookup quick, the collection internally stores the key values in "buckets" that are based in some way on the key's value. Each "bucket" contains a list of objects whose keys map to the same bucket number. Once you find the right bucket, you scan through all the objects in that bucket by doing an identity comparison. This means that:

  1. There must be some way to map the key values into bucket numbers.
  2. The mapping should not change if the identity of the object doesn't change.
  3. When two objects map to the same bucket number, they are disambiguated using the identity comparison.
  4. Once you've placed a reference-type object in a collection, changes to its identifying properties are going to break the comparison on retrieval.
  5. Changes to a value-type-object are impossible, since the collection is holding a copy.
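Point 4 is easy to demonstrate in Python, whose dict plays the role of Hashtable here (a sketch with hypothetical names; integer barcodes keep the hashes predictable):

```python
class Ticket:
    def __init__(self, barcode):
        self.barcode = barcode

    def __eq__(self, other):
        return isinstance(other, Ticket) and self.barcode == other.barcode

    def __hash__(self):
        # the hash must be driven by the same identifying value as __eq__
        return hash(self.barcode)

t = Ticket(1001)
seats = {t: "Section 110, Row F"}  # stored in the bucket for hash(1001)
t.barcode = 9999                   # identity mutated AFTER insertion...
print(t in seats)                  # False: lookup now probes the wrong bucket
```

The entry is still in the dictionary, but no key will ever find it again, because lookups hash the mutated identity while the stored entry sits in the bucket chosen from the original one.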

The standard method used in the .Net Framework Class Library for identity comparison is the Equals() method. The standard method in the FCL to map object key values into buckets is the GetHashCode() method. In practical terms, this means that the Equals() method and the GetHashCode() method work together for any object you might want to place in a collection. They must agree on what properties of an object are identifying. When you design a value-type object, you really have to get this right, because VTOs have no meaningful physical identity.

Equals() and GetHashCode() are free!

In the .Net runtime, all objects automatically inherit an implementation of both the Equals() method and the GetHashCode() method. But as with many things in life, not all free things are really worth much. The default implementation of the Equals() method for reference-type objects simply compares the reference handles for equality: if we're pointing at the same object, we're talking about equal objects. The default implementation of the GetHashCode() method similarly bases its answer on the reference handle value. For value-type objects, the .Net runtime treats them as if they inherit from ValueType, so the Equals() method on ValueType is what gets called. This method compares the individual fields of the two objects and returns true only if each is equal. Likewise, the GetHashCode() method of ValueType is the default, and it merely computes the field-by-field hash-codes and combines them in an unspecified way to generate an overall object hash-code.

In summary, this means that the default treatment of VTOs is to treat all fields as identifying, and the default treatment of RTOs is to treat none of the fields as identifying. Rarely would either be the right thing to do, but that's what you get for the low-low price of free.
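Python draws the same line with its defaults, which makes for a quick illustration (a sketch; names are hypothetical): a plain class gets identity-based equality and hashing like an RTO, while a tuple, like a VTO, treats every field as identifying.

```python
class Point:  # RTO-style defaults: identity only, no field comparison
    def __init__(self, x, y):
        self.x, self.y = x, y

a, b = Point(1, 2), Point(1, 2)
print(a == b)  # False: same field values, different identities

t1, t2 = (1, 2), (1, 2)
print(t1 == t2, hash(t1) == hash(t2))  # True True: every field is identifying
```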

If you have a logical identity for a VTO or an RTO, then you need to supply your own implementation of Equals() and GetHashCode(). As detailed above, you need to make sure that they are coupled in their understanding of which fields and/or properties are the identifying ones.

Building your equality

Once you've identified which fields or properties to use when comparing two objects for logical identity, you need to implement an Equals() method and a GetHashCode() method the right way. For the Equals() method, there are only a few rules:

  1. Thou shalt not throw an exception
  2. Thou shalt implement reasonable overrides (at least for your own type and taking System.Object)
  3. If you override operator ==, you must have a corresponding Equals() method.
  4. If you implement the IComparable interface, you should override Equals()

So, a classic implementation of a value-type object would be something like this (borrowed from Davy's post):

public override bool Equals(object obj)
{
   Address address = obj as Address;
   if (address != null)
   {
      return this.Equals(address);
   }
 
   return object.Equals(obj);
}
 
public bool Equals(Address address)
{
   if (address != null)
   {
      return this.Street.Equals(address.Street)
             && this.City.Equals(address.City)
             && this.Region.Equals(address.Region)
             && this.PostalCode.Equals(address.PostalCode)
             && this.Country.Equals(address.Country);
   }
 
   return false;
}

Note that it's perfectly fine for the Address object to have many other properties that are not considered identifying and thus not included in the implementation of the Equals() method. That's really the whole point of implementing the Equals() method on an object. For VTOs you are trying to ignore some fields that the default ValueType implementation would have included. For RTOs, you are trying to establish some properties that give logical equivalence.
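The same coupling can be sketched in Python (field names follow Davy's Address example; the extra note field is a hypothetical non-identifying property): equality and the hash are driven by one shared tuple of identifying fields, and everything else stays out of both.

```python
class Address:
    def __init__(self, street, city, region, postal_code, country, note=""):
        self.street, self.city, self.region = street, city, region
        self.postal_code, self.country = postal_code, country
        self.note = note  # NOT identifying: excluded from __eq__ and __hash__

    def _identity(self):
        # single definition of the identifying fields, used by both methods
        return (self.street, self.city, self.region,
                self.postal_code, self.country)

    def __eq__(self, other):
        return isinstance(other, Address) and self._identity() == other._identity()

    def __hash__(self):
        return hash(self._identity())

a = Address("12 Elm St", "Springfield", "IL", "62701", "US", note="ring twice")
b = Address("12 Elm St", "Springfield", "IL", "62701", "US", note="leave at door")
print(a == b, hash(a) == hash(b))  # True True: notes differ, identity doesn't
```

Funneling both methods through one _identity() helper makes it impossible for Equals and GetHashCode to drift out of agreement.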

Computing a useful hash-code


Once you've established the body of the Equals() method, you absolutely must define the GetHashCode() method. This is where Davy gets it 99% right. He correctly states that every field/property value you call Equals() against should also be included in the GetHashCode() return value. Most people get that right, and Davy avoids the common mistake of adding the GetHashCode() sub-values together (which would skew the distribution pattern toward larger absolute values) by XORing the sub-values instead. This is excellent, but we can get it a tiny bit better by following the pattern of many Microsoft-provided classes and shifting the accumulated value before XORing in the next sub-value. This leads to the low-order bits of the sub-value hash-codes being "distributed" into the final value instead of canceling each other out. Thus, my version of Davy's method is:

public override int GetHashCode()
{
      return (((((((this.Street.GetHashCode() << 5)
                   ^ this.City.GetHashCode()) << 5)
                 ^ this.Region.GetHashCode()) << 5)
               ^ this.PostalCode.GetHashCode()) << 5)
             ^ this.Country.GetHashCode();
}

Unfortunately, that's kind of ugly and error prone due to all the operator precedence issues. Can we make it better?

Introducing CombineHashCode


So, a much better approach would be to have a little set of helper methods that knows how to do the combining according to this rule. For simplicity and ultimate flexibility, we'll have a version that takes a params array of objects and calls GetHashCode() on each of them in turn. For better performance (to avoid boxing and unboxing), we'll add a version that takes a params array of precomputed hash codes (actually System.Int32 values). Finally, for ultimate performance, we'll have a few overloads that take a specific number of objects or hash-code values to avoid the allocation of the params array. You can add more as needed, but you really ought to rethink your class if you get more than five identifying fields/properties.

public static partial class Utilities
{
    public static int CombineHashCodes(params int[] hashes)
    {
        int hash = 0;
 
        for (int index = 0; index < hashes.Length; index++)
        {
            hash <<= 5;
            hash ^= hashes[index];
        }
 
        return hash;
    }
 
    public static int CombineHashCodes(params object[] objects)
    {
        int hash = 0;
 
        for (int index = 0; index < objects.Length; index++)
        {
            int entryHash = 0x61E04917; // slurped from .Net runtime internals...
            object entry = objects[index];

            if (entry != null)
            {
                object[] subObjects = entry as object[];

                if (subObjects != null)
                {
                    entryHash = Utilities.CombineHashCodes(subObjects);
                }
                else
                {
                    entryHash = entry.GetHashCode();
                }
            }
 
            hash <<= 5;
            hash ^= entryHash;
        }
 
        return hash;
    }
 
    public static int CombineHashCodes(int hash1, int hash2)
    {
        return (hash1 << 5)
               ^ hash2;
    }
 
    public static int CombineHashCodes(int hash1, int hash2, int hash3)
    {
        return (((hash1 << 5)
                 ^ hash2) << 5)
               ^ hash3;
    }
 
    public static int CombineHashCodes(int hash1, int hash2, int hash3, int hash4)
    {
        return (((((hash1 << 5)
                   ^ hash2) << 5)
                 ^ hash3) << 5)
               ^ hash4;
    }
 
    public static int CombineHashCodes(int hash1, int hash2, int hash3, int hash4, int hash5)
    {
        return (((((((hash1 << 5)
                     ^ hash2) << 5)
                   ^ hash3) << 5)
                 ^ hash4) << 5)
               ^ hash5;
    }
 
    public static int CombineHashCodes(object object1, object object2)
    {
        return CombineHashCodes(object1.GetHashCode()
            , object2.GetHashCode());
    }
 
    public static int CombineHashCodes(object object1, object object2, object object3)
    {
        return CombineHashCodes(object1.GetHashCode()
            , object2.GetHashCode()
            , object3.GetHashCode());
    }
 
    public static int CombineHashCodes(object object1, object object2, object object3, object object4)
    {
        return CombineHashCodes(object1.GetHashCode()
            , object2.GetHashCode()
            , object3.GetHashCode()
            , object4.GetHashCode());
    }
}

This leaves us with the final version of Davy's GetHashCode() method looking like this:

public override int GetHashCode()
{
      return CombineHashCodes(this.Street, this.City, this.Region, this.PostalCode, this.Country);
}

That's pretty clean and easy to understand, right?



Friday, August 10, 2007  |  From Marc's Musings

This is not right:

internal static bool DoesDbExist(SqlConnection conn, string database)
{
    using (SqlCommand cmd = conn.CreateCommand())
    {
        // prefer this to a where clause as this is not prone to injection attacks
        cmd.CommandText = "SELECT name FROM sys.databases";
        cmd.CommandType = CommandType.Text;

        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                string dbName = reader.GetString(0);
                if (string.Compare(dbName, database, true, CultureInfo.CurrentCulture) == 0)
                {
                    // the database already exists - return
                    return true;
                }
            }
        }
    }

    return false;
}


This is right:

internal static bool DoesDbExist(SqlConnection conn, string database)
{
    using (SqlCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "SELECT name FROM sys.databases WHERE name=@name";
        cmd.CommandType = CommandType.Text;
        cmd.Parameters.Add(new SqlParameter("@name", database));

        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            return reader.Read();
        }
    }
}
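The same principle carries to any database API. Here's a sketch in Python's sqlite3 against an in-memory database (table and data are made up for illustration): the value is bound as a parameter, separate from the SQL text, so hostile input can never change the query's shape.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE databases (name TEXT)")
conn.execute("INSERT INTO databases VALUES ('Northwind')")

def does_db_exist(conn, database):
    # the value is bound as a parameter, never spliced into the SQL string
    row = conn.execute(
        "SELECT name FROM databases WHERE name = ?", (database,)
    ).fetchone()
    return row is not None

print(does_db_exist(conn, "Northwind"))     # True
print(does_db_exist(conn, "x' OR '1'='1"))  # False: treated as a literal value
```

The injection attempt simply becomes a string that matches no row; there is no query-rewriting for it to exploit.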


Someone please assure me that this is not how everyone else handles avoiding SQL injection.

Wednesday, June 20, 2007  |  From Marc's Musings

Today, in blinding science, it turns out that kids who "smoke" candy cigarettes are more likely to try the real thing later.  Shocking, huh?

Can I please have my tax dollars back on this one?


Wednesday, June 20, 2007  |  From Marc's Musings

I've just created a new project on CodePlex, and it's got the first (and hopefully only) release available. Enjoy UriTemplate.

Some background whining...

I admit that sometimes I get a little jealous of other developers, who are not as limited in the things they can adopt. In some cases it's a cool new idea like building RESTful applications. In other cases it's a bit of nice functionality living in another platform like Java. In still more cases I'm lusting after the cool new stuff in various Microsoft .Net CTPs, betas and such.

The real-life world I find myself in, though, often has me coding against legacy systems running on ASP.Net 1.1, on servers I cannot control or upgrade with inherited systems that barely grok the idea that WebControls can have properties. Woe is me, and probably many others of you out there.

Today, however, I'm taking back the future for slobs like me, and I'm doing it one class at a time.

Enter the idea of fire...

A while back, I was reading Steve Maine's excellent blog and found the interesting post UriTemplate 101, which talks all about a new class available in an upcoming release of .Net. The basic idea of this class is to let you specify a pattern of replaceable tokens to use when constructing or parsing URIs. The class looks to be quite nice, but being a future release, I just filed it away for later cogitation.
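The gist of the class is easy to sketch. Here's a toy version in Python (the names and API are my own invention; the real .Net class is richer) that expands {name} tokens when building a URI and turns them into capture groups when parsing one:

```python
import re

class UriTemplate:
    """Toy URI template: '{name}' tokens bind values and match path segments."""

    TOKEN = re.compile(r"\{(\w+)\}")

    def __init__(self, template):
        self.template = template
        # Build a regex where each {name} becomes a named capture group
        # matching one path segment, and literal text is escaped.
        pattern = ""
        for part in re.split(r"(\{\w+\})", template):
            if part.startswith("{") and part.endswith("}"):
                pattern += "(?P<%s>[^/]+)" % part[1:-1]
            else:
                pattern += re.escape(part)
        self.regex = re.compile("^" + pattern + "$")

    def bind(self, **values):
        # Replace each token with its bound value to construct a URI.
        return self.TOKEN.sub(lambda m: str(values[m.group(1)]), self.template)

    def match(self, uri):
        # Parse a URI back into a dict of token values, or None if no match.
        m = self.regex.match(uri)
        return m.groupdict() if m else None
```

So `UriTemplate("/weather/{state}/{city}")` can both construct `/weather/WA/Seattle` from bound values and pull the values back out of an incoming path, which is exactly the two-way trick that makes the class attractive for hackable URLs.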

A log in the fireplace...

Dare and everyone else have been talking about this wonderful RESTful world forever, where everything is about URIs that mean something and state transitions occur by following those meaningful paths. Couple that with the long-standing best practice of building Web systems with "hackable URLs". This resonates with me, and I start thinking about UriTemplate as the application. Of course I don't have any new stuff I'm building that would let me play that way... until last week.

A match could start something...

Suddenly, a new project appears on the near horizon... a chance to retrofit a cool new set of functionality onto an existing ASP.Net 1.1 site. This new stuff would really benefit from hackable URLs and thus needs a good URL Rewriter and Virtual path handler. Sure, those exist, but almost all force me to map URLs to pages via some lovely RegEx matching. This project, however, is all content-driven and just cries out REST. I want a more general solution and UriTemplate sounds like a match.

Fanning the flames...

So, off I go, looking for the DLL for UriTemplate that Steve's talked about, for "investigation". I spin and whirl and Google and Live Search (not a verbable word!) till I'm blue in the face, but I can't figure out where this wonderful class has even been sneaked out for a peek. In fact, none of the searches turn up much more than Joe Gregorio's original idea posting, a follow-up on applying templated URIs to RESTful development.

Sputter, sputter...

Eventually, I stumble across Jeff Newsom's curiously titled posting about some upcoming WCF features that shows using a UriTemplate in a WebInvokeAttribute. He also mentions in another post that some functionality was "folded into the BizTalk Services SDK", which brings me right back to the start with Steve Maine's blog. So, I now know where to look. A quick download of the BizTalk Services SDK and I've got some code to look at. Fire up Reflector and ugh... way too complex for me to use since I can't deploy the SDK to my production environment. I guess I'm going to have to write my own, but I'm sure not going to back-port the one in the SDK.

I breathe some life into it...

So, last Friday, I finally decided it was time to just write the code myself. I did another search based on the links I'd previously found and stumbled across James Snell's posting about a draft specification for URI Templates, which included a Java implementation. This code is simple and clean... very likeable. Too bad it's in the wrong language and built for the wrong platform. But I know Java, I know C#, and I know how to make one look like the other. A short time later and I've got a fully functional .Net 2.0 version of the package that James wrote.

This week, my coworker Ryan Stephenson did a quick back-port to the .Net 1.1 framework (I told you we had deployment restrictions!) and today I bundled it all up, created a project on CodePlex and made a quick home page for it.

Bask in the glow...

So, like I said way back up there... there's now a perfectly serviceable UriTemplate implementation available for schmucks like me. If you are interested, the goods are here.

Saturday, June 16, 2007  |  From Marc's Musings

Thanks to some amazing work by Piyush Shah of Microsoft, the ASP.Net RssToolkit originally authored by Dmitry Robsman has grown up big and strong!

The new release adds some awesome features that many users have been asking for, Jon Gallant and I did some considerable tightening of the code base, and I got off my lazy butt and updated the Wiki using some documentation that Piyush wrote as a basis.

This release adds support for some huge features that I'll summarize here, but you should really head to the project home page to read the Wiki documentation.

New features:

  • Atom, RDF and OPML support! Available both for consuming and publishing, this includes full support for the required elements of the various RSS, Atom and RDF schemas. For OPML feeds, it supports aggregation of the referenced feeds (even if mixed format) into a single feed.
  • Strongly-typed code generation of classes that fully understand the feed schema, including any custom extensions like the Yahoo Media RSS extensions.
  • RSS/Atom/RDF schema validation during aggregation of OPML
  • Ability to reflect or generate any feed in any of the supported formats (including pulling Atom feeds in and morphing them to RSS on the way out).
  • DownloadManager can be used to cache any feed format and supports app-relative paths under ASP.Net applications
  • Added support for enclosures and qualified namespaces for feed elements
  • Now packaged as a Visual Studio solution with proper projects for all the sub-projects
  • Sporting a new complete set of Visual Studio Team unit tests (sorry, nUnit guys... haven't created parallel ones yet)

See this post for details on the earlier version 1.0 releases.

Thursday, June 14, 2007  |  From Marc's Musings

Anyone who has been bitten when going from SQL Server 2000 to SQL Server 2005 due to the (intentional) decision by Microsoft to ignore the ORDER BY in a VIEW that returns the entire result set can now get a HotFix to enable the legacy behavior. After installing the HotFix, you will also have to turn on trace flag 168.


Monday, June 11, 2007  |  From Marc's Musings

There is an extensive Dark Matter halo around the bright galaxy of human thought and intellect. [via: comment here]
David Crow is one of the IDiots.


Monday, June 11, 2007  |  From Marc's Musings

Recent research shows that including a scientific explanation of psychological phenomena increases people's acceptance of the explanation. This holds true even when the "science" is irrelevant. I suspect this extends to other explanations in other areas. Would you be more likely to accept a web-standard when backed with lots of numbers that you don't really (take time to) understand?
[via: Language Log]


Thursday, May 31, 2007  |  From Marc's Musings

Charles Petzold knows a lot about Windows; he's a smart guy, no doubt. But he's also occasionally just plain rude:

We still don't know if Sen. Brownback believes that humans and dinosaurs lived together, but now we do know that the Senator has a dogmatic disdain for science that precludes him from the job he's currently seeking. [more]

Don't mistake me, I'm not a Creationist. The simple fact is that macro-evolutionary theory as the origin of life on Earth is untestable and calling it out as such is valid within the scientific method.

More importantly, declaring someone unfit to be president because they don't share your faith in the scientific method as THE way to describe something is smug and self-aggrandizing.

Of course, that's just my opinion.


Saturday, April 28, 2007  |  From Marc's Musings

I'm getting caught up on my blogs and I'm happy to say that Tim Ewald finally gets it; now I can REST.

Thursday, November 30, 2006  |  From Marc's Musings

As a research project, Scott Eric Kaufman is seeing how fast a meme can spread. If you have a blog, please post a link to his post.


Last edited Dec 7, 2006 at 10:16 PM by codeplexadmin, version 1
