Tips, Troubleshooting

Enable NuGet PackageRestore on CC.NET

Last week I decided to revisit my MSBuild/NuGet patterns (see 1 and 2) and see if I could make any improvements to what I had come up with before.  In particular, I wanted to integrate Brad Wilson’s gist on downloading NuGet at runtime.  I did that for a couple of projects, including CompositionTests, and it works great.

But last night, NuGet 2.0 came out and CompositionTests stopped building on my CruiseControl.NET server, along with another project I haven’t released yet.  I did what I usually do when something breaks first thing in the morning: I hit “Force Build” and see what happens.  But it still failed, and eventually I went searching to see if there were any problems with NuGet 2.0.  Sure enough I found something, but it’s not a bug, it’s a feature!  NuGet 2.0 requires consent for package restore, because we all just know that the NSA is very interested in what packages we are using.  The NuGet team has been very open about this change and gave fair warning to get ready, which I read and ignored.

Well, I can’t ignore it anymore can I?


First and most obvious is a sad red circle in CCTray:


Like I said, normally I don’t even check the log before hitting “Force Build” but I already did that dance, so let’s look at the log.  I formatted this to fit your screen, and pulled some paths out for brevity, but you get the idea:

<error file="...\CompositionTests\.nuget\nuget.targets"
        timeStamp="06/19/2012 23:36:45">
    Package restore is disabled by default. To give consent, open the Visual Studio
    Options dialog, click on Package Manager node and check 'Allow NuGet to 
    download missing packages during build.' You can also give consent by setting 
    the environment variable 'EnableNuGetPackageRestore' to 'true'.
<error code="MSB3073"
        timeStamp="06/19/2012 23:36:45">
    The command ""...\nuget.exe" install "...\packages.config" -source "" -o 
    "...\packages"" exited with code -1.

Since we are working with a build server, checking a checkbox in the Package Manager options is not an option.  In this scenario, NuGet gives us another way to provide consent via an environment variable.  Let’s try that.


These steps will apply to Windows Server 2008 R2, because that’s what I’m running CC.NET on.

Set Environment Variable

  • Log on to the build server and open the Control Panel
  • Go to System & Security > System > Advanced System Settings
  • Click “Environment Variables”
  • Click “New…” under “System Variables”
  • Enter “EnableNuGetPackageRestore” as the name and “true” for the value.
  • Click OK > OK > OK.
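
If you would rather script this step (handy if you manage more than one build server), the same variable can be set from an elevated command prompt; the /M switch makes it a machine-level variable, which is what the service will see:

```bat
rem Requires an elevated prompt; /M writes to the machine (system) environment.
setx EnableNuGetPackageRestore true /M
```

Either way, you still need to restart the CruiseControl.NET service, since a running service only picks up environment changes on restart.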


Restart the CruiseControl.NET service

  • Open Control Panel > System & Security > Administrative Tools > Services
  • Select CruiseControl.NET and click “Restart” to restart the service.



Use “Force Build” to force a failing project to start in CC.NET.


And you’re done!

Tips, Troubleshooting

How NuGet Chooses the Nuspec File When Building From a Project File

If you have a class library project, it’s a good idea to learn how to use NuGet to package the resulting assembly. It’s a good idea even if you do not plan to share your assembly with the world, because you can still take advantage of the smarts built into NuGet to make managing your own packages easier. It’s not very hard at all to get up and running with your first package, and then you can start customizing from there.

Once you start customizing, you begin to realize there are some scenarios where you might want some slightly different packages. For example, maybe you have some code contract assemblies, but you don’t always want them included. Perhaps you have a library and a shell, and you want to give people the option of downloading just the library in one package, but to download both in another.

I have a scenario like that in at least a couple of my projects. I copied my nuspec file, customized it, added it back to my project, then set up a script in the post-build event to create both packages. It worked great the first time I tried it, then failed when I tried to replicate the process in the next project. In both projects, I kept the “easy” path (described below), which builds against the project file and finds the nuspec on its own. In both projects, I used the same scripts and naming conventions. Yet when it came time to create the project-based package, the first project correctly selected the nuspec file that belongs with the project-based package, while the second project consistently selected the nuspec file that belongs with the customized package.
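
The post-build step in both projects looked roughly like this (the names are illustrative, not my exact script):

```bat
rem Build the default package from the project file, then the custom package
rem directly from its nuspec.
nuget pack foo.csproj -Prop Configuration=Release
nuget pack foo.custom.nuspec -Prop Configuration=Release
```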

I searched around for an answer to how NuGet was making its decision, but could not find anything. Maybe I wasn’t using the right search terms, but I decided it would be best to stop being lazy and just go read the source. I have written a few CLI applications over the years, but I have to say I have never designed one as nice as NuGet. Next time I have a CLI to write, I’m going to study NuGet more carefully before writing any code.

As you may have guessed by the title of this post, I did find the answer to my question. You can go read the source, or you can read this post.

(Go ahead and read the source anyway, it is great.)

The Easy Way

The easiest way to get your assembly into a NuGet package is to follow the outline here: Creating and Publishing a Package. First download NuGet.exe then skip down to the section under the heading “From a Project.” Read the page for full details, but it boils down to this once you are used to the process:

  • Open a command window and navigate to your project folder. I will assume your project is called “foo.csproj”.
  • nuget spec
  • notepad foo.nuspec
  • Delete a few lines of junk from the nuspec then save.
  • nuget pack foo.csproj -Prop Configuration=Release

Done. Now you can copy your package to your local package source or upload it to the NuGet Gallery.
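
After the cleanup step, a minimal nuspec looks something like this (the $token$ values are placeholders that NuGet fills in from the project at pack time):

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>$id$</id>
    <version>$version$</version>
    <authors>$author$</authors>
    <description>$description$</description>
  </metadata>
</package>
```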

The Question

Now you want to create a custom nuspec to go with the one that you generated with NuGet.exe. Let’s call it “foo.custom.nuspec.” Now when you run “nuget pack” the process may or may not pick the correct nuspec. If it picks the custom nuspec file, it will build the incorrect package. After all, the premise is that you want both the default package and a custom package. If NuGet does pick the wrong metadata file, how do you get it to change its mind? You could be like me and try a bunch of things.

  • Maybe it’s picking the newest file?
  • Maybe it’s picking the oldest?
  • Shortest file name?
  • Longest?
  • Blah, blah blah, blah blah blah

Actually, I didn’t try all of those. I did try a couple, and then I went to the code.

The Answer

Here is the code that selects the nuspec file in version 1.6:

private string GetNuspec()
{
    return GetNuspecPaths().FirstOrDefault(File.Exists);
}

private IEnumerable<string> GetNuspecPaths()
{
    // Check for a nuspec in the project file
    yield return GetContentOrNone(file => Path.GetExtension(file).Equals(
        Constants.ManifestExtension, StringComparison.OrdinalIgnoreCase));

    // Check for a nuspec named after the project
    yield return Path.Combine(
        _project.DirectoryPath,
        Path.GetFileNameWithoutExtension(_project.FullPath) + Constants.ManifestExtension);
}

We can see right away that the search ends as soon as it finds an existing file with the .nuspec extension.  We can also see that the process prefers to get that file from the project file before it goes looking in the project directory.  The “GetContentOrNone” method looks at the project file, chooses files marked with build action “Content” or “None”, and yields back the matches.  That was the key to figuring out why NuGet picked out the wrong file.

I added my nuspec files to the project for convenience. In one project, I had decided from the start to keep both nuspec files, so I left the original alone, created a copy, renamed it, and added the copy to the project. This caused the new file’s ProjectItem to appear after the original in the csproj xml file.

In the other project, I had only thought that I wanted to customize the nuspec, then realized later that I wanted both. In the second project, I had customized the nuspec that was already there, then generated a new “default” nuspec, then added the new default to the project. This caused the default nuspec file’s ProjectItem to appear after the customized nuspec in the project xml. The order of operations mattered. I opened the misbehaving csproj in a text editor, rearranged the order of the ProjectItems, and tried again. Problem solved.
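
In other words, the fix comes down to item order in the csproj. Something like this (item names are from my example; yours will differ) makes the default nuspec win:

```xml
<ItemGroup>
  <!-- Listed first, so NuGet picks this one when packing the project file -->
  <None Include="foo.nuspec" />
  <!-- The custom nuspec is packed explicitly by the post-build script -->
  <None Include="foo.custom.nuspec" />
</ItemGroup>
```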


For easy reference, here is the search path NuGet takes to choosing the file when the pack target is a project file:

  1. Project Members are preferred
    • NuGet checks items that have “None” or “Content” build actions for the .nuspec extension.
    • The first of these found is used.
  2. If there are no nuspec files in the project, NuGet tries to find a .nuspec file in the project file’s directory with the same name as the project file.

Maybe this information is already out there somewhere, but if it is, I had a hard time finding it. If you have this problem, I hope this helps you.

Coding, Tips, Troubleshooting

Stop Guessing About MEF Composition And Start Testing

Diagnosing MEF Failures | SoCalCodeCamp @ CSU Fullerton | 1/29/2012

I gave my first talk at a Code Camp last weekend. The topic was “Diagnosing MEF Failures” and it was well received by the 5 attendees.  A recording of the talk is embedded at the top of this post, and the direct link is just below, along with the slides and demo code.  This blog post complements the talk by providing a step-by-step guide to testing your MEF composition using ApprovalTests and Microsoft’s CompositionDiagnostics library.

Finally, I revisited this subject in June 2012.  If you only want to know the easiest way to set up composition tests, read MEF Composition Tests, Redux instead.  If you’re interested in knowing how everything works under the hood, read this article first, then read the June article.


Diagnosing MEF composition failure is tricky, especially when parts have two or more nested dependencies.  In this scenario, MEF’s error messages aren’t very helpful.  MEF was unable to satisfy one or more parts in the chain, but the error message points you to the part at the top of the chain; to really fix the problem, you need to know about the bottom of the chain.  If you start by looking where the error message sends you, you can end up feeling like you’re on a wild goose chase because nothing appears to be wrong with that part.

Recognizing this problem, the MEF team created a static diagnostics library as a “Sample”, but when MEF was integrated into the BCL with .NET 4, this useful library never made it in.  You can still get the code from the MEF CodePlex site, along with a shell that allows you to define directory or assembly catalogs from the command line and analyze the parts within.  This shell is useful and flexible (for example, you could use it to examine deployed parts in production without changing any code) but sometimes tedious to use.  An alternate shell (Visual MEFx) brings some nice GUI touches to the concept.  The GUI makes it much easier to play with what-if scenarios while examining the collection of parts.

Either of these tools can be used to analyze real, deployed compositions, even when dealing with parts authored by a third party.  That’s a really powerful concept.  But for some MEF users (myself included), third parties are not a big concern.  I’m using MEF to compose a collection of known parts, so I can say with confidence what the composition should look like.  Sometimes I might want to play out a what-if scenario, but most of the time I just want some assurance that all my parts are there, and therefore all my application’s expected behaviors will be there.

Ideally, I’d like to automate this assurance without relying on a GUI, or trying to spin up a separate process during integration testing.  Finally, it would be great if I could be assured that my composition looked correct without having to write one test for every part that should be in the container.  How I achieve these goals is the subject of this post.


If you don’t know about ApprovalTests you really should take a moment to learn about them.  Read about this library, and be sure to listen to the Herding Code podcast.  You don’t need to understand much about ApprovalTests to follow the recipe I plan to lay out below, but even if you aren’t interested in MEF at all, you should still check out ApprovalTests.

Presumably if you are reading this you are already familiar with MEF, but if not, get started here: “What is MEF?”.  You don’t have to read the whole thing to start getting the point of what MEF is for, maybe just the first few sections.  Follow that with the MEF Programming Guide on CodePlex.

If you would rather read than watch my video, I primarily drew from two references while putting together my talk.  Part 13 of the MEF programming guide is about Debugging and Diagnostics.  Finally, this entry on Daniel Plaisted’s Blog is a great reference which includes background on why MEF fails in general, quite a few corner cases, and an overview of live and static debugging tools.  My talk focuses on using the tools, so read Daniel’s blog if you’re really interested in studying MEF failure.

Since Daniel covers MEFx and Visual MEFx, I won’t discuss them here; instead, I’ll focus on presenting the recipe for testing MEF composition with ApprovalTests.

What You Will Need

  • For starters, you’ll need a project that uses MEF, but since you are here I’ll assume you already have that.
  • You’ll need to download ApprovalTests and reference it in your test project.  There is a version on NuGet, but it’s not always up to date with the latest release from the SourceForge site.  The recipe should work with 1.0.11 or greater, so it’s up to you if you want the latest and greatest.
  • You’ll need a version of the CompositionDiagnostics library.  A quick way to get your hands on a binary copy is to download MEFx.  The MEFx zip contains two files, one is a command line shell, and the other is the library.  You can obtain the source by downloading MEF Preview 9.  Note that there are MEF 2 previews also in the release list, you want the preview from MEF 1.

The CompositionDiagnostics source may be useful if you run into any issues with imports showing up out of order on different machines.  If you have this problem, add this method to the CompositionInfoTextFormatter:

private static string GetPartDefinitionKey(object definition)
{
    if (definition == null)
    {
        return null;
    }

    var compositionElement = definition as ICompositionElement;
    return compositionElement != null ?
        compositionElement.DisplayName :
        definition.ToString();
}

Then update the Write() method to order the PartDefinitionInfos by this “key”.
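
I haven’t reproduced the whole Write() method here, but the ordering change is a short sketch along these lines (member names follow the sample library and may differ slightly in your copy):

// Order the part infos by the display-name key so output is stable across machines.
foreach (var partInfo in compositionInfo.PartDefinitions
    .OrderBy(pd => GetPartDefinitionKey(pd.PartDefinition), StringComparer.Ordinal))
{
    // ...existing per-part formatting code...
}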

Test Setup

I’m using MSTest; it works fine for me.  If you have a favorite test framework, this shouldn’t be too hard to adapt.

If you don’t have a test project for your solution, set one up.  It will be important to keep your test and production catalogs in sync.  You don’t want to fix a problem in test and still have your composition fail in production.  I use directory catalogs, so I keep my catalogs in sync using a simple pre-build event in the test project properties:

copy /y "$(SolutionDir)$(SolutionName)\bin\x86\Debug\*.dll" "$(TargetDir)"

Add references to ApprovalTests.dll and ApprovalUtilities.dll.

Unzip the MEFx zip file, or compile the code yourself.

Grab the Microsoft.ComponentModel.Composition.Diagnostics.dll file, and place it near your solution (a lib folder would be a good option) then take a reference to the library.

Add a test class to the solution and call it whatever makes sense to you, I’ll use the name “IntegrationTest” in my example.

Import these namespaces into your IntegrationTest file.

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.IO;
using System.Reflection;
using ApprovalTests;
using ApprovalTests.Reporters;
using Microsoft.ComponentModel.Composition.Diagnostics;

Add a UseReporterAttribute to your test class.   DiffReporter is my favorite:

[TestClass]
[UseReporter(typeof(DiffReporter))]
public class IntegrationTest

The CompositionDiagnostics library provides us with a few new types.  The most interesting to me is the CompositionInfo class.  This is where the magic happens.  We create a catalog and a container as usual, then instantiate an instance of CompositionInfo, passing the catalog and container into the constructor.  I find it useful to put this into its own method, since I usually have at least two integration tests.

private static CompositionInfo GetCompositionInfo()
{
    var catalog = new DirectoryCatalog(".");
    var host = new CompositionContainer(catalog);
    var compositionInfo = new CompositionInfo(catalog, host);
    return compositionInfo;
}

Now you can add a test method.  Here’s the code, followed by an explanation:

[TestMethod]
public void DiscoverParts()
{
    try
    {
        using (var stringWriter = new StringWriter())
        {
            CompositionInfoTextFormatter.Write(GetCompositionInfo(), stringWriter);
            Approvals.Approve(stringWriter.ToString());
        }
    }
    catch (ReflectionTypeLoadException ex)
    {
        // Dump the interesting details before letting the exception escape.
        Array.ForEach(
            ex.LoaderExceptions,
            lex => Console.WriteLine(lex.ToString()));
        throw;
    }
}

The CompositionDiagnostics library provides another useful class, the CompositionInfoTextFormatter, which we can use to create a text representation of the data in the CompositionInfo.  The CompositionInfo class does all the heavy lifting.  We use the method Approvals.Approve to verify the text representation; more on this later, for now it’s enough to know that this line takes the place of an “Assert” call in an ordinary unit test.

This test also catches the ReflectionTypeLoadException. Type loading problems can cause your test to fail when the CompositionInfo is under construction. Therefore, you would never reach the ApprovalTest. MSTest will catch this and dump the exception in the test result, but the really interesting information on that exception is in the LoaderExceptions property, which isn’t shown by default. So, in the catch block each loader exception is dumped to the console before allowing the exception to go on its merry way. This saves the extra step of rerunning the test under the debugger just to drill into the LoaderExceptions.

That’s it for the first test; if you’ve been following along, you should be able to run it.

The Results

When you run the test for the first time, it will fail.  This is not unusual for ApprovalTests.  The ApprovalTests library needs “approved” output to verify the test output against.  Since we used the DiffReporter, you are notified of the failure when a diff tool launches showing output on one side and a blank slate on the other.  (If nothing happens, you probably need TortoiseSVN, which provides the default diff tool.  You can switch to NotepadLauncher if you decide you don’t like the idea of installing TortoiseSVN just for this purpose.  Other reporters are supported, but that’s outside the scope of this entry.)


You can flip the left and right sides to make it easier to read.

So what is this?  Briefly, the diagnosis begins by looking at the Parts property on the catalog, examining the metadata for each, and eventually tries to compose each part defined in the catalog.  This will uncover any problems with the composition by throwing exceptions.  The analyzer catches the exceptions, and stores them in a data structure along with the other metadata.  Any parts that fail for reasons not related to importing other parts have the potential to be the root cause of the composition failure.  Each will be marked as a [Primary Rejection] by the text formatter.  Exceptions and the other metadata will also be printed with each part.

That’s the sad path.  On the happy path, your composition succeeds and each part is still listed, along with some metadata and information about which parts satisfied nested imports, if any.  So, in our test, we get the CompositionInfo, and format the data using the CompositionInfoTextFormatter.  The output is captured by a StringWriter, and passed to Approvals.Approve() for comparison.  If the output is correct, we should see no exceptions listed, no rejections, and each part [SatisfiedBy] the exports we expected to be imported.  If so, we can right click on the “received” file and choose “Use this whole file”.  Save.  It is now approved.  Rerunning the test should result in a pass.

If the composition has problems, we need to work on those problems until the output looks right, just as we would in any other TDD scenario.  Once it is right, approve it so it can be checked in the future.  Whenever any change is made to the composition (expected or not) this test will fail and let you know that you either need to approve the new composition, or resolve the unexpected failure.

Since the ReflectionTypeLoadException causes the test to fail before the approval stage, you should look at the trace output or test results when these occur.  Your loader exceptions will likely be MissingMethodExceptions with messages explaining that certain methods have no implementation.  Whichever part is meant to provide those implementations cannot load, and you may have to do some detective work to figure out why.  Often I find that these problems are related to parts expecting different versions of the same assemblies, or that the parts refer to assemblies that are unavailable.

Another Test

I usually have this next test in my IntegrationTest as well.

[TestMethod]
public void InstantiateService()
{
    try
    {
        var services = GetCompositionInfo().Host.GetExportedValues<IPizzaMaker>();
        Approvals.VerifyAll(
            services,
            "PizzaMakers",
            s => s.GetType().FullName);
    }
    catch (ReflectionTypeLoadException ex)
    {
        Array.ForEach(
            ex.LoaderExceptions,
            lex => Console.WriteLine(lex.ToString()));
        throw;
    }
}

I look at it now and I wonder why.  This test will grab all instances of a certain type of part, compose them, and return them.  ApprovalTests will approve the list.  This doesn’t really do anything new compared to the full dump of the CompositionInfo in DiscoverParts.

I think that over time I must have become overzealous with the Don’t Repeat Yourself principle and used the GetCompositionInfo() method when I shouldn’t have.  The original form of this test was probably this:

[TestMethod]
public void InstantiateService()
{
    try
    {
        var catalog = new DirectoryCatalog(".");
        var host = new CompositionContainer(catalog);
        var services = host.GetExportedValues<IPizzaMaker>();
        Approvals.VerifyAll(
            services,
            "PizzaMakers",
            s => s.GetType().FullName);
    }
    catch (ReflectionTypeLoadException ex)
    {
        Array.ForEach(
            ex.LoaderExceptions,
            lex => Console.WriteLine(lex.ToString()));
        throw;
    }
}

So, what’s the difference/use of this second version?  In the version that uses GetCompositionInfo, the container and catalog construction happen inside the method, then are passed to a CompositionInfo.  Later, the test just used the container (accessed through the Host property) to ask for the IPizzaMaker instances.  So the construction of the CompositionInfo was a waste, but you might live with that for the sake of staying DRY.  However, since we know that the CompositionInfo can die during construction, it’s more than just a waste; it’s a potential source of test failure that has nothing to do with the ability to instantiate IPizzaMakers.

In the second version we construct the host without bothering with the CompositionInfo, and we avoid ReflectionTypeLoadExceptions unless they have something to do with IPizzaMaker.  So this test might tell you something interesting if making IPizzaMakers was your primary concern, but it won’t give you the type of root cause analysis provided in the CompositionDiagnostics output.  So using both can potentially give you a smidge more insight into your composition.

If you reuse GetCompositionInfo to access the host, it tends to fail in concert with the DiscoverParts test, and so tells you nothing new.


In MEF 2, the InstantiateService test has the potential to take center stage, while the DiscoverParts test will be more of a niche player.  This is because the MEF team has made big improvements to the baked-in diagnostics and exception messages.  First of all, MEF always knew what the root cause was; it just failed to tell you about it in the error message.  In MEF 2 Preview 5, the composition exception indicates the primary rejection at the top of a chain of failures leading all the way down to the part you asked for.

By default, the container will still wait for you to ask for a part that requires the rejected part (or the rejected part itself) before any exception is thrown.  This is part of MEF’s “Stable Composition” feature.  You can disable this feature by passing the CompositionOptions.DisableSilentRejection option to the container’s constructor.  The result will be that MEF will throw an exception as soon as it rejects the part.  You will definitely want to use this option on the container used for integration tests, and if you aren’t working with third parties, disabling silent rejection is recommended on your production container as well.
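
In code, that option is just an extra constructor argument (this sketch assumes the MEF 2 / .NET 4.5 CompositionContainer overload):

```csharp
// Throw at composition time instead of waiting for the first export request.
var catalog = new DirectoryCatalog(".");
var container = new CompositionContainer(
    catalog, CompositionOptions.DisableSilentRejection);
```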

When you run InstantiateService with silent rejection disabled, you should get an exception if any part in the container would be rejected, not just the IPizzaMakers.  This allows InstantiateService to detect a larger class of failures in MEF 2 compared to MEF 1.

The DiscoverParts test will still have its niche in MEF 2.  This test will remain useful in detecting missing optional parts.  A part that is optional and missing will not throw an exception even when silent rejection is disabled.  Although other parts want to import the optional parts, they don’t “depend” on them to operate, since classes that import optional parts should all be able to handle the case where the parts aren’t available.  Since it’s missing (meaning not in the catalog), there’s no way it could have a problem itself and throw an exception.

So if the part is optional, should your test pass or fail?  It’s up to you to decide, but remember that the whole premise of this test strategy is that you already know what the composition should look like.  You could have a collection of three services you want hosted that work together in an assembly-line fashion. You might want to reserve the right to increase the number of steps in the assembly line in the future, so you use [ImportMany] to get the whole collection.  Now you’ve just made your imports optional, but do you really want to deploy your application if service 2 of 3 is missing?  The DiscoverParts test can detect this for you, but “noisy” rejection can’t.  It’s a niche, but it’s a niche I find myself in, so I think the DiscoverParts test will remain useful even after MEF 2 arrives.
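
As a concrete example of that niche, an import like this one composes successfully whether the catalog holds three steps or two (IAssemblyLineStep is a hypothetical contract, not from any of my projects):

```csharp
// An empty or partial collection satisfies this import, so no rejection occurs;
// only the DiscoverParts approval will notice a missing step.
[ImportMany]
public IEnumerable<IAssemblyLineStep> Steps { get; set; }
```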


Really, I just hope that someone will find this idea useful.  Feel free to run with it and let me know if you can think of any other strategies for testing composition.  In particular I’d be interested to hear ways that people have found for testing scenarios with third parties or in production.  If you are interested in additional thoughts from me on the subject, please read: MEF Composition Tests, Redux.

Thanks for reading.

Coding, Troubleshooting

Manually Creating Referential Constraints in Entity Framework 4

This entry explains something that’s confused me for a long time.  To prevent myself from forgetting, I’m writing it all down.

I’m working with a legacy database that does not always model everything perfectly.  As usual with databases, I have to make the painful decision to either work around the model or refactor the database.  In particular, there are many foreign key candidates that are not mapped in the database, but would be really nice if I had them available in EF.  I “know” that EntityA.Key and EntityB.Key represent the same record, and I’ve always been confused about how to let EF know what I know without actually going into the database and creating the FK.

Consider these two entities:


The BatchRecord entity represents a production record, and the AuditRecord entity provides information related to a quality review process.  Not all records will be audited, but no record can be audited unless it went through production first.  AuditedRecordId is populated with the BatchRecordId for the record selected for audit.  So, there will always exist a BatchRecord entity with the same id, but there might not be an AuditRecord entity for each BatchRecord entity.  Since AuditedRecordId is a key, each BatchRecord can only be audited one time.  So, this relationship has a zero or one cardinality.

I know all this, but there is no foreign key enforcing this on the SQL side.  I want Entity Framework to know about this relationship, so that I can use navigation properties.  So how to do it?
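
For reference, if refactoring the database were an option, the constraint I’m describing would look something like this in T-SQL (table names assumed from the entity names):

```sql
-- Zero-or-one: AuditedRecordId is both the primary key of AuditRecords
-- and a foreign key into BatchRecords.
ALTER TABLE AuditRecords
  ADD CONSTRAINT FK_AuditRecords_BatchRecords
  FOREIGN KEY (AuditedRecordId) REFERENCES BatchRecords (BatchRecordId);
```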

Add An Association

I know Code First is the new hotness, but I’m not actually in a code first situation.  I have all kinds of legacy database problems, like tables that don’t follow conventions that would make it easy for Code First to figure out what’s going on.  So I tend to port things into the designer then clean up the mess.

So, poking around the designer we see that we can add things to the model, and one of the items we can add is an “Association”.  Since FKs are automatically mapped to associations, this seems promising.  Let’s try it.


Behold the Add Association dialog:


With a minimal amount of poking around, we can figure this one out.  I’ll leave the name alone and use the two dropdowns labeled “Entity” to point at AuditRecord and BatchRecord.  Then I’ll use the multiplicity to set BatchRecord’s multiplicity to “One”, and AuditRecord’s multiplicity to “Zero or One”.  I’ll leave the rest as is.



At this point, you might be forgiven for thinking, “Hey this isn’t so hard.”  Don’t worry, this cruise ship is about to run aground.  After clicking Ok, everything seems fine.

Look Ma, I made an association!


Unfortunately, the association is a lie.  I could try to build, but we can fail faster by right clicking the model and selecting “Validate.”

Error 11008

(Hint: the steps described in this section don’t work.  If you want the solution, scroll down to the section that starts Don’t Give Up Yet.)

When trying to validate this model, Entity Framework produces “Error 11008: Association ‘BatchRecordAuditRecord’ is not mapped.”  Huh?  Let’s Google that.

We find this MSDN page:

Entity Designer Errors

Error 11008: Association Is Not Mapped

This error occurs when an association in the conceptual model is not mapped to the data source. To resolve this error, map the association to the data source. For more information, see How to: Create and Edit Association Mappings (Entity Data Model Tools).

And clicking the link we find these instructions:

To create an association mapping

  1. Right-click an association in the design surface and select Table Mapping.

    This displays the association mapping in the Mapping Details window.


  2. Click Add a Table or View.

    A drop-down list appears that includes all the tables in the storage model.


  3. Select the table to which the association will map.

    The Mapping Details window displays both ends of the association and the key properties for the entity type at each End.


  4. For each key property, click the Column field, and select the column to which the property will map.

As of this writing 0 of 2 people found these instructions helpful.  I’m one of the two.  Feel free to visit MSDN and also find it not helpful.

For the sake of completeness, I’ll go ahead and follow their advice.  After right-clicking on the Association and selecting Table Mapping, I added the table backing BatchRecord to the mapping details.  Seems strange to map a referential constraint to a single table, but they say EF has a steep learning curve, right?  Some strangeness is to be expected.

Now to map each key.  I select the BatchRecord table key as the column to map BatchRecordId to.  Again, seems strange since Entity Framework already knows that BatchRecordId is mapped to that column.  But strangeness aside, it seems to work, so far so good.

Now to map AuditedRecordId to its column.  Hmm.  Seems that I have a problem.  The “Column” drop-down only lists the columns from BatchRecord’s table.  Where are the columns for the table backing AuditRecord?

Maybe I can add another table to this mapping… nope.

Maybe I’m supposed to map it to the same column on BatchRecord’s table?  Seems strange but the designer accepts that.

Ok so that was a little rough, but it all worked out, right?  Better Validate again.  Dang!  Now Entity Framework produces “Error 3021: Problem in mapping fragments starting at line 331:Each of the following columns in table BatchRecords is mapped to multiple conceptual side properties…” and goes on to complain about using the key from BatchRecord’s table twice.

Maybe I’m supposed to leave that end unmapped.  Double Dang!  Now validation produces this message “Error 11010: Association End ‘AuditRecord’ is not mapped.”

Maybe it’s time to dust off LINQ to SQL?  This was easier over there.

Don’t Give Up Yet

I wish I could point to some brilliant piece of documentation or blog post that pointed the way to the promised land.  But I can’t.  I found the answer to my problem through dumb luck.  While futzing around with the association properties I noticed this:


And said to myself, “I wonder what that is?”  Clicking in the property field reveals a button with an ellipsis in it, and clicking that takes you to the land of milk and honey.  But before I do that, I delete my half-baked Table Mapping by clicking on “Maps to BatchRecords” and selecting “<Delete>” from the drop down.  Now I head back to that property sheet and click the “…” button.

Behold, the Referential Constraint Dialog:


This dialog does exactly what you want it to do.  I choose BatchRecord as the principal, and AuditRecord is automatically selected as the dependent.  BatchRecordId is automatically chosen as the Principal Key.  I use the dropdown to set AuditedRecordId as the Dependent Property.



After I click OK, Visual Studio fills in the property sheet.  I also notice that when I right click on the Association, “Table Mapping” is no longer offered.  I knew it seemed weird to map an association to a table.  More importantly, Validation completes without errors.
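
Under the hood, the dialog writes a ReferentialConstraint element into the conceptual model, which is why no separate table mapping is needed.  The result in the edmx looks something like this (the namespace and role names will vary with your model):

```xml
<Association Name="BatchRecordAuditRecord">
  <End Role="BatchRecord" Type="Model.BatchRecord" Multiplicity="1" />
  <End Role="AuditRecord" Type="Model.AuditRecord" Multiplicity="0..1" />
  <ReferentialConstraint>
    <Principal Role="BatchRecord">
      <PropertyRef Name="BatchRecordId" />
    </Principal>
    <Dependent Role="AuditRecord">
      <PropertyRef Name="AuditedRecordId" />
    </Dependent>
  </ReferentialConstraint>
</Association>
```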

That Felt Nice

The Referential Constraint dialog felt nice.  In three clicks I set up the association and I never once felt dirty or stupid.  The dialog did exactly what I expected it to (I think… I’ll have to test it tomorrow.)  The only problem with the dialog is that it’s buried in the property sheet and Error 11008 leads you down the wrong path.  I’m not sure what scenario the MSDN documentation is talking about, but it seems pretty unhelpful.

I could be completely wrong about this, but I’ll know pretty soon if I am.