Saturday, July 13, 2024

Autoincrementing and Deploying Dataverse Plugin Package

How to Automate Incrementing and Deploying Dataverse Plugin Packages

First things first, this isn’t an article about how to set up an ALM process.  It’s for local dev, when you want to quickly build and deploy changes with the fewest clicks possible. With that out of the way, let’s get started:

Microsoft recently introduced the ability to create Dataverse Plugin Packages, which allow dependent assemblies to be included with a plugin.  They can be annoying to deploy to dev for testing, though, because each time the plugin package NuGet file is built it gets a new assembly version in the file name, and uploading it to Dataverse via the Plugin Registration Tool requires browsing to and selecting the new package every time.  That adds quite a few extra clicks, which can be removed with this semi-hacky workaround.

How It Works

  1. In Visual Studio, the process starts by building the plugin project in the DevDeploy configuration
    1. Via a property, the csproj skips generating the NuGet package during the normal build
  2. An MSBuild Target parses the FileVersion property and increments the in-memory revision by one, i.e. <FileVersion>1.5.12.24</FileVersion> becomes <FileVersion>1.5.12.25</FileVersion>.  This does not actually update the csproj file itself, which is handled by the next step
  3. A Post-Build Event runs that:
    1. Calls an exe to update the FileVersion in the csproj file
    2. Deletes old nupkg files
    3. Runs dotnet pack to create the NuGet package; since the csproj FileVersion has already been updated, the package gets the correct version
    4. Runs the PAC CLI to select the correct Auth for the Org
    5. Runs the PAC CLI to push the plugin package


How to Implement


Prerequisites:

  1. Visual Studio contains a Plugin Project using the newer "Microsoft.NET.Sdk" csproj format that successfully builds a Plugin Package nupkg file.
  2. The PAC CLI has been installed, PAC AUTH has been run to create an Auth Connection, and a name has been assigned to it.
  3. The Plugin Package has already been built and deployed to the Dataverse Instance, and the PackageId has been recorded.
  4. Downloaded, unblocked, and extracted the VersionUpdater to a “VersionUpdater” folder in the directory of the solution (changes to where this is located will require changes to the Post-Build script in Step 3).  This tool accepts a command line argument of the csproj path so it can find the file version and update it; a rough sketch of what it does is shown below.
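
If you would rather write your own tool than download one, here is a minimal sketch of what such a version updater might look like.  It approximates the behavior described above and the output shown in Step 4; it is not the actual VersionUpdater source, so treat it as a starting point only:

// Hypothetical approximation of the VersionUpdater tool described above (not its actual source).
// Usage: VersionUpdater.exe Increment --project "C:\path\to\Plugin.csproj"
using System;
using System.IO;
using System.Text.RegularExpressions;

public static class Program
{
    public static int Main(string[] args)
    {
        if (args.Length < 3 || args[0] != "Increment" || args[1] != "--project")
        {
            Console.Error.WriteLine("Usage: VersionUpdater.exe Increment --project <csproj path>");
            return 1;
        }

        var path = args[2];
        var text = File.ReadAllText(path);

        // Find <FileVersion>major.minor.build.revision</FileVersion> and bump the revision by one.
        var updated = Regex.Replace(
            text,
            @"<FileVersion>(\d+\.\d+\.\d+)\.(\d+)</FileVersion>",
            m =>
            {
                var prefix = m.Groups[1].Value;
                var revision = int.Parse(m.Groups[2].Value);
                Console.WriteLine($"Updating Version from {prefix}.{revision} to {prefix}.{revision + 1}.");
                return $"<FileVersion>{prefix}.{revision + 1}</FileVersion>";
            });

        File.WriteAllText(path, updated);
        return 0;
    }
}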


Step 1: Create DevDeploy Build Configuration

  1. Right click on the solution file.
  2. Select Properties
  3. Click the Configuration Button
  4. In the “Active solution configuration” drop down select <New>
  5. Enter a name of DevDeploy
  6. Select to “Copy settings from” Release
  7. Ensure “Create new project configurations” is checked.
  8. Click OK to create the new DevDeploy solution build configuration


Step 2: Edit the Plugin’s csproj file

Add a Property group in the Plugin’s csproj (Prerequisite 1) with the following values:

  • DeploymentConfigurationName: The Solution configuration to use to build the plugin that gets deployed.
  • DeploymentOutDir: The project-relative path to the Deployment Configuration’s output directory.  Should always be bin\$(DeploymentConfigurationName)\
  • DeploymentPacAuthName: The name of the deployment Auth to use (Prerequisite 2)
  • GeneratePackageOnBuild: Set to false.  Prevents the default building of the NuGet Plugin Package until after the version has been incremented.  This should also decrease build times for non-deployment builds.
  • PluginPackageId: The GUID of the plugin package (Prerequisite 3)

Add a Target to Update the value of FileVersion so the plugin assembly is built with the new version.  The end result should be the following items added to the csproj:

<!-- Plugin Package Deployment Settings -->
<PropertyGroup>
  <DeploymentConfigurationName>Release</DeploymentConfigurationName>
  <DeploymentOutDir>bin\$(DeploymentConfigurationName)\</DeploymentOutDir>
  <DeploymentPacAuthName>Acme Dev</DeploymentPacAuthName>
  <GeneratePackageOnBuild>false</GeneratePackageOnBuild>
  <PluginPackageId>2b66504a-f03e-ef11-8409-7c1e520b27e1</PluginPackageId>
</PropertyGroup>
 
<!-- Updates the FileVersion in memory so that the plugin dll is built with the correct version.  MSBuild already has an in-memory copy of the csproj, so updating the file in a pre-build step won't change the assembly version -->
<Target Name="IncrementFileVersion" BeforeTargets="PrepareForBuild" Condition="'$(Configuration)' == 'DevDeploy'">
  <PropertyGroup>
    <FileVersionRevisionNext>$([MSBuild]::Add($([System.String]::Copy($(FileVersion)).Split('.')[3]), 1))</FileVersionRevisionNext>
    <FileVersion>$([System.String]::Copy($(FileVersion)).Split('.')[0]).$([System.String]::Copy($(FileVersion)).Split('.')[1]).$([System.String]::Copy($(FileVersion)).Split('.')[2]).$(FileVersionRevisionNext)</FileVersion>
  </PropertyGroup>
  <Message Text="Setting Plugin Assembly FileVersion to: $(FileVersion) " Importance="high" />
</Target>


Step 3: Set the Post-Build Script

  1. Right click on the Plugin’s csproj file in Visual Studio
  2. Select Properties
  3. Copy and paste the following into the “Post-build event” script:
if $(ConfigurationName) == DevDeploy (
  echo Incrementing Version '$(SolutionDir)VersionUpdater\VersionUpdater.exe Increment --project $(ProjectPath)'
  "$(SolutionDir)VersionUpdater\VersionUpdater.exe" Increment --project "$(ProjectPath)"

  echo Deleting old nupkg file del "$(ProjectDir)$(DeploymentOutDir)*.nupkg" /q
  del "$(ProjectDir)$(DeploymentOutDir)*.nupkg" /q

  echo dotnet pack $(ProjectPath) --configuration $(DeploymentConfigurationName) --output "$(ProjectDir)$(DeploymentOutDir)"
  dotnet pack $(ProjectPath) --configuration $(DeploymentConfigurationName) --output "$(ProjectDir)$(DeploymentOutDir)"

  echo Switching To "$(DeploymentPacAuthName)" Auth Connection
  PAC auth select -n "$(DeploymentPacAuthName)"

  echo *** Pushing Plugin ***
  echo PAC plugin push -id $(PluginPackageId) -pf "$(ProjectDir)$(DeploymentOutDir)$(TargetName).$(FileVersion).nupkg"
  PAC plugin push -id $(PluginPackageId) -pf "$(ProjectDir)$(DeploymentOutDir)$(TargetName).$(FileVersion).nupkg"
)


Step 4: Deploy!

  1. Select the Visual Studio Build Configuration of DevDeploy.
  2. Build the Plugin Project
  3. Watch the Build Output for any errors or a successful deployment message:

Build started at 11:30 AM...
1>------ Build started: Project: Acme.Dataverse.Plugin, Configuration: DevDeploy Any CPU ------
1>Setting Plugin Assembly FileVersion to: 1.0.1.79
1>Acme.Dataverse.Plugin -> C:\_dev\Acme\Acme.Dataverse.Plugin\bin\DevDeploy\Acme.Dataverse.Plugin.dll
1>Incrementing Version 'C:\_dev\Acme\CodeGeneration\VersionUpdater.exe Increment --project C:\_dev\Acme\Acme.Dataverse.Plugin\Acme.Dataverse.Plugin.csproj'
1>Updating Version from 1.0.1.78 to 1.0.1.79.
1>Deleting old nupkg file del "C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\*.nupkg" /q
1>dotnet pack C:\_dev\Acme\Acme.Dataverse.Plugin\Acme.Dataverse.Plugin.csproj --configuration Release --output "C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\"
1>MSBuild version 17.9.8+610b4d3b5 for .NET
1>  Determining projects to restore...
1>  Restored C:\_dev\Acme\Acme.Dataverse.Plugin\Acme.Dataverse.Plugin.csproj (in 564 ms).
1>  1 of 2 projects are up-to-date for restore.
1>  Acme.Dataverse -> C:\_dev\Acme\Acme.Dataverse\bin\Release\Acme.Dataverse.dll
1>  Acme.Dataverse.Plugin -> C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\Acme.Dataverse.Plugin.dll
1>  Acme.Dataverse.Plugin -> C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\publish\
1>  The package Acme.Dataverse.Plugin.1.0.1.79 is missing a readme. Go to https://aka.ms/nuget/authoring-best-practices/readme to learn why package readmes are important.
1>  Successfully created package 'C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\Acme.Dataverse.Plugin.1.0.1.79.nupkg'.
1>Switching To "Acme Dev" Auth Connection
1>New default profile:
1>    * UNIVERSAL Acme Dev                    : daryl@Acme.com                   Public https://acme-dev.crm.dynamics.com/
1>
1>*** Pushing Plugin ***
1>PAC plugin push -id 2b66504a-f03e-ef11-8409-7c1e520b27e1 -pf "C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\Acme.Dataverse.Plugin.1.0.1.79.nupkg"
1>Connected as daryl@Acme.com
1>Connected to... Acme Dev
1>
1>Updating plug-in package C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\Acme.Dataverse.Plugin.1.0.1.79.nupkg
1>
1>Plug-in package was updated successfully
1>Acme.Dataverse.Plugin -> C:\_dev\Acme\Acme.Dataverse.Plugin\bin\DevDeploy\publish\
1>Done building project "Acme.Dataverse.Plugin.csproj".
========== Build: 1 succeeded, 0 failed, 1 up-to-date, 0 skipped ==========
========== Build completed at 11:31 AM and took 22.547 seconds ==========


Enjoy not having to manually deploy!

Monday, March 20, 2023

Separating Plugin Logic: A Guide to Testing Dataverse Plugins with IOC

I’m not a pure TDD developer.  I frequently take my best guess at a Dataverse plugin, then apply TDD until everything works.  This can lead to situations where my “rough draft” plugin is complete, but when I go to write my first test, I realize that I have a lot to test, and that’s going to be very painful.  The solution is to restructure your plugin code so you can test each piece of logic independently.  I ran into having to do this recently and decided that a guide to what I do might be helpful to others.  So, if you ever find yourself in this situation and need a little help, this is the guide for you!

Background

The business requirement in my example is to create a “Total Fees” record per contact per year, containing the sum of the fees from the grandchild records, where the year is determined by the connecting child record.  This resulted in a data model like this:


The plugin would trigger a recalc of fees for a contact if:

  1. A grandchild was added
  2. A grandchild was removed
  3. A grandchild’s fees were updated
  4. A child was added
  5. A child was removed
  6. A child’s year was updated

And this is still a simplistic view, since there are plenty of situations where a change shouldn’t trigger a recalc (like the fees being updated from null to 0, or a fee getting added when there is no child id, etc.).  For now, let’s abstract all of that away as /* logic */, which gives us these methods in the plugin.  The “OnX” methods are called automatically from Execute by the plugin base class depending on the context, and each “OnX” method calls the RecalcTotalsForContact method:

private void OnGrandchildChange(ExtendedPluginContext context) { /* logic */ }

private void OnGrandchildCreate(ExtendedPluginContext context) { /* logic */ }

private void OnChildChange(ExtendedPluginContext context) { /* logic */ }

private void OnChildCreate(ExtendedPluginContext context) { /* logic */ }

private void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year)
{
    context.Trace("Triggering Recalc for Contact {0}, and Year {1}.", contactId, year);

    var yearStart = new DateTime(year, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
    var nextYearStart = yearStart.AddYears(1);
    var qe = QueryExpressionFactory.Create<Acme_Grandchild>(v => new { v.Acme_Fees });
    qe.AddLink<Acme_Child>(Acme_Grandchild.Fields.Acme_ChildId, Acme_Child.Fields.Id)
        .WhereEqual(
            Acme_Child.Fields.Acme_ContactId, contactId,
            new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.GreaterEqual, yearStart),
            new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.LessThan, nextYearStart));

    var totalFees = context.SystemOrganizationService.GetAllEntities(qe).Sum(v => v.Acme_Fees.GetValueOrDefault());
    var upsert = new Acme_ContactTotal
    {
        Acme_ContactId = new EntityReference(Contact.EntityLogicalName, contactId),
        Acme_Name = year + " Net Fees",
        Acme_Total = new Money(totalFees),
        Acme_Year = year.ToString()
    };
    upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_ContactId, contactId);
    upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_Year, year.ToString());

    context.SystemOrganizationService.Upsert(upsert);
}

Separating The Logic

When testing, we want to be able to test the “OnX” methods separately from the actual calculation logic in RecalcTotalsForContact.  In order to do that, we need to be able to inject the calculation logic into the plugin, so that the tests can run with a mock object that verifies RecalcTotalsForContact was called correctly, while the actual logic runs on the Dataverse server.

There are 100 different ways to inject the logic into the plugin, but one of the simplest is to encapsulate the RecalcTotalsForContact logic behind an interface and inject it into the IServiceProvider that is already part of the plugin infrastructure.  Using this approach, the first step is to encapsulate the logic into an IContactTotalCalculator interface (some purists will never put the interface and the implementation in the same file, but if you’re only ever going to have one implementation, IMHO keeping them together makes finding the implementation much simpler):

public interface IContactTotalCalculator
{
    void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year);
}

public class ContactTotalCalculator : IContactTotalCalculator
{
    public void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year)
    {
        context.Trace("Triggering Recalc for Contact {0}, and Year {1}.", contactId, year);

        var yearStart = new DateTime(year, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
        var nextYearStart = yearStart.AddYears(1);
        var qe = QueryExpressionFactory.Create<Acme_Grandchild>(v => new { v.Acme_Fees });
        qe.AddLink<Acme_Child>(Acme_Grandchild.Fields.Acme_ChildId, Acme_Child.Fields.Id)
            .WhereEqual(
                Acme_Child.Fields.Acme_ContactId, contactId,
                new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.GreaterEqual, yearStart),
                new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.LessThan, nextYearStart));

        var totalFees = context.SystemOrganizationService.GetAllEntities(qe).Sum(v => v.Acme_Fees.GetValueOrDefault());
        var upsert = new Acme_ContactTotal
        {
            Acme_ContactId = new EntityReference(Contact.EntityLogicalName, contactId),
            Acme_Name = year + " Net Fees",
            Acme_Total = new Money(totalFees),
            Acme_Year = year.ToString()
        };
        upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_ContactId, contactId);
        upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_Year, year.ToString());

        context.SystemOrganizationService.Upsert(upsert);
    }
}

Then update the plugin to get the IContactTotalCalculator from the ServiceProvider, defaulting to the ContactTotalCalculator implementation if none is registered (which will be the case on the Dataverse server):

private void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year)
{
    var calculator = context.ServiceProvider.Get<IContactTotalCalculator>() ?? new ContactTotalCalculator();
    calculator.RecalcTotalsForContact(context, contactId, year);
}

With this simple change, the ContactTotalCalculator is now completely separate from the plugin and can be tested on its own with ease!  The plugin triggering logic can also be tested independently of the actual recalculation logic, but there are a few more steps required.  Here is a test helper method for the grandchild logic that can be called multiple times with different pre-images and targets, along with the expected children that should be triggered for recalculation:

private static void TestRecalcTriggered(
    IOrganizationService service,
    ITestLogger logger,
    MessageType message,
    Acme_Grandchild preImage,
    Acme_Grandchild target,
    string failMessage,
    params Acme_Child[] triggeredChildren)
{
    // CREATE LOGIC CONTACT TOTAL CALCULATOR MOCK THAT ACTUALLY DOES NOTHING
    var mockCalculator = new Moq.Mock<IContactTotalCalculator>();
    var plugin = new SumContactFeesPlugin();
    var context = new PluginExecutionContextBuilder()
        .WithFirstRegisteredEvent(plugin, p => p.EntityLogicalName == Acme_Grandchild.EntityLogicalName
                                               && p.Message == message)
        .WithTarget(target);
    if (preImage != null)
    {
        context.WithPreImage(preImage);
    }

    var serviceProvider = new ServiceProviderBuilder(service, context.Build(), logger)
        .WithService(mockCalculator.Object).Build(); // INJECT MOCK INTO SERVICE PROVIDER

    //
    // Act
    //
    plugin.Execute(serviceProvider);

    //
    // Assert
    //
    foreach (var triggeredChild in triggeredChildren)
    {
        mockCalculator.Verify(m =>
                m.RecalcTotalsForContact(It.IsAny<IExtendedPluginContext>(), triggeredChild.Acme_ContactId.Id, triggeredChild.Acme_Year.Year),
            failMessage);
    }

    // VERIFY MOCK CALLED THE EXPECTED # OF TIMES
    try
    {
        mockCalculator.VerifyNoOtherCalls();
    }
    catch
    {
        Assert.Fail(failMessage);
    }
}

Please note that I’m using Moq as my mocking framework and XrmUnitTest for my ServiceProviderBuilder.  You can use any mocking framework and Dataverse testing framework that you’d like; they’ll all support the same approach with similar effort.  The key concept is to inject the mock implementation into the IServiceProvider passed to the IPlugin Execute method, and then verify that it was called the correct number of times with the correct arguments.
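
As an illustration, a hypothetical test that uses the helper above might look like the following.  The entity field values, and the service and logger variables, are assumptions (in XrmUnitTest they would typically come from your test base class or builders), so treat this as a sketch rather than a copy/paste test:

[TestMethod]
public void SumContactFeesPlugin_GrandchildFeeChange_Should_TriggerRecalc()
{
    // Hypothetical data - attribute types are assumed from the calculator code above
    // (Acme_Fees behaves like a nullable decimal, Acme_Year like a DateTime).
    var contactId = Guid.NewGuid();
    var child = new Acme_Child
    {
        Id = Guid.NewGuid(),
        Acme_ContactId = new EntityReference(Contact.EntityLogicalName, contactId),
        Acme_Year = new DateTime(2023, 1, 1, 0, 0, 0, DateTimeKind.Utc)
    };
    var preImage = new Acme_Grandchild
    {
        Id = Guid.NewGuid(),
        Acme_ChildId = child.ToEntityReference(),
        Acme_Fees = 100m
    };
    var target = new Acme_Grandchild
    {
        Id = preImage.Id,
        Acme_Fees = 150m
    };

    // Depending on your trigger logic, the child may also need to exist in the test org service
    // so the plugin can resolve the contact and year from the grandchild's parent.
    service.Create(child);

    TestRecalcTriggered(
        service,
        logger,
        MessageType.Update,
        preImage,
        target,
        "Changing a grandchild's fees should trigger a recalc for the parent child's contact and year.",
        child);
}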

Thursday, January 5, 2023

How to Filter Dates in Canvas Apps Using Greater Than/Less Than Operators

Defining the Problem

Recently I was attempting to filter an on-premises SQL table by a DateTime field using a “greater than” operator and display the results in a Data Table control.  When I applied the “greater than” condition to my filter, it would return 0 results.  The crazy thing was I wasn’t seeing any errors.  So I turned on the Monitor tool and took a look at the response of the getRows request:

{
  "duration": 1130.2,
  "size": 494,
  "status": 400,
  "headers": {
    "Cache-Control": "no-cache,no-store",
    "Content-Length": 494,
    "Content-Type": "application/json",
    "Date": "Thu, 05 Jan 2023 13:36:12 GMT",
    "expires": -1,
    "pragma": "no-cache",
    "strict-transport-security": "max-age=31536000; includeSubDomains",
    "timing-allow-origin": "*",
    "x-content-type-options": "nosniff",
    "x-frame-options": "DENY",
    "x-ms-apihub-cached-response": true,
    "x-ms-apihub-obo": false,
    "x-ms-connection-gateway-object-id": "c29ec50d-0050-4470-ac93-339c4b208626",
    "x-ms-request-id": "e127bd54-0038-4c46-9a31-ce94547c226c",
    "x-ms-user-agent": "PowerApps/3.22122.15 (Web AuthoringTool; AppName=f3d6b68b-f463-43a2-bb2b-b1ea9bd1a03b)",
    "x-ms-client-request-id": "e127bd54-0038-4c46-9a31-ce94547c226c"
  },
  "body": {
    "status": 400,
    "message": "We cannot apply operator < to types DateTimeZone and DateTime.\r\n     inner exception: We cannot apply operator < to types DateTimeZone and DateTime.\r\nclientRequestId: e127bd54-0038-4c46-9a31-ce94547c226c",
    "error": {
      "message": "We cannot apply operator < to types DateTimeZone and DateTime.\r\n     inner exception: We cannot apply operator < to types DateTimeZone and DateTime."
    },
    "source": "sql-eus.azconn-eus-002.p.azurewebsites.net"
  },
  "responseType": "text"
}

Ah, so Power Apps showed no error, but the request returned a 400 status and the body contains the actual error: "We cannot apply operator < to types DateTimeZone and DateTime."  Apparently my DateTime column in SQL does not play well with Power Apps’ DateTime.  After some googling I found some community posts as well:


The Solution

The last community post above suggests trying the DateTimeOffset column type in SQL, and after another round of googling I found a very similar issue described by Tim Leung.  Unfortunately no one documented how to actually make the change, so here I am, documenting it for you, dear reader, as well as future me!  (Please be warned: I’m still not sure how DateTimeOffset plays with other tools/systems, so test first!)

  1. Update the DateTime column in SQL Server:

     ALTER TABLE dbo.<YourTableName>
     ALTER COLUMN <YourDateColumn> datetimeoffset(0) NOT NULL;

     UPDATE dbo.<YourTableName>
     SET <YourDateColumn> = CONVERT(datetime, <YourDateColumn>) AT TIME ZONE <YourTimeZone>;

     /*
     I don't believe there is a Daylight Saving Time option for time zones, but I just happened to be in EST, not EDT, so my last line looked like this:

         SET <YourDateColumn> = CONVERT(datetime, <YourDateColumn>) AT TIME ZONE 'Eastern Standard Time';

     Use SELECT * FROM sys.time_zone_info to find your time zone.
     */

  2. Refresh the data source in the app: in Canvas Apps Studio, click the data source’s options menu and select Refresh.

  3. Reload the app.  I had problems with the Data Table control I was using not applying the timezone offset correctly, and reloading the app seemed to fix the issue.

  4. Voila!


It’s not hard, but it definitely is a headache that I would hope Microsoft will solve.



Friday, July 1, 2022

Enabling or Disabling All Plugin Steps In Dataverse

The Cause

Recently a bug (working by design?) with the PowerPlatform.BuildTools version 0.0.81 caused all my plugin steps to become disabled.  After looking at the Azure DevOps Pipeline output I found this lovely difference between versions .77 and .81:

0.0.77

Import-Solution: MySolution_managed.zip, HoldingSolution: True, OverwriteUnmanagedCustomizations: True, PublishWorkflows: True, SkipProductUpdateDependencies: False, AsyncOperation: True, MaxAsyncWaitTime: 01:00:00, ConvertToManaged: False, full path: D:\a\1\a\MySolution_managed.zip

0.0.81

Calling pac cli inputs: solution import --path D:\\a\\1\\a\\MySolution_managed.zip --async true --import-as-holding true --force-overwrite true --publish-changes true --skip-dependency-check false --convert-to-managed false --max-async-wait-time 60 --activate-plugins false' ]

When this solution was imported, it deactivated all of the plugin steps in my solution (which had over 100).  Manually re-enabling them would have been ugly.  Luckily there is a workaround…


The Fix

  1. If you haven’t already, install the XrmToolBox and set it up to connect to your environment.
  2. Install SQL 4 CDS:
    1. Click Tool Library.
    2. Make sure the display tools check boxes have “Not installed” checked, then find and install the tool.
  3. Open the SQL 4 CDS tool, connecting to your environment.
  4. Execute the following statement to find the id of the plugin assembly that you want to enable all plugin steps for:

     SELECT pluginassemblyid, name FROM pluginassembly ORDER BY name

  5. Find and copy the plugin assembly id you want to enable (I’ve left the values needed to disable plugins commented out, in case that is ever required as well, dear reader) and paste it into the following query:

     UPDATE sdkmessageprocessingstep
     SET statecode = 0, statuscode = 1  -- Enable
     -- SET statecode = 1, statuscode = 2 -- Disable
     WHERE sdkmessageprocessingstepid IN (
         SELECT sdkmessageprocessingstepid
         FROM sdkmessageprocessingstep
         WHERE plugintypeid IN (
             SELECT plugintypeid
             FROM plugintype
             WHERE pluginassemblyid = '95858c14-e3c9-4ef9-b0ef-0a2c255ea6df'
         )
         AND statecode = 1
     )

  6. Execute the query, get a coffee/tea, and let it update all of your steps for you!



Wednesday, April 27, 2022

Using AutoFixture To Create Early Bound Entities

AutoFixture is an open source library used in testing to create objects without having to explicitly set all of the values.  I recently attempted to use it in a unit test to create an instance of an early bound entity and assumed it would be extremely trivial, but boy was I wrong.  At least now you have the “joy” of reading this blog post about it.




The Problem(s)

This is what attempting to use AutoFixture straight out of the box to create an entity looks like:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{   
    var fixture = new Fixture();
    // Fails here:
    // AutoFixture.ObjectCreationExceptionWithPath: AutoFixture was unable to create an instance from System.Runtime.Serialization.ExtensionDataObject,
    // most likely because it has no public constructor, is an abstract or non-public type.
    var contact = fixture.Create<Contact>();
    Assert.IsNotNull(contact.FirstName);
}

The error basically says AutoFixture can’t create the ExtensionDataObject since it does not expose a public constructor.  OK, makes sense.  The simplest thing to do would be to make a fluent Build call and skip the property, but this doesn’t work because other types, like Money, also have the ExtensionData property and would fail in the same way, and manually skipping the ExtensionData property on every object would make AutoFixture virtually worthless.  The solution is to create an ISpecimenBuilder that tells AutoFixture how to create an ExtensionDataObject (in actuality it doesn’t create one, it just sets it to null).  It looks like this:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipExtensionData());
   
    // New error
    // AutoFixture.ObjectCreationExceptionWithPath: AutoFixture was unable to create an instance of type AutoFixture.Kernel.FiniteSequenceRequest
    // because the traversed object graph contains a circular reference. Information about the circular path follows below. This is the correct
    // behavior when a Fixture is equipped with a ThrowingRecursionBehavior, which is the default. This ensures that you are being made aware of
    // circular references in your code. Your first reaction should be to redesign your API in order to get rid of all circular references.
    // However, if this is not possible (most likely because parts or all of the API is delivered by a third party), you can replace this default
    // behavior with a different behavior: on the Fixture instance, remove the ThrowingRecursionBehavior from Fixture.Behaviors, and instead add
    // an instance of OmitOnRecursionBehavior:
    //
    //   fixture.Behaviors.OfType<ThrowingRecursionBehavior>().ToList()
    //       .ForEach(b => fixture.Behaviors.Remove(b));
    //   fixture.Behaviors.Add(new OmitOnRecursionBehavior());
    var contact = fixture.Create<Contact>();

    Assert.IsNotNull(contact.FirstName);
}


public class SkipExtensionData : ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi == null)
        {
            return new NoSpecimen();
        }

        if (typeof(ExtensionDataObject).IsAssignableFrom(pi.PropertyType))
        {
            return null;
        }

        return new NoSpecimen();
    }
}

But once again, a new error is generated, this time a circular reference error.  Extra points to the team at AutoFixture for putting the solution to the issue right in the error message.  But after adding it, more issues still pop up.

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipExtensionData());
    fixture.Behaviors.OfType<ThrowingRecursionBehavior>().ToList()
        .ForEach(b => fixture.Behaviors.Remove(b));
    fixture.Behaviors.Add(new OmitOnRecursionBehavior());

    // Yet another error:
    // System.InvalidOperationException: Sequence contains no elements
    // Stack Trace:
    //   Enumerable.First[TSource](IEnumerable`1 source)
    //   Entity.SetRelatedEntities[TEntity](String relationshipSchemaName, Nullable`1 primaryEntityRole, IEnumerable`1 entities)
    //   Contact.set_ReferencedContact_Customer_Contacts(IEnumerable`1 value) line 6219
    var contact = fixture.Create<Contact>();

    Assert.IsNotNull(contact.FirstName);
}

This is a fun error where setting a related entity collection to an empty collection produces a “Sequence contains no elements” exception.  (Which I could possibly handle in the Early Bound Generator, I guess.)  But it calls out something that, in my opinion, shouldn’t be getting populated at all: child collections of entities.  Only the actual attribute properties of the entity, not the LINQ relationship properties, need to be populated, so we can remove the recursion behavior lines and resolve this issue by tweaking the ISpecimenBuilder to skip these types of properties, which brings us to the first solution that doesn’t throw an exception:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipEntityProperties());

    var contact = fixture.Create<Contact>();

    // Fails!  FirstName is Null
    Assert.IsNotNull(contact.FirstName);
}

public class SkipEntityProperties: ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi == null)
        {
            return new NoSpecimen();
        }

        if (typeof(ExtensionDataObject).IsAssignableFrom(pi.PropertyType))
        {
            return null;
        }

        if (pi.DeclaringType == typeof(Entity))
        {
            return null;
        }

        // Property is for an Entity Class, and the Property has a generic type parameter that is an entity, or is an entity
        if (typeof(Entity).IsAssignableFrom(pi.DeclaringType)
            &&
            (pi.PropertyType.IsGenericType && pi.PropertyType.GenericTypeArguments.Any(t => typeof(Entity).IsAssignableFrom(t))
             || typeof(Entity).IsAssignableFrom(pi.PropertyType)
             )
           )
        {
            return null;
        }

        return new NoSpecimen();
    }
}

It was at this point that I couldn’t understand what was going on.  Why aren’t these values getting populated?  Two hours of debugging later, I finally realized that AutoFixture was setting the AttributeCollection of the Entity to null, effectively wiping out all of the attribute values it had just set.  Some more internet research later, I discovered that there is an OmitSpecimen result that leaves a value untouched!  Armed with this knowledge, the final solution presented itself!

The Solution

This final bit of code will correctly populate the attributes of the early bound entity:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipEntityProperties());

    var contact = fixture.Create<Contact>();

    Assert.IsNotNull(contact.FirstName);
}

public class SkipEntityProperties: ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi == null)
        {
            return new NoSpecimen();
        }

        if (typeof(ExtensionDataObject).IsAssignableFrom(pi.PropertyType))
        {
            return new OmitSpecimen();
        }

        if (pi.DeclaringType == typeof(Entity))
        {
            return new OmitSpecimen();
        }

        // Property is for an Entity Class, and the Property has a generic type parameter that is an entity, or is an entity
        if (typeof(Entity).IsAssignableFrom(pi.DeclaringType)
            &&
            (pi.PropertyType.IsGenericType && pi.PropertyType.GenericTypeArguments.Any(t => typeof(Entity).IsAssignableFrom(t))
             || typeof(Entity).IsAssignableFrom(pi.PropertyType)
             || typeof(AttributeCollection).IsAssignableFrom(pi.PropertyType)
             )
           )
        {
            return new OmitSpecimen();
        }

        return new NoSpecimen();
    }
}
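
If you create early bound entities in a lot of test classes, one optional convenience (this is just standard AutoFixture API, nothing Dataverse specific) is to wrap the specimen builder in an ICustomization so each test only needs a single call:

// Optional convenience wrapper around the specimen builder above.
public class EarlyBoundCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        fixture.Customizations.Add(new SkipEntityProperties());
    }
}

// Usage:
// var fixture = new Fixture().Customize(new EarlyBoundCustomization());
// var contact = fixture.Create<Contact>();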

Here is an example screenshot of the result from the test above:

Notice how everything except the AccountId (since it’s read-only) has been automatically populated with a default value?  It’s a beautiful thing!

If you found this helpful, please share it!

Friday, September 17, 2021

Long Functions Are Always A Code Smell

This article is in response to fellow MVP Alex Shelga’s recent article, Long functions in dataverse plugins – is it still “code smell”?.  I’ll start with the fact that there is plenty of room for personal preference, and there is no magic equation that can be applied to code to ultimately define it as good or great or bad.  Alex shared his opinion, and here I’ll share mine.  I’ll tell you right now, they will differ (which shouldn’t be a surprise if you’ve read the title).  It is my hope that no one feels that I’m “attacking” Alex (especially Alex), but that everyone can see this as what it is intended to be: a healthy juxtaposition of ideas.

Alex’s Argument

Before I go into my reasons for why long functions are always a code smell, I’ll list the two reasons Alex sees plugins as different and summarize his arguments for why that matters:

  1. Plugins are inherently stateless
  2. Often developed to provide a piece of very specific business logic

This, he says, “seems to render object-oriented approach somewhat useless in the plugins (other than, maybe, for ‘structuring’)”.  He then dives into this further and seems to imply that OO code is slower and more complicated, and is primarily used to allow for reusability, so if it’s not making the code more reusable, there is no reason to use it.  His final point is that it doesn’t matter to the performance of the system or to unit testing if the code is longer, and as a personal preference, he finds a longer function more readable: “I’d often prefer longer code in such cases since I don’t have to jump back and forth when reading it / debugging it”.  (If this is you, memorize the Navigate Forward and Navigate Backward commands in your IDE (View.NavigateBackward Ctrl+- and View.NavigateForward Ctrl+Shift+- in Visual Studio, Alt+Left Arrow and Alt+Right Arrow in VS Code), then spend the next 10 minutes diving into functions to see what they are doing and backing out of them using the navigation shortcut keys.  It could change your life.  Scout’s honor.)

My Argument

None of the facts he presents are wrong: plugin logic is inherently stateless and doesn’t lend itself to loads of reusability.  I also can’t argue whether his personal preference for the readability of long functions is right or wrong.  But what I can do is explain why I find shorter functions more readable, as well as the other reasons I believe shorter functions are better for the health and maintainability of a plugin project.

Why (I find) shorter functions are more readable

If you were to pick up a 300-page book titled “Execute” that you’d never read before, with no cover art, introduction, chapters, table of contents, or synopsis on the back page, and were given 60 seconds to examine it and tell someone what it was about, you’d be pretty hard pressed to give an accurate description.  But if the book had a table of contents with these chapter names:

  1. Start at the Beginning
  2. Create a Vision
  3. Share the Vision
  4. Create the Company
  5. Invest in Others
  6. Invite Others to Invest
  7. Grow/Multiply

You could guess fairly confidently it’s a book about starting and growing a business.  If you were only interested in the details of how to get additional investors in a business, you might start at chapter 6.  If however the chapter names were as follows:

  1. Prewar
  2. Early Victories
  3. Atrocities Beyond Belief
  4. Final Battles
  5. Capture
  6. The Trial
  7. The Verdict
  8. Final Words

You could guess that the book is about a soldier or general who committed war crimes and was executed.  If you were only interested in learning whether the individual had any remorse for their acts, you might start reading at chapter 8.  So not only do these chapter titles allow you to get a very quick understanding of what the book is about, they also allow you to skip large sections of the book when looking for a very narrow topic.  The same is true for code and long functions.  If a function is longer than your screen is tall, the first time you look at it, you will have no idea what it does beyond the reach of your screen without scrolling and reading.  You’d have to read the entire function to determine what it does.  This means that if you’re looking for a bug, you’ll need to read and understand half (on average) of the lines in the function before you can find where the bug is.  But if the function is 15 lines long with 8 well-named function calls, you’d have a much better guess at what the entire function does and where the bug lies.  For example, given this Execute function:

public void Execute(ExtendedPluginContext context)
{
    var data = GetData(context);
    UpdateAttributes(context, data);
    CreateChildRecords(context, data);
    UpdateTarget(context, data);
}

Now these are probably some pretty poor function names, but you can immediately see that the plugin is getting data, updating some attributes, creating child records and then updating the target.  But just a small improvement in the naming would give even more details:

public void Execute2(ExtendedPluginContext context)
{
    var account = GetAccount(context);
    SetMaxCreditLimit(context, account);
    CreateAccountLogEntries(context, account);
    UpdateTargetStatus(context, account);
}

Now it’s easy to see that there is a call to get the account, which is then used to set the max credit limit, create some log entries, and finally update the status of the target.  If there is a bug with the status getting updated incorrectly, or the max credit limit not being set, or the log entries not having enough details, it is easy to see which function needs to be looked at first, and which functions can be ignored.  Small functions (when done well) are more efficient for understanding.

Another positive of smaller functions is the error log in the trace.  If my Execute function is 300 lines long and it has a null ref, I’ve got to look at 300 lines of code to guess where the null ref could have occurred.  But since the function name is included in the stack trace for plugins (even when the line number isn’t), if the 300 lines were split into 10 functions of 30 lines, then I’d know which function caused the error and would only have a tenth of the code to analyze for the null ref.  That’s huge!

My final note comes into play with nested “ifs”.  Many times I will walk into a project with 300-line Execute functions nested 10-12 levels deep with “if” statements.  This especially causes issues when trying to line up curly braces, or when an “else” statement appears and the matching “if” is not on the screen:

                if (bar)
                {
                    if (baz)
                    {
                        Go();
                    }
                }
                else
                {
                    Fight();
                }
            }
            else
            {
                // Wait, what is this else-ing?
                Win();
            }
        }
    }
}

Although there is nothing that says a longer function has to nest “ifs”, if your function is only 10 lines long, it limits the maximum possible number of nested “ifs”.
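
For what it’s worth, one way to flatten the fragment above (treating Go, Fight, and Win as stand-ins for real logic) is to pull the visible branch into its own well-named method and use a guard clause, so every “else” either disappears or sits on the same screen as its “if”.  This is only an illustration of the visible portion of the fragment:

// Illustrative refactor of the visible portion of the fragment above.
private static void ResolveEncounter(bool bar, bool baz)
{
    if (!bar)
    {
        Fight();
        return;
    }

    if (baz)
    {
        Go();
    }
}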

When Shorter Functions Help With Testing

Alex mentioned that testing frameworks like FakeXrmEasy (and I’ll throw my XrmUnitTest framework in here as well) don’t care about the length of an Execute function.  It’s a black box.  While this is true, as a test creator, the more complex the logic, the more helpful it is to test it in parts rather than as a whole.  For example, in my Execute2 function above, if there are 3 different branches of logic in GetAccount, 2 in SetMaxCreditLimit, 4 in CreateAccountLogEntries, and 1 in UpdateTargetStatus, that results in 3 x 2 x 4 x 1 = 24 different potential dependent paths to test.  Contrast this with testing the parts separately, which requires only 3 + 2 + 4 + 1 = 10 different tests, each with only the setup required for its specific function.  This is much more maintainable.  Personally, I believe this can be taken to the extreme as well; trying to test 100 functions to perfection is usually not the ideal time investment either, so I may have a couple of tests of the Execute function start to finish, and cherry-pick some of the more complicated functions to test, rather than trying to test everything.

In Conclusion

Take time to analyze other people’s opinions and determine whether you agree or disagree, to the point where you are prepared to argue why.  We are all learning and growing in our craft as developers, which requires us to continue to allow new ideas to challenge our existing conventions.  Share it, blog about it, and grow, remembering to always “Raise La Bar”.