Thursday, November 28, 2024

PowerFx Formatting If Statement Idiosyncrasies

Disclaimer: This entire blog is an opinion, and opinions are like butts, we all have one.  This is not doctrinal truth that must be observed or else you “are dead to me”.  It is just my opinion, which I offer with my reasoning in the hopes that it will convince others to make their low-code developer lives, and the lives of those that come after them, better. (If you do disagree, I would love to hear your counterarguments in the comments.)

Auto-Formatting If Statements Has Room For Improvement

This is how a simple if statement is auto-formatted within Canvas Apps:
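For a hypothetical condition and branches, the auto-formatter produces something like this:

```powerfx
If(
    IsBlank(TextInput1.Text),
    Notify("A value is required"),
    Set(gblValue, TextInput1.Text)
)
```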



This is better than a single line, but still has room for improvement.

  1. "If(" does not require its own line to distinguish itself from the surrounding code.  The indentation already serves that purpose, and vertical spacing in large PowerFx functions is at a premium (though that premium shouldn’t override readability, more on that later).
  2. The parts of the if are not clearly distinguished, as it is a single uninterrupted block of text.  This is an extremely simple "if" statement, but it still takes a lot of attention to visual detail (aka brain power/cognitive load) to separate the condition portion, the true-statement, and the false/else-statement(s).  That leads to bugs and increases the time required to understand what the code is, what it is doing, and what changes may need to be made.

By including the condition on the same line as the if, and most importantly SEPARATING ELSE/ELSE-IFS WITH AN UNINDENTED COMMA, each part of the if statement becomes clearly defined, and it takes up no more vertical space than before:
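Applied to the same hypothetical condition and branches, that style looks like this:

```powerfx
If(IsBlank(TextInput1.Text),
    Notify("A value is required")
,
    Set(gblValue, TextInput1.Text)
)
```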



This becomes even more important on longer if statements, or if statements with an else-if:
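For example, a hypothetical if with an else-if, where each unindented comma marks where one branch ends and the next condition or else begins:

```powerfx
If(gblScore > 90,
    Set(gblGrade, "A")
, gblScore > 80,
    Set(gblGrade, "B")
,
    Set(gblGrade, "C")
)
```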



Although vertical spacing is at a premium in PowerFx (in all programming languages, really), it should always be subservient to readability.  For longer, more complicated if statements, readability should win out, and the parts of the condition can be split among multiple lines, but double indented to clearly define where the true-statement starts:
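A hypothetical sketch of a longer condition split among multiple lines, double indented so the single-indented true-statement stands out:

```powerfx
If(IsBlank(TextInput1.Text)
        || Value(TextInput1.Text) < 0
        || Value(TextInput1.Text) > 100,
    Notify("Please enter a value between 0 and 100")
,
    Set(gblValue, Value(TextInput1.Text))
)
```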



Choosing when to split the if-condition among multiple lines versus keeping it on a single line is much more personal preference than anything else, and one that may fluctuate from time to time (or at least it does for me).

One final thing to note: eliminating an if statement usually results in cleaner code.  In cases where an if statement is used just to set a Boolean value, removing it is simple and should be done in almost all situations, just like this one:
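A hypothetical before and after of removing an if that only sets a Boolean:

```powerfx
// Before
If(Value(TextInput1.Text) > 100,
    Set(gblIsLarge, true)
,
    Set(gblIsLarge, false)
)

// After: the condition already is the Boolean
Set(gblIsLarge, Value(TextInput1.Text) > 100)
```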



Happy formatting!

Saturday, July 13, 2024

Autoincrementing and Deploying Dataverse Plugin Package

How to Automate Incrementing and Deploying Dataverse Plugin Packages

First things first, this isn’t an article on how to set up an ALM process.  It’s for local dev, when you want to quickly build and deploy changes with the minimal clicks possible.  With that out of the way, let’s get started:

Microsoft recently introduced the ability to create Dataverse Plugin Packages that allow for including dependent assemblies.  This can cause some annoyances when deploying changes to dev for testing: each time the plugin package NuGet file is built, it gets a new assembly version in its name, and uploading it to Dataverse via the Plugin Registration Tool requires selecting the package to upload.  That adds quite a few extra clicks, which can be removed with this semi-hacky workaround.

How It Works

  1. In Visual Studio, the process is started by triggering the building of the Plugin project in DevDeploy mode
    1. Via a property, the csproj skips generating the NuGet package
  2. An MSBuild Target parses the FileVersion property, incrementing the in-memory revision of the version by one, i.e. <FileVersion>1.5.12.24</FileVersion> is updated to <FileVersion>1.5.12.25</FileVersion>.  This does not actually update the csproj file itself, though; that is handled by the next step
  3. A Post-Build Event runs that:
    1. Calls an exe to update the FileVersion in the csproj file
    2. Deletes old nupkg files
    3. Runs dotnet pack to create the NuGet package, and since the csproj FileVersion has been updated, it will get the correct version
    4. Runs the PAC CLI to select the correct Auth for the Org
    5. Runs the PAC CLI to push the plugin


How to Implement


Prerequisites:

  1. Visual Studio contains a Plugin Project using the newer "Microsoft.NET.Sdk" csproj format that successfully builds a Plugin Package nupkg file.
  2. The PAC CLI has been installed, and PAC AUTH has been run to create an Auth Connection with a Name assigned.
  3. The Plugin Package has already been built and deployed to the Dataverse Instance, and the PackageId has been recorded
  4. Downloaded, unblocked, and extracted the VersionUpdater to a “VersionUpdater” folder in the directory of the solution (changes to where this is located will require changes to the Post-Build script in Step 3).  This tool accepts a command line argument of the csproj path so it can find the file version and update it.


Step 1: Create DevDeploy Build Configuration

  1. Right click on the solution file.
  2. Select Properties
  3. Click the Configuration Manager button
  4. In the “Active solution configuration” drop down select <New>
  5. Enter a name of DevDeploy
  6. Select to “Copy settings from” Release
  7. Ensure “Create new project configurations” is checked.
  8. Click OK to create the new DevDeploy solution build configuration


Step 2: Edit the Plugin’s csproj file

Add a Property group in the Plugin’s csproj (Prerequisite 1) with the following values:

  • DeploymentConfigurationName: The Solution configuration to use to build the plugin that gets deployed.
  • DeploymentOutDir: The project-relative path to the Deployment Configuration’s output directory.  Should always be bin\$(DeploymentConfigurationName)\
  • DeploymentPacAuthName: The name of the deployment Auth to use (Prerequisite 2)
  • GeneratePackageOnBuild: Set to false.  Prevents the default building of the NuGet Plugin Package until after the version has been incremented.  This should also decrease build times for non-deployment builds
  • PluginPackageId: The GUID of the plugin package (Prerequisite 3)

Add a Target to Update the value of FileVersion so the plugin assembly is built with the new version.  The end result should be the following items added to the csproj:

<!-- Plugin Package Deployment Settings -->
<PropertyGroup>
  <DeploymentConfigurationName>Release</DeploymentConfigurationName>
  <DeploymentOutDir>bin\$(DeploymentConfigurationName)\</DeploymentOutDir>
  <DeploymentPacAuthName>Acme Dev</DeploymentPacAuthName>
  <GeneratePackageOnBuild>false</GeneratePackageOnBuild>
  <PluginPackageId>2b66504a-f03e-ef11-8409-7c1e520b27e1</PluginPackageId>
</PropertyGroup>
 
<!-- Updates the File Version in memory so that the plugin dll is built with the correct version.  Apparently msBuild already has an in memory version of the cs proj, and updating the file as a pre build won't update the assembly version -->
<Target Name="IncrementFileVersion" BeforeTargets="PrepareForBuild" Condition="'$(Configuration)' == 'DevDeploy'">
  <PropertyGroup>
    <FileVersionRevisionNext>$([MSBuild]::Add($([System.String]::Copy($(FileVersion)).Split('.')[3]), 1))</FileVersionRevisionNext>
    <FileVersion>$([System.String]::Copy($(FileVersion)).Split('.')[0]).$([System.String]::Copy($(FileVersion)).Split('.')[1]).$([System.String]::Copy($(FileVersion)).Split('.')[2]).$(FileVersionRevisionNext)</FileVersion>
  </PropertyGroup>
  <Message Text="Setting Plugin Assembly FileVersion to: $(FileVersion) " Importance="high" />
</Target>


Step 3: Set the Post-Build Script

  1. Right click on the Plugin’s csproj file in Visual Studio
  2. Select Properties
  3. Copy and paste the following into the “Post-build event” script:
if $(ConfigurationName) == DevDeploy (
  echo Incrementing Version '$(SolutionDir)VersionUpdater\VersionUpdater.exe Increment --project $(ProjectPath)'
  "$(SolutionDir)VersionUpdater\VersionUpdater.exe" Increment --project "$(ProjectPath)"

  echo Deleting old nupkg file del "$(ProjectDir)$(DeploymentOutDir)*.nupkg" /q
  del "$(ProjectDir)$(DeploymentOutDir)*.nupkg" /q

  echo dotnet pack $(ProjectPath) --configuration $(DeploymentConfigurationName) --output "$(ProjectDir)$(DeploymentOutDir)"
  dotnet pack $(ProjectPath) --configuration $(DeploymentConfigurationName) --output "$(ProjectDir)$(DeploymentOutDir)"

  echo Switching To "$(DeploymentPacAuthName)" Auth Connection
  PAC auth select -n "$(DeploymentPacAuthName)"

  echo *** Pushing Plugin ***
  echo PAC plugin push -id $(PluginPackageId) -pf "$(ProjectDir)$(DeploymentOutDir)$(TargetName).$(FileVersion).nupkg"
  PAC plugin push -id $(PluginPackageId) -pf "$(ProjectDir)$(DeploymentOutDir)$(TargetName).$(FileVersion).nupkg"
)


Step 4: Deploy!

  1. Select the Visual Studio Build Configuration of DevDeploy.
  2. Build the Plugin Project
  3. Watch the Build Output for any errors or a successful deployment message:

Build started at 11:30 AM...
1>------ Build started: Project: Acme.Dataverse.Plugin, Configuration: DevDeploy Any CPU ------
1>Setting Plugin Assembly FileVersion to: 1.0.1.79
1>Acme.Dataverse.Plugin -> C:\_dev\Acme\Acme.Dataverse.Plugin\bin\DevDeploy\Acme.Dataverse.Plugin.dll
1>Incrementing Version 'C:\_dev\Acme\CodeGeneration\VersionUpdater.exe Increment --project C:\_dev\Acme\Acme.Dataverse.Plugin\Acme.Dataverse.Plugin.csproj'
1>Updating Version from 1.0.1.78 to 1.0.1.79.
1>Deleting old nupkg file del "C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\*.nupkg" /q
1>dotnet pack C:\_dev\Acme\Acme.Dataverse.Plugin\Acme.Dataverse.Plugin.csproj --configuration Release --output "C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\"
1>MSBuild version 17.9.8+610b4d3b5 for .NET
1>  Determining projects to restore...
1>  Restored C:\_dev\Acme\Acme.Dataverse.Plugin\Acme.Dataverse.Plugin.csproj (in 564 ms).
1>  1 of 2 projects are up-to-date for restore.
1>  Acme.Dataverse -> C:\_dev\Acme\Acme.Dataverse\bin\Release\Acme.Dataverse.dll
1>  Acme.Dataverse.Plugin -> C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\Acme.Dataverse.Plugin.dll
1>  Acme.Dataverse.Plugin -> C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\publish\
1>  The package Acme.Dataverse.Plugin.1.0.1.79 is missing a readme. Go to https://aka.ms/nuget/authoring-best-practices/readme to learn why package readmes are important.
1>  Successfully created package 'C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\Acme.Dataverse.Plugin.1.0.1.79.nupkg'.
1>Switching To "Acme Dev" Auth Connection
1>New default profile:
1>    * UNIVERSAL Acme Dev                    : daryl@Acme.com                   Public https://acme-dev.crm.dynamics.com/
1>
1>*** Pushing Plugin ***
1>PAC plugin push -id 2b66504a-f03e-ef11-8409-7c1e520b27e1 -pf "C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\Acme.Dataverse.Plugin.1.0.1.79.nupkg"
1>Connected as daryl@Acme.com
1>Connected to... Acme Dev
1>
1>Updating plug-in package C:\_dev\Acme\Acme.Dataverse.Plugin\bin\Release\Acme.Dataverse.Plugin.1.0.1.79.nupkg
1>
1>Plug-in package was updated successfully
1>Acme.Dataverse.Plugin -> C:\_dev\Acme\Acme.Dataverse.Plugin\bin\DevDeploy\publish\
1>Done building project "Acme.Dataverse.Plugin.csproj".
========== Build: 1 succeeded, 0 failed, 1 up-to-date, 0 skipped ==========
========== Build completed at 11:31 AM and took 22.547 seconds ==========


Enjoy not having to manually deploy!

Monday, March 20, 2023

Separating Plugin Logic: A Guide to Testing Dataverse Plugins with IOC

I’m not a pure TDD developer.  I frequently take my best guess at a Dataverse plugin, then apply TDD until everything works.  This can lead to situations where my “rough draft” plugin is complete, but when I go to write my first test, I realize that I have a lot to test, and that’s going to be very painful.  The solution is to restructure your plugin code so that each piece of logic can be tested independently of the others.  I ran into having to do this recently and decided that a guide to what I do could be helpful to others.  So, if you ever find yourself in this situation and need a little help, this is the guide for you!

Background

The business requirement in my example is to create a “Total Fees” record per year for contacts, which contained the sum of fees from a grandchild record, where the year was determined by the connecting child record.  This resulted in a data model like this:


The plugin would trigger a recalc of fees for a contact if:

  1. A grandchild was added
  2. A grandchild was removed
  3. A grandchild’s fees were updated
  4. A child was added
  5. A child was removed
  6. A child’s year was updated

And this is still a simplistic view, since there are plenty of situations where changes shouldn’t trigger a recalc (like the fees being updated from null to 0, or a fee getting added when there is no child id, etc.).  For now, let’s abstract all that /* logic */ away, which gives us these methods in the plugin, with the “OnX” methods being called automatically from Execute by the plugin base class depending on the context, and each “OnX” method calling the RecalcTotalsForContact method:

private void OnGrandchildChange(ExtendedPluginContext context) { /* logic */ }

private void OnGrandchildCreate(ExtendedPluginContext context) { /* logic */ }

private void OnChildChange(ExtendedPluginContext context) { /* logic */ }

private void OnChildCreate(ExtendedPluginContext context) { /* logic */ }

private void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year)
{
    context.Trace("Triggering Recalc for Contact {0}, and Year {1}.", contactId, year);

    var yearStart = new DateTime(year, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
    var nextYearStart = yearStart.AddYears(1);
    var qe = QueryExpressionFactory.Create<Acme_Grandchild>(v => new { v.Acme_Fees });
    qe.AddLink<Acme_Child>(Acme_Grandchild.Fields.Acme_ChildId, Acme_Child.Fields.Id)
        .WhereEqual(
            Acme_Child.Fields.Acme_ContactId, contactId,
            new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.GreaterEqual, yearStart),
            new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.LessThan, nextYearStart));

    var totalFees = context.SystemOrganizationService.GetAllEntities(qe).Sum(v => v.Acme_Fees.GetValueOrDefault());
    var upsert = new Acme_ContactTotal
    {
        Acme_ContactId = new EntityReference(Contact.EntityLogicalName, contactId),
        Acme_Name = year + " Net Fees",
        Acme_Total = new Money(totalFees),
        Acme_Year = year.ToString()
    };
    upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_ContactId, contactId);
    upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_Year, year.ToString());

    context.SystemOrganizationService.Upsert(upsert);
}

Separating The Logic

When testing, we want to be able to test the “OnX” methods separately from the actual calculation logic in RecalcTotalsForContact.  In order to do that, we need to be able to inject the calculation logic into the plugin, allowing it to run with a mock object that verifies RecalcTotalsForContact was called correctly when testing, and with the actual logic when running on the Dataverse server.

There are 100 different ways to inject the logic into the plugin, but one of the simplest is to encapsulate the RecalcTotalsForContact logic behind an interface and inject it into the IServiceProvider that is already part of the plugin infrastructure.  Using this approach, the first step is to encapsulate the logic in an IContactTotalCalculator interface (some purists will never put the interface and the implementation in the same file, but if you’re only ever going to have one implementation, IMHO keeping them in the same file makes finding the implementation much simpler):

public interface IContactTotalCalculator
{
    void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year);
}

public class ContactTotalCalculator : IContactTotalCalculator
{
    public void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year)
    {
        context.Trace("Triggering Recalc for Contact {0}, and Year {1}.", contactId, year);

        var yearStart = new DateTime(year, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc);
        var nextYearStart = yearStart.AddYears(1);
        var qe = QueryExpressionFactory.Create<Acme_Grandchild>(v => new { v.Acme_Fees });
        qe.AddLink<Acme_Child>(Acme_Grandchild.Fields.Acme_ChildId, Acme_Child.Fields.Id)
            .WhereEqual(
                Acme_Child.Fields.Acme_ContactId, contactId,
                new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.GreaterEqual, yearStart),
                new ConditionExpression(Acme_Child.Fields.Acme_Year, ConditionOperator.LessThan, nextYearStart));

        var totalFees = context.SystemOrganizationService.GetAllEntities(qe).Sum(v => v.Acme_Fees.GetValueOrDefault());
        var upsert = new Acme_ContactTotal
        {
            Acme_ContactId = new EntityReference(Contact.EntityLogicalName, contactId),
            Acme_Name = year + " Net Fees",
            Acme_Total = new Money(totalFees),
            Acme_Year = year.ToString()
        };
        upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_ContactId, contactId);
        upsert.KeyAttributes.Add(Acme_ContactTotal.Fields.Acme_Year, year.ToString());

        context.SystemOrganizationService.Upsert(upsert);
    }
}

Then update the plugin to get the IContactTotalCalculator from the ServiceProvider, defaulting to the ContactTotalCalculator implementation if no implementation exists (which won’t on the Dataverse server):

private void RecalcTotalsForContact(IExtendedPluginContext context, Guid contactId, int year)
{
    var calculator = context.ServiceProvider.Get<IContactTotalCalculator>() ?? new ContactTotalCalculator();
    calculator.RecalcTotalsForContact(context, contactId, year);
}

With this simple change, the ContactTotalCalculator is now completely separate from the plugin and can be tested separately with ease!  The plugin triggering logic can now also be tested independently of the actual recalculation logic, but there are a few more steps required.  Here is a test helper method for the grandchild logic that can be called multiple times with different pre-images and targets, along with the expected children that should be triggered for recalculation:

private static void TestRecalcTriggered(
    IOrganizationService service,
    ITestLogger logger,
    MessageType message,
    Acme_Grandchild preImage,
    Acme_Grandchild target,
    string failMessage,
    params Acme_Child[] triggeredChildren)
{
    // CREATE LOGIC CONTACT TOTAL CALCULATOR MOCK THAT ACTUALLY DOES NOTHING
    var mockCalculator = new Moq.Mock<IContactTotalCalculator>();
    var plugin = new SumContactFeesPlugin();
    var context = new PluginExecutionContextBuilder()
        .WithFirstRegisteredEvent(plugin, p => p.EntityLogicalName == Acme_Grandchild.EntityLogicalName
                                               && p.Message == message)
        .WithTarget(target);
    if (preImage != null)
    {
        context.WithPreImage(preImage);
    }

    var serviceProvider = new ServiceProviderBuilder(service, context.Build(), logger)
        .WithService(mockCalculator.Object).Build(); // INJECT MOCK INTO SERVICE PROVIDER

    //
    // Act
    //
    plugin.Execute(serviceProvider);

    //
    // Assert
    //
    foreach (var triggeredChild in triggeredChildren)
    {
        mockCalculator.Verify(m =>
                m.RecalcTotalsForContact(It.IsAny<IExtendedPluginContext>(), triggeredChild.Acme_ContactId.Id, triggeredChild.Acme_Year.Year),
            failMessage);
    }

    // VERIFY MOCK CALLED THE EXPECTED # OF TIMES
    try
    {
        mockCalculator.VerifyNoOtherCalls();
    }
    catch
    {
        Assert.Fail(failMessage);
    }
}

Please note that I’m using Moq for my mocking framework and XrmUnitTest for my ServiceProviderBuilder.  You can use any mocking framework/Dataverse testing framework that you’d like; they’ll all provide the same logic with similar effort.  The key concept is to inject the mock implementation into the IServiceProvider provided to the IPlugin Execute method, and then verify that it has been called the correct number of times with the correct arguments.

Thursday, January 5, 2023

How to Filter Dates in Canvas Apps Using Greater Than/Less Than Operators

Defining the Problem

Recently I was attempting to filter an on-premises SQL table by a DateTime field using a “greater than” operator and display the results in a Data Table control.  When I applied the “greater than” condition to my filter, it would return 0 results.  The crazy thing was I wasn’t seeing any errors.  So I turned on the Monitor tool and took a look at the response of the getRows request:
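The failing filter looked something like this (table and column names hypothetical):

```powerfx
Filter(
    '[dbo].[Orders]',
    OrderDate > DateAdd(Today(), -30, TimeUnit.Days)
)
```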

{
  "duration": 1130.2,
  "size": 494,
  "status": 400,
  "headers": {
    "Cache-Control": "no-cache,no-store",
    "Content-Length": 494,
    "Content-Type": "application/json",
    "Date": "Thu, 05 Jan 2023 13:36:12 GMT",
    "expires": -1,
    "pragma": "no-cache",
    "strict-transport-security": "max-age=31536000; includeSubDomains",
    "timing-allow-origin": "*",
    "x-content-type-options": "nosniff",
    "x-frame-options": "DENY",
    "x-ms-apihub-cached-response": true,
    "x-ms-apihub-obo": false,
    "x-ms-connection-gateway-object-id": "c29ec50d-0050-4470-ac93-339c4b208626",
    "x-ms-request-id": "e127bd54-0038-4c46-9a31-ce94547c226c",
    "x-ms-user-agent": "PowerApps/3.22122.15 (Web AuthoringTool; AppName=f3d6b68b-f463-43a2-bb2b-b1ea9bd1a03b)",
    "x-ms-client-request-id": "e127bd54-0038-4c46-9a31-ce94547c226c"
  },
  "body": {
    "status": 400,
    "message": "We cannot apply operator < to types DateTimeZone and DateTime.\r\n     inner exception: We cannot apply operator < to types DateTimeZone and DateTime.\r\nclientRequestId: e127bd54-0038-4c46-9a31-ce94547c226c",
    "error": {
      "message": "We cannot apply operator < to types DateTimeZone and DateTime.\r\n     inner exception: We cannot apply operator < to types DateTimeZone and DateTime."
    },
    "source": "sql-eus.azconn-eus-002.p.azurewebsites.net"
  },
  "responseType": "text"
}

Ah, Power Apps surfaces no error even though the request returned a 400 status, but the body contains the actual error: "We cannot apply operator < to types DateTimeZone and DateTime.".  Apparently my DateTime column in SQL does not play well with Power Apps’ DateTime.  After some googling I found some community posts as well:


The Solution

The last community post above suggests trying the DateTimeOffset column type in SQL, and after another round of googling I found a very similar issue described by Tim Leung.  Unfortunately no one documented how to do the conversion, so here I am, documenting it for you, dear reader, as well as future me!  (Please be warned, I’m still not sure how DateTimeOffset plays with other tools/systems, so test first!)

  1. Update the DateTime column in SQL Server:

     ALTER TABLE dbo.<YourTableName>
     ALTER COLUMN <YourDateColumn> datetimeoffset(0) NOT NULL;

     UPDATE dbo.<YourTableName>
     SET <YourDateColumn> = CONVERT(datetime, <YourDateColumn>) AT TIME ZONE <YourTimeZone>;

     /*
     I don't believe there is a Daylight Saving Time option for time zones, but I just happened to be in EST, not EDT, so my last line looked like this:

         SET <YourDateColumn> = CONVERT(datetime, <YourDateColumn>) AT TIME ZONE 'Eastern Standard Time';

     Use SELECT * FROM sys.time_zone_info to find your time zone.
     */

  2. Refresh the data source in the app: in Canvas Apps Studio, click the data source options menu and select Refresh.

  3. Reload the app.  I had problems with the Data Table control I was using not applying the timezone offset correctly, and reloading the app seemed to fix the issue.

  4. Voilà!


It’s not hard, but it definitely is a headache that I hope Microsoft will solve.



Friday, July 1, 2022

Enabling or Disabling All Plugin Steps In Dataverse

The Cause

Recently a bug (working by design?) with the PowerPlatform.BuildTools version 0.0.81 caused all my plugin steps to become disabled.  After looking at the Azure DevOps Pipeline output I found this lovely difference between versions .77 and .81:

0.0.77

Import-Solution: MySolution_managed.zip, HoldingSolution: True, OverwriteUnmanagedCustomizations: True, PublishWorkflows: True, SkipProductUpdateDependencies: False, AsyncOperation: True, MaxAsyncWaitTime: 01:00:00, ConvertToManaged: False, full path: D:\a\1\a\MySolution_managed.zip

0.0.81

Calling pac cli inputs: solution import --path D:\\a\\1\\a\\MySolution_managed.zip --async true --import-as-holding true --force-overwrite true --publish-changes true --skip-dependency-check false --convert-to-managed false --max-async-wait-time 60 --activate-plugins false' ]

When this solution imported, it deactivated all of the plugin steps in my solution (which had over 100).  Manually updating them would have been ugly.  Luckily there is a workaround…


The Fix

  1. If you haven’t already, install the XrmToolBox, and set it up to connect to your environment.
  2. Install SQL 4 CDS: click Tool Library, make sure your display tools checkboxes have “Not installed” checked, and install the tool.
  3. Open the SQL 4 CDS tool, connecting to your environment.
  4. Execute the following statement to find the id of the plugin assembly that you want to enable all plugin steps for:

     SELECT pluginassemblyid, name FROM pluginassembly ORDER BY name

  5. Find and copy the plugin assembly id you want to enable (I’ve left the values needed to disable plugins commented out, in case that is required in the future as well, dear reader), and paste it into the following query:

     UPDATE sdkmessageprocessingstep
     SET statecode = 0, statuscode = 1  -- Enable
     -- SET statecode = 1, statuscode = 2 -- Disable
     WHERE sdkmessageprocessingstepid IN (
         SELECT sdkmessageprocessingstepid
         FROM sdkmessageprocessingstep
         WHERE plugintypeid IN (
             SELECT plugintypeid
             FROM plugintype
             WHERE pluginassemblyid = '95858c14-e3c9-4ef9-b0ef-0a2c255ea6df'
         )
         AND statecode = 1
     )

  6. Execute the query, get a coffee/tea, and let it update all of your steps for you!



Wednesday, April 27, 2022

Using AutoFixture To Create Early Bound Entities

AutoFixture is an open source library that is used in testing to create objects without having to explicitly set all the values.  I recently attempted to use it in a unit test to create an instance of an early bound entity, and assumed it would be extremely trivial, but boy was I wrong.  But now at least, you have the “joy” of reading this blog post about it.




The Problem(s)

This is what attempting to use an AutoFixture straight out of the box to create an entity looks like:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{   
    var fixture = new Fixture();
    // Fails here:
    // AutoFixture.ObjectCreationExceptionWithPath: AutoFixture was unable to create an instance from System.Runtime.Serialization.ExtensionDataObject,
    // most likely because it has no public constructor, is an abstract or non-public type.
    var contact = fixture.Create<Contact>();
    Assert.IsNotNull(contact.FirstName);
}

The error basically says AutoFixture can’t create the ExtensionDataObject since it does not expose a public constructor.  OK, makes sense.  The simplest thing to do is to make a fluent Build call and skip the property, but this doesn’t work because other types like Money also have the ExtensionData property and will fail for it as well, and manually skipping the ExtensionData property on every object would make AutoFixture virtually worthless.  The solution is to create an ISpecimenBuilder that tells AutoFixture how to create an ExtensionData (in actuality, don’t create it; just set it to null).  That looks like this:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipExtensionData());
   
    // New error
    // AutoFixture.ObjectCreationExceptionWithPath: AutoFixture was unable to create an instance of type AutoFixture.Kernel.FiniteSequenceRequest
    // because the traversed object graph contains a circular reference. Information about the circular path follows below. This is the correct
    // behavior when a Fixture is equipped with a ThrowingRecursionBehavior, which is the default. This ensures that you are being made aware of
    // circular references in your code. Your first reaction should be to redesign your API in order to get rid of all circular references.
    // However, if this is not possible (most likely because parts or all of the API is delivered by a third party), you can replace this default
    // behavior with a different behavior: on the Fixture instance, remove the ThrowingRecursionBehavior from Fixture.Behaviors, and instead add
    // an instance of OmitOnRecursionBehavior:
    //
    //   fixture.Behaviors.OfType<ThrowingRecursionBehavior>().ToList()
    //       .ForEach(b => fixture.Behaviors.Remove(b));
    //   fixture.Behaviors.Add(new OmitOnRecursionBehavior());
    var contact = fixture.Create<Contact>();

    Assert.IsNotNull(contact.FirstName);
}


public class SkipExtensionData : ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi == null)
        {
            return new NoSpecimen();
        }

        if (typeof(ExtensionDataObject).IsAssignableFrom(pi.PropertyType))
        {
            return null;
        }

        return new NoSpecimen();
    }
}

But once again, a new error is generated, this time a circular reference error.  Extra points to the team at AutoFixture for putting the solution to the issue right in the exception message.  But after adding it, more issues still pop up.

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipExtensionData());
    fixture.Behaviors.OfType<ThrowingRecursionBehavior>().ToList()
        .ForEach(b => fixture.Behaviors.Remove(b));
    fixture.Behaviors.Add(new OmitOnRecursionBehavior());

    // Yet another error:
    // System.InvalidOperationException: Sequence contains no elements
    // Stack Trace:
    //   Enumerable.First[TSource](IEnumerable`1 source)
    //   Entity.SetRelatedEntities[TEntity](String relationshipSchemaName, Nullable`1 primaryEntityRole, IEnumerable`1 entities)
    //   Contact.set_ReferencedContact_Customer_Contacts(IEnumerable`1 value) line 6219
    var contact = fixture.Create<Contact>();

    Assert.IsNotNull(contact.FirstName);
}

This is a fun error where setting a related entity collection to an empty collection gives a “Sequence contains no elements” error (which I could possibly handle in the Early Bound Generator, I guess), but it calls out something that, in my opinion, shouldn’t be getting populated at all: child collections of entities.  Only the actual attribute properties of the entity, not the LINQ relationship properties, need to be populated, so we can remove the recursive behavior check and resolve this error by tweaking the ISpecimenBuilder to skip these types of properties, which brings us to the first solution that doesn’t throw an exception:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipEntityProperties());

    var contact = fixture.Create<Contact>();

    // Fails!  FirstName is Null
    Assert.IsNotNull(contact.FirstName);
}

public class SkipEntityProperties: ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi == null)
        {
            return new NoSpecimen();
        }

        if (typeof(ExtensionDataObject).IsAssignableFrom(pi.PropertyType))
        {
            return null;
        }

        if (pi.DeclaringType == typeof(Entity))
        {
            return null;
        }

        // Property is for an Entity Class, and the Property has a generic type parameter that is an entity, or is an entity
        if (typeof(Entity).IsAssignableFrom(pi.DeclaringType)
            &&
            (pi.PropertyType.IsGenericType && pi.PropertyType.GenericTypeArguments.Any(t => typeof(Entity).IsAssignableFrom(t))
             || typeof(Entity).IsAssignableFrom(pi.PropertyType)
             )
           )
        {
            return null;
        }

        return new NoSpecimen();
    }
}

It was at this point that I couldn’t understand what was going on.  Why aren’t these values getting populated?  Two hours of debugging later, I finally realized that AutoFixture was setting the AttributeCollection of the Entity to null, effectively removing all the other values that AutoFixture had just set.  Some more internet research later, I discovered that there is an OmitSpecimen value that leaves the property untouched!  Armed with this knowledge, the final solution presented itself!

The Solution

This final bit of code will correctly populate the attributes of the early bound entity:

[TestMethod]
public void EarlyBoundAutoFixture_Should_Generate()
{
    var fixture = new Fixture();
    fixture.Customizations.Add(new SkipEntityProperties());

    var contact = fixture.Create<Contact>();

    Assert.IsNotNull(contact.FirstName);
}

public class SkipEntityProperties: ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as PropertyInfo;
        if (pi == null)
        {
            return new NoSpecimen();
        }

        if (typeof(ExtensionDataObject).IsAssignableFrom(pi.PropertyType))
        {
            return new OmitSpecimen();
        }

        if (pi.DeclaringType == typeof(Entity))
        {
            return new OmitSpecimen();
        }

        // Property is for an Entity Class, and the Property has a generic type parameter that is an entity, or is an entity
        if (typeof(Entity).IsAssignableFrom(pi.DeclaringType)
            &&
            (pi.PropertyType.IsGenericType && pi.PropertyType.GenericTypeArguments.Any(t => typeof(Entity).IsAssignableFrom(t))
             || typeof(Entity).IsAssignableFrom(pi.PropertyType)
             || typeof(AttributeCollection).IsAssignableFrom(pi.PropertyType)
             )
           )
        {
            return new OmitSpecimen();
        }

        return new NoSpecimen();
    }
}

When you inspect an entity created this way in the debugger, notice how everything except the AccountId (since it’s read-only) has been automatically populated with a default value?  It’s a beautiful thing!

If you found this helpful, please share it!