Tuesday, June 23, 2020

Setting Sub-Grid FilterXml In The Unified Interface And Other Naughty Things

This post started as a Twitter poll where I asked if I should blog about an unsupported solution I developed for a rather unusual business requirement: adding option set values to a control that didn’t actually exist in the option set of the system.  Because five more people voted for me to blog the solution than not, the code from that Twitter poll is at the end of this blog post.  But the real reason I suspect most of you are here is to be able to set the fetchXml/filterXml of a sub-grid in the new Unified Interface of Dynamics CE / CDS, so let’s get started…

Most JS devs are capable enough to start snooping around the JS DOM of the grid control and find the setFilterXml function.  One would think calling this undocumented function would do what one desires, but nope, it does not.  There have even been attempts to re-write the function that may have worked at one time, but have never worked for me (https://medium.com/@meelamri23/dynamically-set-fetchxml-to-subgrid-on-dynamics-365-v9-uci-a4a531200e73, https://community.dynamics.com/crm/f/microsoft-dynamics-crm-forum/299697/dynamics-365-unified-interface-inject-fetchxml-into-subgrid).  There is also the supported method of writing a Retrieve Multiple plugin to edit the FetchXml on the server (https://community.dynamics.com/crm/f/microsoft-dynamics-crm-forum/216881/how-to-set-up-custom-fetchxml-for-subgrid-in-dynamics-crm, https://sank8sinha.wordpress.com/2020/01/07/adding-filtered-views-in-uci-in-dynamics-365-crm-finally-achieved/), but unfortunately there is no client context in a plugin, so this may not be possible in some situations, and it requires a lot of extra effort.

So what is the solution?  After hitting my head against the brick wall that is the setFilterXml function of the grid control, I decided to focus on figuring out how the fetch xml for the grid was getting determined in the first place.  When attempting to edit the metadata of an option set, as mentioned above, I had discovered a function to access the global page state: “Xrm.Utility.getGlobalContext()._clientApiExecutor._store.getState()”.  This function returns a state object that contains the entire page model (metadata, ribbon rules, business process flows, etc.).  I had used it to edit option set metadata to allow for dynamic values to be added to it (picture a payment screen where the customer’s previous payment information is in the option set drop down, which is way easier to select from than a lookup control; the resulting function, “resetOptions”, is in the code block at the bottom of this page), and I decided to see if I could find the metadata used to generate the REST call to the server for the option set.

I fired up the debugger window once more and dove into the call hierarchy used to generate the REST call to the server, and discovered that the query was being built from the same metadata cache on the page.  “metadata.views” was an object whose GUID-named properties contained view metadata.  A couple quick edits in the debugger console and a refresh of the grid later, and I was in luck!  Editing the fetch xml in the metadata of the page state directly updated the fetch xml used to query and populate the grid results!

I’ve since created a function to do the heavy lifting of finding the view id for the grid by the name of the grid control, and replacing any filters from the view with the filter xml provided as a parameter.  (Please note, this is unsupported.  It could break at any point, so use it at your own risk.  With that being said, everything I see points to that being unlikely for the foreseeable future.)  The function is located below as TypeScript, because TypeScript is awesome, and it’s not too hard to remove the typing if you’re just using plain old JS.  I’ve also documented the entirety of the GlobalState object returned from the “getState()” function as a TypeScript definition file, in the hopes that in the future I can use it to do more “naughty” unsupported customizations.  You can access it here; it is designed to go in your npm “node_modules/@types” folder.  (If someone wants to do the work of uploading it to GitHub and making it an npm package, be my guest!)

Call setSubgridFilterXml to set the sub-grid control fetch xml.  Since the metadata is shared at the page level, each grid will require a unique view to keep from interfering with other grids.
/**
 * Updates the Fetch XML of the Metadata which is used to generate the OData Query.
 * Since the metadata is shared at the page level, each grid will require a unique view to keep from interfering with other grids.
 * @param context Global Context
 * @param formContext Form Context
 * @param gridName Name of the Grid
 * @param filterXml Fetch Xml to set the Grid to
 */
export function setSubgridFilterXml(context: Xrm.GlobalContext, formContext: Xrm.FormContext, gridName: string, filterXml: string): void {
    console.info("Unsupported.setSubgridFilterXml(): Executing for grid: ", gridName, ", filterXml: ", filterXml);
    const gridControl = formContext.getControl(gridName) as Xrm.Controls.GridControl;
    if (!gridControl) {
        console.warn(`No subgrid control found with name ${gridName} in Unsupported.setSubgridFilterXml()`);
        return;
    }
    try {
        const viewId = gridControl.getViewSelector().getCurrentView().id
            .replace("{", "")
            .replace("}", "");
        const view = getState(context).metadata.views[viewId];
        if (!view) {
            console.warn(`No view was found in the metadata for grid ${gridName} and viewId ${viewId}.`);
            return;
        }
        const originalXml = view.fetchXML;
        const fetchXml = removeFilters(removeLinkedEntities(originalXml));
        const insertAtIndex = fetchXml.lastIndexOf("</entity>");
        // Remove any white spaces between XML tags to ensure that different filters are compared the same when checking to refresh
        view.fetchXML = (fetchXml.substring(0, insertAtIndex) + filterXml + fetchXml.substring(insertAtIndex)).replace(/>\s+</g, "><");

        if (view.fetchXML !== originalXml) {
            // Refresh to load the new Fetch
            gridControl.refresh();
        }
    } catch (err) {
        CommonLib.error(err);
        alert(`Error attempting unsupported method call setSubgridFilterXml for grid ${gridName}`);
    }
}

function getState(context: Xrm.GlobalContext) {
    return (context as XrmUnsupportedGlobalContext.Context)._clientApiExecutor._store.getState();
}

function removeFilters(fetchXml: string): string {
    return removeXmlNode(fetchXml, "filter");
}

function removeLinkedEntities(fetchXml: string): string {
    return removeXmlNode(fetchXml, "link-entity");
}

function removeXmlNode(xml: string, nodeName: string): string {
    // Remove empty tags i.e. <example /> or <example a="b" />
    xml = xml.replace(new RegExp(`<\\s*${nodeName}[^/>]*\\/>`, "gm"), "");

    const startTag = "<" + nodeName;
    const endTag = `</${nodeName}>`;
    let endIndex = xml.indexOf(endTag);

    // Use the first end tag to do an inner search
    while (endIndex >= 0) {
        endIndex += endTag.length;
        const startIndex = xml.substring(0, endIndex).lastIndexOf(startTag);
        xml = xml.substring(0, startIndex) + xml.substring(endIndex, xml.length);
        endIndex = xml.indexOf(endTag);
    }
    return xml;
}
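To see just the string surgery the function performs, separate from any Xrm APIs, here is a self-contained sketch (the function names here are mine, for illustration only):

```typescript
// stripNode removes every <nodeName> element (self-closing or paired) from the xml string.
function stripNode(xml: string, nodeName: string): string {
    // Remove empty/self-closing tags, e.g. <filter /> or <filter type="and" />
    xml = xml.replace(new RegExp(`<\\s*${nodeName}[^/>]*\\/>`, "gm"), "");
    const startTag = "<" + nodeName;
    const endTag = `</${nodeName}>`;
    let endIndex = xml.indexOf(endTag);
    while (endIndex >= 0) {
        endIndex += endTag.length;
        // The last start tag before this end tag belongs to the innermost matching element
        const startIndex = xml.substring(0, endIndex).lastIndexOf(startTag);
        xml = xml.substring(0, startIndex) + xml.substring(endIndex);
        endIndex = xml.indexOf(endTag);
    }
    return xml;
}

// Strips existing filters and link-entities, then inserts filterXml just before </entity>
function injectFilter(fetchXml: string, filterXml: string): string {
    const cleaned = stripNode(stripNode(fetchXml, "filter"), "link-entity");
    const at = cleaned.lastIndexOf("</entity>");
    return (cleaned.substring(0, at) + filterXml + cleaned.substring(at)).replace(/>\s+</g, "><");
}

const sampleFetch = `<fetch><entity name="contact"><attribute name="fullname" /><filter type="and"><condition attribute="statecode" operator="eq" value="0" /></filter></entity></fetch>`;
const result = injectFilter(sampleFetch, `<filter type="and"><condition attribute="lastname" operator="eq" value="Simpson" /></filter>`);
```

The old filter on statecode is stripped out and the new filter lands just inside the closing entity tag, which is exactly the edit setSubgridFilterXml makes to the cached view metadata.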

This code can be used to allow for dynamically defining the values in an option set control.  It will still fail on save if the integer value is not actually defined in the system.
/**
 * Crm only supports filtering option sets.  This supports resetting the options, although it will still fail on save unless other precautions are taken.
 * It will allow for setting the value.
 * @param context Global Context
 * @param formContext Form Context
 * @param attributeName The name of the attribute
 * @param options The options to reset the Option Sets to
 */
export function resetOptions(context: Xrm.GlobalContext, formContext: Xrm.FormContext, attributeName: string, options: Xrm.OptionSetValue[]): void {
    console.warn("Unsupported.resetOptions(): Executing for attribute: " + attributeName);
    const att = formContext.getAttribute(attributeName);
    if (!att) {
        console.warn(`No Attribute found for ${attributeName} in resetOptions.`);
        return;
    }

    const metadata = getState(context).metadata.attributes[att.getParent().getEntityName()][attributeName];
    const nonExistingValues = options.filter(v => {
        return metadata.OptionSet.findIndex(o => {
            return o.Value === v.value as any;
        }) < 0;
    }).map(osv => {
        return {
            Color: "#0000ff",
            DefaultStatus: undefined,
            InvariantName: undefined,
            Label: osv.text,
            ParentValues: undefined,
            State: undefined,
            TransitionData: null,
            Value: osv.value
        } as XrmUnsupportedGlobalContext.Metadata.OptionSetMetadata;
    });

    metadata.OptionSet = nonExistingValues.concat(metadata.OptionSet);
}
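The filter/map/concat at the heart of resetOptions can be reduced to plain objects (the simplified OptionMeta shape below is illustrative, not the real metadata type):

```typescript
interface OptionMeta { Value: number; Label: string; }

// Prepend only the options that do not already exist in the metadata's option list
function prependMissingOptions(existing: OptionMeta[], toAdd: { value: number; text: string }[]): OptionMeta[] {
    const missing = toAdd
        .filter(v => existing.findIndex(o => o.Value === v.value) < 0)
        .map(v => ({ Value: v.value, Label: v.text }));
    return missing.concat(existing);
}

const merged = prependMissingOptions(
    [{ Value: 1, Label: "Existing" }],
    [{ value: 1, text: "Existing" }, { value: 2, text: "New Payment Method" }]);
```

Options already present are skipped, so calling it repeatedly with the same values will not duplicate entries in the option set metadata.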
Happy Coding!

Monday, April 13, 2020

Handle All Your Plugin Exceptions In One Place, And Then Hide It!

One of the simplest to understand best practices of writing code (outside of maybe limiting the number of lines in a method) is: don’t create duplicate code.  Having duplicate code leads to tons of maintenance issues as bugs are fixed in some places and not others.  Following this principle, it’s a commonly recommended best practice to let your plugin base class logic handle catching and logging exceptions.

*Important Side Bar* Before going any farther: there are 101 ways to set up debugging in Visual Studio, and in certain situations this doesn’t apply, but for the most part I will assume that your Visual Studio debugger is set up with most default settings still in place, including the “Enable Just My Code” option.  Also, I don’t know everything about the VS debugger or C# debugger attributes, so if there is a better way, please let us all know.  Alright, let’s continue…

One of the problems this creates for those who have unit tests is that (more than likely) when debugging a unit test, if the plugin itself throws an exception (NullRef anyone?), the debugger will stop execution at the throw statement in the plugin base, since that’s the last place the exception is caught before surfacing it to the application (normally CRM; in this case, your test harness).  This is rather annoying, since you don’t see the actual line of code the error occurred at, because it was caught by the plugin base class when thrown.  As the picture below shows, this gives zero context as to where the exception actually occurred without digging into the stack trace within the exception (as opposed to the Call Stack in VS).


With the equally less than helpful call stack:


So how do we get VS to stop debugging where the error happens, rather than at the plugin base exception handler logic?  Enter “Debugger Attributes” to the Rescue!

There are quite a few debugger attribute classes in the System.Diagnostics namespace, but the one that makes the most sense here is the DebuggerStepThroughAttribute.  (Or, if you have an open source project that is distributed as code (code gists, source-only NuGet packages, submodules, etc.), or if you want to use the “Enable Just My Code” debugger option to control whether you can set a break point and debug, then the DebuggerNonUserCodeAttribute makes more sense.)  By adding this attribute to your base plugin class that contains the throw (or at the method level if you so desire)

[DebuggerStepThrough]
public abstract class DLaBGenericPluginBase<T> : IRegisteredEventsPlugin where T: IExtendedPluginContext

this will force VS to give the desired result:


Since the method handling the exception is “hidden” from the debugger, the call site of the method that throws the exception is now where the debugger stops.  Above we can see the actual method call that resulted in the null ref exception was GetByName(), and below, the exception call site is shown in the call stack, so the path to the exception is easily navigated to:


But wait, what happened to the base plugin call site?  Where did it go?  It’s represented by the [External Code] line.  It can’t be stepped into or debugged without changing your debug settings or removing the attributes, but it still shows up in the actual exception stack trace.  So spend less time diving into the exception stack trace, and instead let VS put you on the actual line where the exception is occurring.


Happy Coding!

Thursday, January 16, 2020

How To Determine If Your Canvas App Is In Studio Or Play Mode

Sometimes, you want to run different logic if you’re editing a canvas app vs. “playing” a canvas app.  There is no out of the box function to call, but there is a fairly small workaround:

// Determine Studio or Play mode
SaveData([true], "IsMobileApp"); // SaveData only works in the Power Apps mobile app, not the web player
LoadData(IsMobileApp, "IsMobileApp", true);
// TenantId is required for the web player, and SaveData only works in the Power Apps
// mobile app, not the web player, so if both are blank/empty then it's Studio mode
Set(AppMode, If(IsBlank(Param("tenantId")) && IsEmpty(IsMobileApp), "Studio", "Play"));


Since the TenantId is included in the URL for the play web page, and you can’t open the studio from the PowerApps Mobile App, the function should cover all bases for determining if you’re in Studio or Play mode.  This makes it possible to automatically change how the app behaves if it is opened in Studio mode.  I personally use it to show certain hidden sections of the app that are helpful when debugging/creating.
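Restated outside of Power Fx as a plain function (names here are illustrative), the decision logic is: Studio mode is inferred only when neither “player” signal is present.

```typescript
function getAppMode(tenantIdParam: string, savedMobileFlag: boolean[]): "Studio" | "Play" {
    const isWebPlayer = tenantIdParam !== "";          // tenantId only appears in the web player URL
    const isMobilePlayer = savedMobileFlag.length > 0; // SaveData/LoadData only work in the mobile app
    return !isWebPlayer && !isMobilePlayer ? "Studio" : "Play";
}
```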



Tuesday, December 3, 2019

How To Force Canvas Apps To Update An Edited Component

Frustrations With Updating a Component

Canvas Apps Components are an experimental feature that allow app creators to define a component that can be used in multiple places within an app, or in multiple apps, allowing for reuse and more DRY apps.  On a recent project I ran into an issue where, when attempting to update a component, it created a new component, and left the old version of my component in the app.  This means if I wanted the new changes from the component, I would have had to manually replace every instance of my component in the app.  Not fun!

What's Going On?

The Canvas App studio is attempting to be nice and keep from losing any changes that were made in a component inside your app.  It does this by "un-linking" the component from the source component whenever you update the component in the app that it is being used in, rather than the source app that it is being exported from.  So how does one "re-link" the app specific component to the source?

Canvas App Packager to the Rescue!!!

Using the Canvas App Packager, I was able to see what was going on in the app itself to link/unlink the component by unpacking the app and looking at the changes under the hood.  By default when a component is imported into an app, the Properties.json file contains the template for the component with the following header information:
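The header snippet itself did not survive the copy here; as an illustrative sketch only (the component name and the exact surrounding properties are hypothetical, but the two property names are the ones discussed below), the relevant part of the header looks roughly like:

```json
{
  "OriginalName": "MyComponent",
  "IsComponentLocked": true
}
```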

But when an edit is made to the component in the app, the OriginalName property is removed, and the IsComponentLocked variable is set to false.  To allow the component to be refreshed when the component is reloaded, these will need to be added back.  The simplest approach to determining the OriginalName is to re-add the component, making a duplicate, and then unpack the app again to see what the OriginalName should be.  After adding back the OriginalName, flip IsComponentLocked back to true, then pack and re-import the app.  Voila!  Now if the source component is imported again, it will actually update the component in the app rather than creating a duplicate!

Tuesday, October 8, 2019

How to Enable PCF Components for Older Canvas Apps

The Backstory

I wanted to try and embed a browser into one of my existing canvas apps but ran into a snag.  I followed the instructions in the docs on enabling PCF components (https://docs.microsoft.com/en-us/powerapps/developer/component-framework/component-framework-for-canvas-apps), but I could only import Canvas based components, not PCF components, because the "Import Components" "Code (experimental)" tab wasn't showing up even after I turned on the components preview option for the app:

I was able to eventually get the PCF components to show up, but that required me to turn on every single preview/experimental feature of the app.  I was concerned that maybe this was because my app was running on an old version of Canvas Apps, so I upgraded my app to the latest version, but PCF components were still not showing up (again, I never had any problem with Canvas components showing up).  I then proceeded to enable every single experimental feature in the app settings, and again the PCF components tab showed up, but when I imported the app into a new environment, the "Explicit Column Selection" feature broke the app.  Turning off this feature removed my PCF control from the app, so I was in a no-win situation.

To test my theory that the issue was because my app had some legacy bloat which was causing it to fail, I created a brand new app, and the PCF components showed up exactly as expected.  I then extracted my app using the CanvasApp Packager (https://github.com/daryllabar/CanvasAppPackager) and compared the differences in the extract json and found the fix!

Actual How To

To get the PCF controls experimental feature to show up in your older canvas app, follow these steps:

  1. Export your app from the make.powerapps.com site to your machine.
  2. Unpack the app using the CanvasApp Packager (https://github.com/daryllabar/CanvasAppPackager).
  3. Open the Extract\Apps\<App Name>\Properties.json file.
  4. Search for the AppPreviewFlagsKey array.
  5. Add "nativecdsexperimental" to the end of the array e.g. "AppPreviewFlagsKey":["delayloadscreens","componentauthoring", "nativecdsexperimental"]
  6. Pack the app using the CanvasApp Packager.
  7. Import back into your make.powerapps.com environment.
  8. Enjoy being able to select your PCF components in your older Canvas App!
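Steps 4 and 5 amount to a one-line JSON edit.  If you wanted to script that edit, it could be sketched like this (the simplified object shape and function name are mine, not part of the CanvasApp Packager):

```typescript
// Add the experimental flag to the parsed Properties.json object, without duplicating it
function enablePcfFlag(properties: { AppPreviewFlagsKey: string[] }): void {
    if (!properties.AppPreviewFlagsKey.includes("nativecdsexperimental")) {
        properties.AppPreviewFlagsKey.push("nativecdsexperimental");
    }
}

const props = { AppPreviewFlagsKey: ["delayloadscreens", "componentauthoring"] };
enablePcfFlag(props);
```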

Saturday, May 11, 2019

Negotiating the CDS/CRM/Xrm Plugin Trace Log Length Limitation

With the Dynamics 365 CRM 2015 U1 update, the Plugin Trace entity was added to the platform.  This provided an OOB implementation of the ITracingService to log to (although it only works in sandboxed plugins), which was a much needed addition to the platform.  Over time, my dependency on and use of the ITracingService within the DLaB.Xrm library has greatly increased.  By default, the plugin base auto-logs the name of the plugin that is executing, the start and stop time, each IOrganizationService call that is made, and, on exceptions, the entire plugin context to make debugging easier.  With all of this logging, it is becoming more and more common for plugins to exceed the 10,240 character limit.  This results in the beginning of the trace log getting truncated.

So what’s the solution?  You could completely abandon the built-in ITracingService and trace to Application Insights.  As much as I love that solution, for anything but large CRM/CDS implementations it may be overkill.  With the assumption that most of the time the information that is helpful for debugging will be at the beginning or the end of the trace, I’ve updated the default ITracingService in the DLaB.Xrm library so that, in cases where the trace is longer than 10,240 characters, it retraces the first 5,120 characters and then the last 5,120.

Let’s take a look at the implementation:
public class ExtendedTracingService : IMaxLengthTracingService {
    private ITracingService TraceService { get; }
    public int MaxTraceLength { get; }
    private StringBuilder TraceHistory { get; }

    public ExtendedTracingService(ITracingService service, int maxTraceLength = 10244) {
        TraceService = service;
        MaxTraceLength = maxTraceLength;
        TraceHistory = new StringBuilder();
    }

    public virtual void Trace(string format, params object[] args) {
        try {
            if (string.IsNullOrWhiteSpace(format) || TraceService == null) {
                return;
            }
            var trace = args.Length == 0
                ? format
                : string.Format(format, args);
            // Trace to the in-memory history first, then to the platform tracing service
            TraceHistory.AppendLine(trace);
            TraceService.Trace(trace);
        }
        catch (Exception ex) {
            AttemptToTraceTracingException(format, args, ex);
        }
    }

    /// <summary>
    /// If the max length of the trace has been exceeded, the most important parts of the trace are retraced.
    /// If the max length of the trace has not been exceeded, then nothing is done.
    /// </summary>
    public void RetraceMaxLength() {
        if (TraceHistory.Length <= MaxTraceLength) {
            return;
        }
        var trace = TraceHistory.ToString().Trim();
        if (trace.Length <= MaxTraceLength) {
            // White space accounted for the overage, nothing to retrace
            return;
        }
        // Assume the three Traces will each add New Lines, which are 2 characters each, so 6
        var maxLength = MaxTraceLength - 6;
        if (maxLength <= 0) {
            return;
        }
        var snip = Environment.NewLine + "..." + Environment.NewLine;
        var startLength = maxLength / 2 - snip.Length; // Subtract snip from start
        if (startLength <= 0) {
            // Really short MaxTraceLength, don't do anything
            return;
        }
        Trace(trace.Substring(0, startLength));
        Trace(snip);
        Trace(trace.Substring(trace.Length - (maxLength - (startLength + snip.Length))));
    }
}
The ExtendedTracingService wraps the default ITracingService from the platform, and then on calls to trace, it intercepts the call, and adds the trace to an in memory StringBuilder first, before actually tracing the call.  The final step in the plugin base is to then call RetraceMaxLength().  This will check the length of the StringBuilder, and if it’s over the max length, trace the first part of the traces, and then the last part, with an “…” in the middle to serve as a “Snip” statement.
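The first-half/snip/last-half idea behind RetraceMaxLength, stripped of the CRM specifics, can be sketched as a plain function (names here are illustrative, not the DLaB.Xrm API):

```typescript
function truncateTrace(trace: string, maxLength: number): string {
    if (trace.length <= maxLength) {
        return trace; // Nothing to do, the whole trace fits
    }
    const snip = "\n...\n";
    const startLength = Math.floor((maxLength - snip.length) / 2);
    const endLength = maxLength - snip.length - startLength;
    // Keep the beginning and the end, drop the middle
    return trace.substring(0, startLength) + snip + trace.substring(trace.length - endLength);
}
```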

If you’re already using the DLaB.Xrm.Source library, get the latest version from NuGet and enjoy ensuring you always see the beginning and end of each trace.  If you’re not using the DLaB.Xrm.Source library, why not?  It’s free, open source, and because it’s a source-only NuGet package, it doesn’t require ILMerge when being used from a plugin.  You can even use the Visual Solution Accelerator in the XrmToolBox to bring it into your existing CRM/CDS VS solution.

Here is an example log: (I’ve removed a great deal of text, just notice that the “…” serves as the signal that the trace was too long, and the middle portion has been truncated)

Starting Timer for Execute Request for dlab_LeadSearch with * Parameters *,     Param[IsMovers]: False,     Param[PhoneNumber]: 5553214321.
Timer Ended (  0.096 seconds)
Starting Timer: IsRejectedOrCreateLead
Partner Not First Party
Lead validations:
is 30 Days Logic: True
is Same Day Inquiry: True
is rejected: True - 30 Day Lead Logic
Timer Ended (  0.000 seconds): IsRejectedOrCreateLead
Starting Timer: is30DaysLogic
Timer Ended (  0.000 seconds): is30DaysLogic
Starting Timer for Create Request for dlab_leadqualifier with Id 00000000-0000-0000-0000-000000000000 and Attributes     [dlab_name]: Homer Simpson
     [dlab_azureid]: 317578
     [dlab_5daylogic]: 0
     [dlab_7daylogic]: 0
     [dlab_14daylogic]: 0
     [dlab_30daylogic]: 0
     [dlab_30daylogiccurrent]: 1
     [dlab_jornayalogic]: 0
     [dlab_dayslogic]: 0
     [dlab_existinglead]: True
Timer Ended (  0.033 seconds)
Start of isUpdatePath - 2019-05-11-07:32:41 407
Starting Timer for Update Request for lead with Id c97e6f52-6431-e911-8190-e0071b663e41 and Attributes     [trans
    Param[skipExport]: False
     Param[primaryPhoneDnc]: False
     Param[secondaryPhoneDnc]: False
* Output Parameters *
PostEntityImages: Empty
PreEntityImages: Empty
* Shared Variables *
     Param[Example.Xrm.LeadApi.Plugins|dlab_createLeadRequest|PostOperation|00000000-0000-0000-0000-000000000000]: 1
Has Parent Context: False
Stage: 40   

Tuesday, November 13, 2018

How To Fix Email Always Being Dirty

Recently, some customers complained that the email form was always showing as dirty.  Normally this happens when something is triggered post save that makes the form dirty again, and tracking it down can be difficult.  The simplest thing (assuming you have some JS executing on save) is to put a break point in the on save handler and see which attributes are dirty in the console window:
Xrm.Page.getAttribute().map(function (a) { return a.getIsDirty() + " - " + a.getName(); });
The other option is to look at the requests, either in the F12 developer tools or in Fiddler, and see what data is being sent to the server.  In this case, though, the only field being marked as dirty was the description, but I couldn’t visually see any changes occurring.  The description in this email contained some html that was inserted via a signature template, so I decided to see if there was anything in the html that was different.  I edited the JS to store the description in a class level variable that could then be compared with the current value in an on change event.  Sure enough, there were some differences in the html.  The spaces after semicolons and colons in tags were being removed, as well as quotes around numbers (i.e. size=”3” –> size=3).  I also noticed that blank characters were being encoded differently (“&#160;” vs “&nbsp;”).  After a good deal of trial and error, I finally came up with this supported solution (note: this type of solution can be applied to any situation where you want to ignore formatting differences; also, this is for an 8.2 CRM instance, so if this issue occurs in the new UI for CRM, you’ll need to access the context in the correct manner):
var serverEmailDescription = "";
var ignoreDescriptionUpdates = true;

/**
 * When dealing with html in the body, the form will format it differently than the server, resulting in some changes happening post save.
 * This then shows up as the field being dirty, but saving it again will update the format again and cause it to still look dirty.
 * Fix is to mark it as submitmode = never if it isn't really dirty.  This will prevent the form from looking like it needs to be saved.
 */
function handleDescriptionAlwaysBeingDirty() {
    var description = Xrm.Page.getAttribute("description");
    if (!description) {
        return;
    }
    // Cache the initial value, and only submit the description when it is truly dirty
    serverEmailDescription = removeServerDifferentFormatting(description.getValue());
    description.setSubmitMode("never");
    description.addOnChange(submitIfActuallyDirty);
    // The server updates modifiedon post save, so re-cache the saved description then
    Xrm.Page.getAttribute("modifiedon").addOnChange(function() {
        serverEmailDescription = removeServerDifferentFormatting(Xrm.Page.getAttribute("description").getValue());
        ignoreDescriptionUpdates = true;
        setTimeout(function() { ignoreDescriptionUpdates = false; }, 1);
    });
    setTimeout(function () { ignoreDescriptionUpdates = false; }, 1);
}

function removeServerDifferentFormatting(v) {
    // Some Html Tags get surrounded with \"
    // Some spaces are added for ";" and ":"
    // Blank spaces are encoded differently
    return v.replace(new RegExp(": ", "g"), ":")
        .replace(new RegExp("\\\"", "g"), "")
        .replace(new RegExp("; ", "g"), ";")
        .replace(new RegExp("&#160;", "g"), "&nbsp;");
}

function submitIfActuallyDirty() {
    var att = Xrm.Page.getAttribute("description");
    var description = removeServerDifferentFormatting(att.getValue());
    if (ignoreDescriptionUpdates) {
        serverEmailDescription = description;
    }
    if ((description === serverEmailDescription) === (att.getSubmitMode() === "dirty")) {
        att.setSubmitMode(description === serverEmailDescription ? "never" : "dirty");
    }
}
The main function is the handleDescriptionAlwaysBeingDirty function, which is called onLoad of the form.  It ensures that the description field is on the form, and then adds an onChange function to the modifiedOn and description attributes.  It also caches the initial value of the description, as well as setting the submit mode to “never”.
The modifiedOn attribute will get updated by the server post save, so the function can be used to store what the value of the description field was just after saving.  Due to the issue of the formatting being different, I first attempt to replace anything that would be different between how the server stores the format, and how the client framework updates it post save, before caching the description. 
Whenever the description is updated, either via a user, or post save, the formatting is normalized to compare it with the normalized cached version to see if it truly has changed.  If it has, the submit mode is changed from “never” to “dirty”.  The submit mode is updated for two reasons:
  1. There is no supported method to update the IsDirty flag.
  2. When an attribute isn’t set to be submitted, the form doesn’t check to see if it’s dirty when showing the “Unsaved Changes” text in the lower right hand corner of the screen.  This solves our infinite loop issue with the description always being updated!
Please note, this is not a perfect solution.  If someone edits the description of the e-mail and adds a space after a colon or a semicolon, the change won’t be registered.  Also, the removeServerDifferentFormatting function may not include all of the possible formatting changes.  But it works on my machine, resolves the current issue at hand, and hopefully it is helpful for you as well!