Saturday, 10 June 2017

Locker Service, Lightning Components and JavaScript Libraries

Introduction

As I’ve previously blogged, the Summer 17 release of Salesforce allows you to turn the locker service on or off based on the API version of a component. This is clearly an awesome feature, but there is a gotcha which I came across this week while working with the Lightning Testing Service Pilot.

I have a JavaScript library containing functionality that needs to work as a singleton (so a single instance that all Lightning Components have access to). This library is a static resource that is loaded via the <ltng:require /> standard component.

One window to rule them all?

In JavaScript world there is a single Window object, representing the browser’s window, which all global objects, functions and variables are attached to. When my JavaScript library is loaded, an immediately invoked function expression executes and attaches my library object to the Window.
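
As a rough illustration of the pattern - the library name and its contents here are hypothetical:

// only create the singleton if a previous load hasn't already attached it
(function(window) {
    if (!window.myLibrary) {
        window.myLibrary = {
            debugEnabled: false,
            log: function(message) {
                if (this.debugEnabled) {
                    console.log(message);
                }
            }
        };
    }
})(window);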

In Lightning Components world, with the locker service enabled, the single Window object changes somewhat. Instead of a Window, your components see a SecureWindow object which is shared among all components in the same namespace. This SecureWindow is isolated from the real Window for security reasons. In practice, this means that if you mix locker and non-locker Lightning Components on the same page, there are two different window concepts which know nothing about each other.

Example code 

The example here is a Lightning application that attaches a String to the Window object when it initialises and includes two components that each attempt to access this variable from the window, one at API 40 and one at API 39. The app also attempts to access the variable, just to show that it is correctly attached.

App

<aura:application >
    <aura:handler name="init" value="{!this}" action="{!c.doInit}" />
    <button onclick="{!c.showFromWindow}">Show from in App</button> <br/>
    <c:NonLockerWindow /> <br/>
    <c:LockerWindow /> <br/>
</aura:application>

Controller

({
	doInit : function(component, event, helper) {
		window.testValue="From the Window";
	},
	showFromWindow : function(component, event, helper) {
		alert('From window = ' + window.testValue);
	}
})

Locker Component

<aura:component >
	<button onclick="{!c.showFromWindow}">Show from Locker Component</button>
</aura:component>

Controller

({
	showFromWindow : function(component, event, helper) {
		alert('From window = ' + window.testValue);
	}
})

Non Locker Component

<aura:component >
	<button onclick="{!c.showFromWindow}">Show from non-Locker Window</button>
</aura:component>

Controller

({
	showFromWindow : function(component, event, helper) {
		alert('From window = ' + window.testValue);
	}
})

Executing the example

Setting the API version of the app to 40 enables the locker service. Clicking the three buttons in turn shows the following alerts:

From App

Screen Shot 2017 06 10 at 06 31 24

As expected, the variable is defined.

Non-Locker Component

Screen Shot 2017 06 10 at 06 31 37

As the application is running with the locker service enabled, the variable is attached to a secure window. The non-locker service component cannot access this so the variable is undefined.

Locker Component


Screen Shot 2017 06 10 at 06 31 44

As this component is also executing with the locker service enabled, it has access to the secure window for its namespace. As the namespace between the app and this component is the same, the variable is available.

Changing the app to API version 39 allows the non-locker component to access the variable from the regular JavaScript window, while the locker component doesn’t have access as the variable is not defined on the secure window.

So what?

This has a couple of effects on my code if I mix API versions so that my app contains a combination of locker and non-locker components:

  •  I can’t rely on the library being made available by the containing component or app. Thus I have to ensure that every component loads the static resource. This is best practice anyway, so not that big a deal
  • I don’t have a singleton library any more. While this might not sound like a big deal, given that I can load the library into whatever window variant I have, it means that if I change something in that library from one of my components, the change only affects the version attached to the window variant my component currently has access to. For example, if I set a flag to indicate debug mode is enabled, only those components with access to that specific window variant will pick this up. I suspect I’ll solve this by having two headless components that manage the singletons, one at API 40 and one at API < 40, and sending an event to each of these to carry out the same action - there’s a sketch of this after the list.
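
A rough sketch of what the controller of one of those headless components might look like - the application event, library name and flag are all hypothetical:

({
    // handler for a hypothetical c:configChange application event; a headless component
    // at API 40 and another at API 39 would both register a handler like this
    handleConfigChange : function(component, event, helper) {
        // each handler sees its own window variant (secure or regular), so applying
        // the change in both keeps the two copies of the library in step
        if (window.myLibrary) {
            window.myLibrary.debugEnabled = event.getParam('debugEnabled');
        }
    }
})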

Sunday, 4 June 2017

Visualforce Page Metrics in Summer 17

Introduction

The Summer 17 release of Salesforce introduces the concept of Visualforce Page Metrics via the SOAP API. This new feature allows you to analyse how many page views your Visualforce pages received on a particular day. This strikes me as really useful functionality - I create a lot of Visualforce pages (although these days they are more likely to be Lightning Component based), to allow users to manage multiple objects on a single page for example. After I’ve gone to the effort of building the page I’m always curious as to whether anyone is actually using it!

SOAP Only

A slight downside to this feature is that the information is only available via the SOAP API. The release notes give an example of using the Salesforce Workbench, but ideally I’d like a Visualforce page to display this information without leaving my Salesforce org. Luckily, as I’ve observed in previous blog posts, the Ajax Toolkit provides a JavaScript wrapper around the SOAP API that can be accessed from Visualforce. 
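
Setting the toolkit up on a Visualforce page only takes a couple of lines - a minimal sketch, assuming version 39.0 of the toolkit:

<apex:includeScript value="/soap/ajax/39.0/connection.js" />
<script>
    // run the SOAP API calls as the current user
    sforce.connection.sessionId = '{!$Api.Session_ID}';
</script>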

Sample Page

In my example page I’m grouping the information by date and listing the pages that were accessed in order of popularity. There’s not much information in the page as yet because I’m executing this from a sandbox, so the page may get unwieldy in a production environment and need some pagination or filter criteria.

Screen Shot 2017 06 04 at 06 25 29

Show me the code

Once the Ajax Toolkit is set up, the following query is executed to retrieve all metrics:

var result = sforce.connection.query(
   "SELECT ApexPageId,DailyPageViewCount,Id,MetricsDate FROM VisualforceAccessMetrics " +
   "ORDER BY MetricsDate desc, DailyPageViewCount desc");

The results of the query can then be turned into an iterator and the records extracted - I’m storing these as an array in an object with a property per date:

var it = new sforce.QueryResultIterator(result);
        
while(it.hasNext()) {
    var record = it.next();
            
    var dEle=metricByDate[record.MetricsDate];
    if (!dEle) {
        dEle=[];
        metricByDate[record.MetricsDate]=dEle;
    }
            
    // add to the metrics organised by date
    dEle.push(record);
}
        

 This allows me to display the metrics by Visualforce page id, but that isn’t overly useful, so I query the Visualforce pages from the system and store them in an object with a property per id - analogous to an Apex map:

result = sforce.connection.query(
    "Select Id, Name from ApexPage order by Name desc");
        
it = new sforce.QueryResultIterator(result);
        
var pageNamesById={};
        
while(it.hasNext()) {
    var record = it.next();
    pageNamesById[record.Id]=record.Name;
}

 I can then iterate the date properties and output details of the Visualforce page metrics for those dates:

for (var dt in metricByDate) {
    if (metricByDate.hasOwnProperty(dt)) {
        var recs=metricByDate[dt];
        output+='<tr><th colspan="3" style="text-align:center; font-size:1.2em;">' + dt + '</th></tr>';
        output+='<tr><th>Page</th><th>Date</th><th>Views</th></tr>';
        for (var idx=0, len=recs.length; idx<len; idx++) {
            var rec=recs[idx];
            var name=pageNamesById[rec.ApexPageId];
            output+='<tr><td>' + name + '</td>';
            output+='<td>' + rec.MetricsDate + '</td>';
            output+='<td>' + rec.DailyPageViewCount + '</td></tr>';
        }
    }
}

You can see the full code at this gist.

Saturday, 6 May 2017

Selenium and SalesforceDX Scratch Orgs

Introduction

Like a lot of other Salesforce developers I use Selenium from time to time to automatically test my Visualforce pages and Lightning Components. Now that I’m on the SalesforceDX pilot, I need to be able to use Selenium with scratch orgs. This presents a slight challenge, in that Selenium needs to open the browser and log in to the scratch org, rather than the sfdx CLI doing it. Wade Wegner’s post on using scratch orgs with the Salesforce Workbench detailed how to set a scratch org password, so I started down this route before realising that there’s a simpler way, based on the sfdx force:org:open command. Executing this produces the output:

Access org <org_id> as user <username> with the following URL: https://energy-agility-9184-dev-ed.cs70.my.salesforce.com/secur/frontdoor.jsp?sid=<really long sid>

so I can use the same mechanism once I have the URL and sid for my scratch org which, as Wade’s post pointed out, I can get by executing sfdx force:org:describe. Even better, I can get this information in JSON format, which means I can easily process it in a Node script. Selenium also has a Node web driver so the whole thing comes together nicely.

In the rest of this post I’ll show how to create a Node script that gets the org details programmatically, opens a Chrome browser, opens a page that executes some JavaScript tests and figures out whether the tests succeeded or not. The instructions are for MacOS as that is my platform of choice.

Setting Up

In order to control the Chrome browser from Selenium you need to download the Chrome Webdriver and add it to your system PATH - I have a generic tools directory that is already on my path so I downloaded it there.

Next, clone the github repository by executing:

 git clone https://github.com/keirbowden/dxselenium.git

The Salesforce application is based on my Unit Testing Lightning Components with Jasmine talk from Dreamforce 16. You probably want to update the config/workspace-scratch-def.json file to change the company detail etc to your own information. 

Setting up the Scratch Org

Change to the cloned repo directory:

cd dxselenium

Then login to your dev hub:

sfdx force:auth:web:login --setdefaultdevhubusername --setalias my-hub-org

and create a scratch org - to make life easier I set the --setdefaultusername parameter so I don’t have to specify the details on future commands.

sfdx force:org:create --definitionfile config/workspace-scratch-def.json --setalias LCUT --setdefaultusername

Finally for this section, push the source:

sfdx force:source:push

Setting up Node

(Note that I’m assuming here that you have node installed).

Change to the node client directory:

cd node

Get the dependencies:

npm install

Executing the Test

Everything is now good to go, so execute the Node script that carries out the unit tests:

node ltug.js

You should see a chrome browser starting and the Node script producing the following output:

Getting org details
Logging in
Opening the unit test page
Running tests
Checking results
Status = Success

The script exits after 10 seconds to give you a chance to look at the page contents if you are so inclined. 

The Chrome browser output can be viewed on youtube:

Show me the Node

The Node script is shown below:

var child_process=require('child_process');

var webdriver = require('selenium-webdriver'),
    By = webdriver.By,
    until = webdriver.until;

var driver = new webdriver.Builder()
    .forBrowser('chrome')
    .build();

var exitStatus=1;

console.log('Getting org details');
var orgDetail=JSON.parse(child_process.execFileSync('sfdx', ['force:org:describe', '--json']));
var instance=orgDetail.instanceUrl;
var token=orgDetail.accessToken;
console.log('Logging in');
driver.get(instance + '/secur/frontdoor.jsp?sid=' + token);
driver.sleep(10000).then(_ => console.log('Opening the unit test page'));
driver.navigate().to(instance + '/c/JobsTestApp.app');
driver.sleep(2000).then(_ => console.log('Running tests'));
driver.findElement(By.id('slds-btn')).click();
driver.sleep(2000).then(_ => console.log('Checking results'));

driver.findElement(By.id("status")).getText().then(function(text) {
    console.log('Status = ' + text);
    if (text==='Success') {
        exitStatus=0;
    }
});
driver.sleep(10000);
driver.quit().then(_ => process.exit(exitStatus));

After the various dependencies are set up, the org details are retrieved via the sfdx force:org:describe command:

var orgDetail=JSON.parse(child_process.execFileSync('sfdx', ['force:org:describe', '--json']));

From the deserialised orgDetail object, the instance URL and access code are extracted:

var instance=orgDetail.instanceUrl;
var token=orgDetail.accessToken;

And then the testing can begin. Note that the Selenium web driver is promise based, but also provides a promise manager which handles everything when using the Selenium API. In the snippet below the driver.sleep won’t execute until the promise returned by the driver.get function has succeeded.

driver.get(instance + '/secur/frontdoor.jsp?sid=' + token);
driver.sleep(10000).then(_ => console.log('Opening the unit test page'));

However, when using non-Selenium functions, such as logging some information to the console, the promise manager isn’t involved so I need to manage this myself by supplying a function to be executed when the promise succeeds, via the then() method.

Note that I’ve added a number of sleeps as I’m testing this on my home internet which is pretty slow over the weekends.

The script then opens my test page and clicks the button to run the tests:

driver.navigate().to(instance + '/c/JobsTestApp.app');
driver.sleep(2000).then(_ => console.log('Running tests'));
driver.findElement(By.id('slds-btn')).click();

Finally, it locates the element with the id of status and checks that the inner text contains the word ‘Success’ - note that again I have to manage the promise result as I’m outside the Selenium API.

driver.findElement(By.id("status")).getText().then(function(text) {
    console.log('Status = ' + text);
    if (text==='Success') {
        exitStatus=0;
    }
});

Saturday, 29 April 2017

Locker Service in Summer 17

Introduction

The Summer 17 release of Salesforce sees the activation of the Lightning Components Locker Service critical update - something that I’d say has been anticipated and feared in equal measure since it was announced. If you’ve been hiding under a rock for the last couple of years, the Locker Service (among other things) adds a security layer to your Lightning Components JavaScript, isolating components by namespace to ensure that your Evil Co-worker can’t write components that go tinkering with the standard Salesforce components for nefarious purposes.

The Breaking Changes Problem

The problem with enforcing the Locker Service is that it breaks code that was written before the Locker Service was known about. In many cases this was work that a customer paid a third party, who has long since departed, to carry out. Breaking that functionality through a change to the platform can be contentious, with third parties expecting to be paid to fix problems and customers expecting them to be fixed for nothing as key functionality no longer works. Now there were warnings in the docs from the get-go, basically saying this works now but might not work in the future, and I have no sympathy for anyone that wrote code that flew in the face of this warning. However, there are other considerations - some third party libraries break, for example, and that really isn’t something that could be defended against back in the day. Changes to the platform that break existing code that was written with best endeavours just aren’t cool.

The Breaking Changes Solution

The Summer 17 release notes preview contain an entry that will be music to the ears of any customer or consultant in this position - the Locker Service will be enforced based on API version. Anything on Summer 17 or later (API 40) will be subject to the locker service, while anything earlier (API 39 or lower) will not. You can think of this a bit like the ‘without sharing’ keyword - apply that to an Apex class and it bypasses sharing settings, and apply API 39 to any Lightning Component and it will bypass the locker service. From the horse’s mouth (the release notes preview) :

When a component is set to at least API version 40.0, which is the version for Summer ’17, LockerService is enabled. LockerService is disabled for any component created before Summer ’17 because these components have an API version less than 40.0. To disable LockerService for a component, set its API version to 39.0 or lower.
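
For what it’s worth, if you deploy components via the Metadata API or SalesforceDX, the API version lives in the bundle’s -meta.xml file - a minimal sketch of pinning a component to API 39 (the description is a placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<AuraDefinitionBundle xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>39.0</apiVersion>
    <description>Component that needs to bypass the Locker Service</description>
</AuraDefinitionBundle>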

I think this solution is pretty cool - it allows existing code to continue working while enforcing appropriate security on new code - whoever at Salesforce managed to persuade the security team to go this route, kudos to you!

Note that this is from the preview release notes so the situation could change, although let’s hope it doesn’t!

Use These Powers for Good

This new functionality shouldn’t be taken as an invitation to allow your Lightning Components to blaze a trail of destruction on every page that is unfortunate enough to include them. It should only be used as a last resort going forward. If for no other reason than it ties your component to an ageing API version so you’ll miss out on all the cool stuff that comes in the future.

Saturday, 22 April 2017

Salesforce Health Check Custom Baseline

Introduction

The Salesforce Health Check has been around for a year or so now, debuting in the Spring 16 release of Salesforce (and bearing a striking resemblance to an AppExchange listing with the same name). The Salesforce Help topic gives chapter and verse on this so I’m not going to spend any time on the basic functionality, except to say that it’s a great tool for allowing you to see at a glance how your Salesforce org shapes up security-wise. There has been one caveat though: the baseline it is compared against is set by Salesforce, not you, which means that if your security standard differs from the one true path you’ll see warnings and errors. As anyone who has accepted a unit test failure for more than one build knows, as soon as people expect errors they stop counting how many there are. Thus you may start out accepting a single warning, but before you know it you have a number of potential security problems which are being ignored because “that page always shows errors”.

Custom Baselines

Spring 17 introduced the beta of custom baselines - this allows you to deviate from the Salesforce standard and supply your own baseline which reflects your security requirements. From now on if your Health Check page shows an error or exception, that means you have a real security issue and need to deal with it quickly.

While you could create a custom baseline from scratch, the easiest way is to export the standard baseline and amend it. Navigate to Setup -> Security Controls -> Health Check and click the gear icon, then ‘Export XML’ from the resulting context menu:

 

Screen Shot 2017 04 22 at 15 27 33

 

This downloads the baseline to a file named ‘baseline.xml’ (or baseline (1,2,3,etc).xml if you keep downloading it to the same place on a mac!), which you can then open in your favourite editor - I like Atom for XML files. Again, the Salesforce Help does a great job of explaining the format of the XML file so I’m not going to cover this. A couple of things to bear in mind:

  • You must change the Name and DeveloperName of the Baseline element, otherwise you’ll be trying to overwrite the standard, which you can’t do.
  • When you import the file, do it via the Lightning Experience. If you try this in Classic and get an error, you get no information that an error has occurred. According to the help “If your import fails, you receive a detailed message in Lightning Experience to help you resolve the problem”, which is pretty big talk when the actual message is: Screen Shot 2017 04 22 at 16 03 16

Changing the Baseline

One area where my dev org is considered substandard is the password expiration time. I have my passwords set up never to expire, as forcing users to change their passwords regularly often results in them choosing predictable passwords that are easier to break. The Salesforce health check standard generates a Medium Risk alert if the value is over 90 days and a High Risk alert if the value is over 180 days.

Screen Shot 2017 04 22 at 15 40 22

Here’s the section of the file that configures this:

Screen Shot 2017 04 22 at 15 41 05

If I change the standard value to the numeric equivalent of Never Expires, 2147483647.0, and the warning to one higher:

Screen Shot 2017 04 22 at 15 57 54

and import the updated XML file using the context menu shown above, I can then switch my Health Check to the custom baseline and my password expiration is now at a satisfactory level:

Screen Shot 2017 04 22 at 16 05 10

I am not a security consultant

Notwithstanding the fact that forcing users to change their passwords regularly is out of favour in some places, you should not take this post as my advising you about your password policies in any shape or form. If you base your security settings on things that you read in random blog posts then best of luck to you - I did it in a dev org to show the functionality as there’s nothing that I really care about in there.

I’d expect the majority of custom baselines to be making the security standard more restrictive, in regulated industries for example, but what you should set up is a baseline that aligns with your corporate security policies.

Here comes the wish list

Anyone familiar with my blogs or Medium stories knows that I usually have a wish list around Salesforce functionality, so if any product managers are reading this, here’s what I’d like to see:

  • A way to email out the health check, run against a custom baseline, on a schedule. Security and compliance departments can receive this first thing in the morning and spend the day focusing on other systems.
  • Notifications when the health check result changes - if my Evil Co-Worker blags admin rights and changes the configuration to allow previous passwords to be re-used, I want to know about it. (Ideally I’d receive an automated report at the end of every day detailing everything the Evil Co-Worker has done, but that might be asking too much).
  • A way to snapshot the health check output regularly, so that I can see if an org is trending towards a more or less baseline compliant security setup. 
  • Custom entries - for example, I can easily spin through the ApexClass sobjects and figure out how many aren’t using ‘with sharing’ (there’s a sketch of this after the list). Security isn’t just about configuration, it’s also about code!
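
As a rough illustration of that last point, the sort of check I have in mind - assuming local, unmanaged classes only and that a simple text match is good enough:

// count local classes whose body doesn't declare 'with sharing'
Integer missing = 0;
for (ApexClass cls : [SELECT Name, Body FROM ApexClass WHERE NamespacePrefix = null]) {
    if (cls.Body != null && !cls.Body.toLowerCase().contains('with sharing')) {
        missing++;
    }
}
System.debug(missing + ' classes do not declare with sharing');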

Saturday, 15 April 2017

Lightning Design System in Visualforce Part 3 - Built In SLDS

Overview

In the past, using the Salesforce Lightning Design System (LDS) in Visualforce (or Lightning Components for that matter) required downloading the latest version from the home page and uploading it as a static resource to each Salesforce org that you wanted to use it on. I dread to think how many copies of exactly the same zip file have been uploaded over the last 18 months or so, but I’d imagine a significant amount of storage is currently dedicated to just this purpose. Probably only beaten out by a million copies of jQuery and Bootstrap. In the Spring 17 release of Salesforce, this is no longer the case - a single Visualforce tag can now do the heavy lifting for you.

The SLDS Tag

Simply add <apex:slds /> to your page, nest your markup in a div styled with the slds-scope class, and you are good to go. For example, the following page:

<apex:page showHeader="false" sidebar="false" standardStylesheets="false"
           standardController="Account" applyHtmlTag="false">
    <html xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
        <body>
            <apex:slds />
            <div class="slds-scope">
                <div class="slds-page-header" role="banner">
                    <div class="slds-grid">
                        <div class="slds-col slds-has-flexi-truncate">
                            <div class="slds-media slds-no-space slds-grow">
                                <div class="slds-media__figure">
                                    <svg aria-hidden="true" class="slds-icon slds-icon-standard-account">
                                        <use xlink:href="{!URLFOR($Asset.SLDS, '/assets/icons/standard-sprite/svg/symbols.svg#account')}"></use>
                                    </svg>
                                </div>
                                <div class="slds-media__body">
                                    <p class="slds-text-title--caps slds-line-height--reset">Account</p>
                                    <h1 class="slds-page-header__title slds-m-right--small slds-align-middle slds-truncate"
                                        title="{!Account.Name}">{!Account.Name}</h1>
                                </div>
                            </div>
                        </div>
                    </div>
                    <ul class="slds-grid slds-page-header__detail-row">
                        <li class="slds-page-header__detail-block">
                            <p class="slds-text-title slds-truncate slds-m-bottom--xx-small" title="Description">Description</p>
                            <p class="slds-text-body--regular slds-truncate" title="{!Account.Description}">{!Account.Description}</p>
                        </li>
                        <li class="slds-page-header__detail-block">
                            <p class="slds-text-title slds-truncate slds-m-bottom--xx-small" title="Industry">Industry</p>
                            {!Account.Industry}
                        </li>
                        <li class="slds-page-header__detail-block">
                            <p class="slds-text-title slds-truncate slds-m-bottom--xx-small" title="Visualforce">Visualforce</p>
                            No static resources were used!
                        </li>
                    </ul>
                </div>
            </div>
        </body>
    </html>
</apex:page>

renders as:

Screen Shot 2017 04 15 at 12 29 30

which is pretty cool, and makes throwing a page together to test out some ideas in a new org a lot easier than it has been.

What about Images?

Without the LDS static resource, image references need to be handled a slightly different way, via the $Asset global. Use this wherever you’d use your static resource previously. E.g. in the example markup above, I use the $Asset global as follows:

<svg aria-hidden="true" class="slds-icon slds-icon-standard-account">
   <use xlink:href="{!URLFOR($Asset.SLDS, '/assets/icons/standard-sprite/svg/symbols.svg#account')}"></use>
</svg>

although continuing the pattern of making sure SVG is difficult to use, you have to add a custom namespace to the page:

<html xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">

and you can’t do that unless you turn off the standard Salesforce header, sidebar and stylesheets. If you see an SVG on a Salesforce page in the wild, take a moment to appreciate the hoops that the developer jumped through in order to get it there.

So no more static resources?

Well that depends. The SLDS tag always pulls in the latest version of the Lightning Design System, so much depends on whether you want that behaviour. It means that things may change underneath you, possibly in a breaking way. If it’s for your internal Salesforce org and you have people who will be able to make any changes required by the latest version, then emphatically yes. If you are building pages for a consulting customer who expects them to continue working in the future with zero effort, then maybe not. As always, there is no substitute for thinking about how the application will be used, both now and in the future.

Saturday, 11 March 2017

One Trigger to Rule Them All? It Depends.

Introduction

Anyone involved in Salesforce development will be familiar with triggers and the religious wars around how they should be architected. My view is that, like many things in life, there is no right answer - it all depends on the context. The choices pretty much boil down to one of the following two options:

One Trigger per Object

This approach mandates a single trigger file that handles all possible actions, declared along the lines of:

trigger AccountTrigger on Account (
  before insert, before update, before delete,
  after insert, after update, after delete, after undelete) {

  // trigger body
}

One Trigger per Object and Action

This approach takes the view that each trigger should handle a single action on an object:

trigger AccountTrigger_bu on Account (before update) {

  // trigger body
}
...
trigger AccountTrigger_au on Account (after update) {

  // trigger body
}

I’ve read many blog posts stating flat out that the first way is best practice. No nuances, it’s just the right way and should be used everywhere, always. Typically the reason for this is that it’s how the author does it, therefore everyone should. I strongly disagree with this view. Not that one trigger per object shouldn’t be used, but that it shouldn’t be used without applying some thought.

Note: one trigger per object and action is the maximum granularity - never have two triggers for the same action on the same object as then you have no control over the order of execution and this will inevitably bite you. Plus you’ve striped business logic across multiple locations and made life harder for everyone.

Consulting versus Product Development

The reason I have this view is that I work across consultancy and product development at BrightGen. Most of the work I do nowadays is related to our business accelerators, such as BrightMEDIA, but I still have Technical Architect responsibility across a number of consulting engagements, which are often implementations for companies that don’t have a lot of internal Salesforce expertise, and what they have isn’t the developer skill set.

One message I’m always repeating to our consultants is to have some empathy with our customers and think about those that come after us. Thus we use clicks not code and try to take the simplest approach that will work, so that we don’t leave the customer with a system that requires them to come back to us every time they need to change anything. Obviously we like to take our customers into service management after a consultancy engagement, but we want them to choose to do that based on the good job that we have done, rather than because we have locked them in by making things complex.

Sample Scenario

So here’s a hypothetical example - as part of a solution I need to take some action when a User is updated, but not for any other trigger events. At some point later a customer administrator wants to know if there is any automated processing that takes place after a user is inserted to triage a potential issue.

If I’ve gone with the one trigger per object and action combination, they can simply go to the setup page for the object in question and look at the triggers. The naming convention makes it clear that the only trigger in place is to handle the case when a user is updated, so they can stop this particular line of enquiry (assuming my Evil Co-Worker hasn’t chosen an inaccurate name just to cause trouble).

Screen Shot 2017 03 11 at 07 29 35

If I’ve gone with one trigger per object, the administrator is none the wiser. There is a single trigger, but nothing to indicate what it does. The administrator then has to look into the trigger code to figure out if there is any after insert processing. What they will then find is one of two things:

  •  A load of wavy if statements checking the type of action - before vs after, insert vs update etc - and then calling out to external code. Most developers try to make sure that an external method is called only once, so you often end up with a wall of if statements for the administrator to enjoy (there’s a sketch of this after the list).
  • Delegation to a trigger handler, leaving the admin to look at another source file to try to figure out what is happening.
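
Something along these lines - the handler class and its methods are hypothetical:

// illustrative only - the sort of context-checking block an admin has to wade through
trigger AccountTrigger on Account (
  before insert, before update, before delete,
  after insert, after update, after delete, after undelete) {

    if (Trigger.isBefore) {
        if (Trigger.isInsert) {
            AccountTriggerHandler.beforeInsert(Trigger.new);
        } else if (Trigger.isUpdate) {
            AccountTriggerHandler.beforeUpdate(Trigger.old, Trigger.new);
        }
        // ... and so on for the remaining before actions
    } else if (Trigger.isAfter) {
        if (Trigger.isUpdate) {
            AccountTriggerHandler.afterUpdate(Trigger.old, Trigger.new);
        }
        // ... and so on for the remaining after actions
    }
}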

Now I don’t know about you, but if administrators are having to look at my source code, and even worse trying to understand Apex code to figure out something as basic as this, I’d feel like I’d done a pretty poor job.

Enter the Salesforce Optimizer

The Spring 17 Release introduced the Salesforce Optimiser - an external tool that analyses your implementation and sends you the results - here’s what it has to say about my triggers:

Screen Shot 2017 03 11 at 07 54 33

And there’s the dogma writ large again - a big red warning alert saying I should have one trigger per object, just because. Don’t get me wrong, I think the Salesforce Optimizer is a great idea and has the potential to be a real time saver, and that the intention is to help, but it’s a really blunt instrument that presents opinion as fact.

The chances are at some point my customers will run this and ask me why I’ve gone against the recommended approach, even though in their case it is absolutely the appropriate approach. I find I have no problem explaining this to customers, but I do have to take the time to do that. Thanks for throwing me under the bus Salesforce!

In Conclusion

What you shouldn’t take away from the above is that one trigger per object is the wrong approach - in many situations it’s absolutely the right approach and it’s the one I use in some of my product development. In other situations it isn’t the right approach and I don’t use it. What you should take away is that it’s important to think about how to use triggers for every project you undertake - going in with a dogmatic view that there is one true way to do things and everything will be brute-forced into that may make you feel like a l33t developer but is unlikely to be helpful in the long term. It may also mark you out as a Rogue High Performer, and you really don’t want that.