Salesforce automation provides a powerful set of tools for streamlining business processes and reducing manual work. Among the most significant are Flows, Triggers, Asynchronous (Async) processing, and Platform Events. Flows enable visual, declarative automation, letting admins build complex logic and guide users through processes without code. Triggers operate on the backend, providing fine-grained control over data manipulation in response to DML operations on objects. Async processing supports time-intensive tasks, such as data imports or callouts to external systems, by running them in the background to improve system efficiency. Finally, Platform Events facilitate a real-time, event-driven architecture by allowing Salesforce to publish and subscribe to events across different systems, making it possible to synchronize data or trigger actions based on specific events. Together, these tools enable comprehensive automation across a wide range of use cases, supporting scalable, responsive solutions in Salesforce.
Prerequisites
Before diving into each automation type and how to leverage them effectively, it’s helpful to cover some overarching best practices. First, consider creating ways to bypass specific automations when necessary. One of the most effective methods for this is using Custom Permissions, such as Exclude Triggers or Exclude Flows. These permissions can be assigned to specific users, often an Integration User, so that automations don’t execute during data loads or API calls. Second, it’s important to determine when to use a Flow versus a Trigger. Triggers are preferable when you need to manage complex data structures, such as storing and referencing data in a map format—something that Flow currently doesn’t support. On the other hand, if you’re handling straightforward operations, like analyzing data or updating fields, Flows are generally easier to maintain and modify. Additionally, choosing Flow over Trigger is often beneficial if development resources are limited, as Flows allow for powerful automation without the need for custom code.
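As a minimal sketch of what such a bypass check might look like in Apex, assuming a Custom Permission with the API name Exclude_Triggers (the class name here is illustrative):

```apex
// Minimal sketch of a bypass guard, assuming a Custom Permission with the
// API name Exclude_Triggers assigned to users such as an Integration User.
public with sharing class AutomationBypass {
    public static Boolean triggersExcluded() {
        // Returns true when the running user holds the Custom Permission
        // through a profile or permission set.
        return FeatureManagement.checkPermission('Exclude_Triggers');
    }
}
```

In a Flow, the equivalent check is the $Permission global variable (e.g., {!$Permission.Exclude_Flows}) used in the flow's entry conditions or a Decision element.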
Record-Triggered Flows
Salesforce Record-Triggered Flows are highly versatile and can automate both user-driven and background processes through several flow types: before-save, after-save, before-delete, and asynchronous. When designing a Flow, understanding these distinctions is crucial to optimizing performance and meeting specific business requirements. Before-save Flows are best when you need to update fields on the record being saved, as they run before the record is committed to the database. These flows are efficient because they avoid additional database writes, making them ideal for quick, straightforward updates like setting default values or populating lookup fields. After-save Flows, on the other hand, execute after the record is saved and are typically used when actions depend on the record's final state, such as creating related records or sending notifications. They allow more complex logic but come with additional performance considerations. Async Flows run in the background and suit time-consuming operations or those that can be deferred, such as making API calls to external systems or sending bulk emails. By leveraging the right type of Flow for each scenario, you can ensure efficient, maintainable automation that aligns with business needs.
Structuring your flows carefully is critical to keeping them maintainable. For each object, aim to have a single flow per type (before-save, after-save, before-delete, and async) to keep things organized. However, there are valid scenarios where breaking this rule is beneficial:
- No-bypass flows: Set up flows that ignore the bypass Custom Permissions so that actions that must always run, like setting Record Types or default values, are enforced.
- Email notifications: If you have multiple email alerts, isolating them in a dedicated after-save flow makes maintenance easier.
- Async processes: Set entry criteria so async flows trigger only when needed; keep these flows dedicated to specific tasks and clearly named by function.
- Large use cases: Consider a standalone flow when managing numerous nodes. If the flow involves shared data used across other flows, it's often better to centralize these operations in an Autolaunched flow, which can handle query results efficiently, returning output data or performing updates directly; this helps minimize SOQL queries and optimize resource usage.
Establishing clear naming conventions for Flows is essential to keeping them organized and maintainable. Flow names should generally follow a structured format: start with the object name, followed by the flow type (e.g., Before Save, After Save, Before Delete), and, if applicable, include a brief description of the specific use case. This approach ensures that Flow names are intuitive and consistent, making it easier for team members to understand each Flow's purpose at a glance. Here are some examples following this convention:
- Record Trigger: Object Name Before Save
- Record Trigger: Object Name Before Save – No Bypass
- Record Trigger: Object Name After Save
- Record Trigger: Object Name After Save (Email Alerts)
- Record Trigger: Object Name Before Delete
- Record Trigger: Object Name After Save (1 min Delay)
- Record Trigger: Object Name After Save Job (Function Name)
Salesforce Triggers
Salesforce Triggers are powerful tools, but without proper structure, they can cause significant issues, especially when running in bulk or being expanded upon over time. To mitigate these risks, implementing a Trigger Framework is essential, along with clear guidelines for the team. A Trigger Framework offers a structured approach to managing triggers and their handlers, defining what should execute and when. It also includes mechanisms to prevent recursion, create reusable functions, and enable improved error logging.
The core of a Trigger Framework is the pipeline, which controls the order and classification of handlers for each trigger action on an object. Through this pipeline, you can organize handlers based on related object updates or groups of data retrieved by queries. Structuring handlers in this way helps reduce the number of DML operations and queries, avoiding platform governor limits. Additionally, the framework can specify when a handler should execute, based on conditions like bypass permissions or data on the firing record, thus reducing unnecessary runs and conserving resources. To maintain readability and consistency, handlers should follow a standardized naming convention. This typically includes the object name (in full or abbreviated form), a brief descriptor of the handler's function, and the word "Handler" at the end (for example, AccountRollupCalculationHandler), all within an 80-character limit.
Asynchronous and Platform Events
Next, we have async action usage within trigger automation. There are many variables to consider when determining when and how to use async actions properly to avoid platform limits.
- Nested Async Tasks: If the trigger that initiates an async task may itself be running inside another async transaction, consider using Platform Events. Platform Events run in a new transaction outside the current context and can be published from synchronous or asynchronous code, which avoids the platform limit that prevents initiating one async task within another (see the first sketch after this list).
- Trigger vs. Scheduled Job: Determine whether the task must run on record value changes or can be handled by a scheduled job using a queue system. In a queue system, the trigger creates a record that joins a queue of records needing specific actions; a scheduled job then runs periodically (e.g., every 1-5 minutes) to process the queue, updating records and removing them from it, and reschedules itself to avoid duplicate runs (see the second sketch after this list). The main issue to watch for is row lock conflicts, which occur if a user updates the same record at the same time as the job.
- Large Transactions: If the async task will not have conflicts but involves a large transaction, using future methods, batch Apex, or Queueable Apex from a trigger is a viable solution.
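As a sketch of the Platform Event approach from the first point, assume a hypothetical platform event Async_Task__e with a single Record_Id__c text field; publishing it hands the work to a subscriber that runs in its own transaction:

```apex
// Publishing side: safe to call from sync or async code, because the
// subscriber runs in a new transaction outside the current context.
// Async_Task__e and Record_Id__c are hypothetical names.
public with sharing class AsyncTaskPublisher {
    public static void publish(List<Id> recordIds) {
        List<Async_Task__e> events = new List<Async_Task__e>();
        for (Id recordId : recordIds) {
            events.add(new Async_Task__e(Record_Id__c = recordId));
        }
        // Publish failures surface on the returned SaveResults,
        // not as thrown exceptions.
        List<Database.SaveResult> results = EventBus.publish(events);
    }
}
```

The subscriber is simply an after insert trigger on Async_Task__e; it runs in its own transaction as the Automated Process user, so it is free to enqueue further async work.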
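And here is a sketch of the queue pattern from the second point, assuming a hypothetical Task_Queue__c object populated by triggers; the job drains the queue, then schedules a single future run so two copies never overlap:

```apex
// Sketch of a self-rescheduling queue processor. Task_Queue__c is a
// hypothetical queue object that triggers insert records into.
public with sharing class TaskQueueProcessor implements Schedulable {
    public void execute(SchedulableContext context) {
        List<Task_Queue__c> queued = [SELECT Id FROM Task_Queue__c LIMIT 200];
        // ... process the queued records here, then clear them ...
        delete queued;

        // Remove this completed run, then schedule the next one a few
        // minutes out, so duplicate jobs never drain the queue at once.
        System.abortJob(context.getTriggerId());
        Datetime nextRun = Datetime.now().addMinutes(5);
        String cron = nextRun.format('s m H d M \'?\' yyyy');
        System.schedule('TaskQueueProcessor', cron, new TaskQueueProcessor());
    }
}
```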
Platform Limitation Awareness
When designing Salesforce automation, it's crucial to be aware of platform-imposed limits to avoid performance issues and errors. Both Flows and Triggers are subject to governor limits, including a maximum of 100 SOQL queries and 150 DML operations per synchronous transaction. However, Flows have unique limitations compared to Apex, particularly around data handling, recursion control, and permissions enforcement. For example, Flows cannot create or manipulate data structures like maps, which are useful in Apex Triggers for managing complex data relationships; this can make certain use cases impractical to implement in Flow alone. Flows also lack native recursion prevention, so care must be taken to avoid unintentional loops that could hit limits, and they enforce user permissions, which may require additional design considerations when bypassing certain actions for specific user roles.

For asynchronous (Async) versus synchronous (Sync) operations, limits vary significantly. Async transactions allow up to 200 SOQL queries, double the heap size at 12 MB, and a CPU limit of 60,000 milliseconds, six times that of Sync operations. This increased capacity often makes Async more suitable for tasks with high data volumes or complex processing needs, though Sync may still be preferable for real-time requirements. Another advantage of Async is Batch Apex's finish method, which runs once all batches complete and can clean up data and/or kick off another batch class, a pattern known as daisy chaining.
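For instance, here is a minimal sketch of daisy chaining with Batch Apex; the class names, queries, and second stage are all illustrative:

```apex
// Sketch of daisy chaining: finish() runs once after every chunk of work
// completes, making it the place to clean up or start the next batch class.
public with sharing class ArchiveCleanupBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext context) {
        // Illustrative query; substitute your own selection criteria.
        return Database.getQueryLocator(
            'SELECT Id FROM Task WHERE IsClosed = true AND ActivityDate < LAST_N_DAYS:365'
        );
    }

    public void execute(Database.BatchableContext context, List<SObject> scope) {
        // Async limits apply here: 200 SOQL queries, 12 MB heap, 60,000 ms CPU.
        delete scope;
    }

    public void finish(Database.BatchableContext context) {
        // Daisy chain: kick off the next stage once all batches complete.
        Database.executeBatch(new NextStage());
    }

    // Illustrative second stage, defined inline to keep the sketch self-contained.
    public class NextStage implements Database.Batchable<SObject> {
        public Database.QueryLocator start(Database.BatchableContext context) {
            return Database.getQueryLocator('SELECT Id FROM Case WHERE IsClosed = true');
        }
        public void execute(Database.BatchableContext context, List<SObject> scope) {
            // ... second-stage work goes here ...
        }
        public void finish(Database.BatchableContext context) {}
    }
}
```

Calling Database.executeBatch(new ArchiveCleanupBatch()); starts the chain.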
Monitoring
Monitoring is an essential aspect of Salesforce Automation Architecture, allowing administrators to track system events, especially errors. In the current landscape, errors can often be obscured or difficult to replicate without a debug log, which can be challenging to manage. There are two primary approaches to address this:
- Salesforce Shield: This tool provides robust monitoring capabilities to track and report errors effectively. However, it comes with additional licensing costs.
- Custom Monitoring Solution: An alternative is to build an internal monitoring tool. This could involve creating a custom object in your org to log error records via a service that fires when an error is caught in a try-catch block. These DML operations only execute during error states, so careful placement of the error-catching mechanisms is crucial. While not a comprehensive solution, it can effectively cover common scenarios (a sketch follows below).
When choosing between these options, consider your environment: if you have managed packages or significant custom code, Salesforce Shield might be the more comprehensive choice. On the other hand, a custom-built tool is more suited to fully custom orgs where you control all the code.
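Here is a minimal sketch of such a service, assuming a hypothetical Error_Log__c custom object with a few illustrative text fields:

```apex
// Sketch of a custom error-logging service. Error_Log__c and its fields
// are hypothetical; create them to match what you want to capture.
public with sharing class ErrorLogger {
    public static void log(Exception caught, String source) {
        // This DML only runs in the error state, inside a catch block.
        insert new Error_Log__c(
            Source__c = source,
            Type__c = caught.getTypeName(),
            Message__c = caught.getMessage(),
            Stack_Trace__c = caught.getStackTraceString()
        );
    }
}
```

A caller wraps risky logic in try-catch and invokes ErrorLogger.log(e, 'TestObjectHandler.afterUpdate') in the catch block. Keep in mind that if the exception is rethrown and the transaction rolls back, a plain DML log record rolls back with it; publishing the log as a platform event configured to publish immediately is one way around this.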
Conclusion
In summary, Salesforce's suite of automation tools—Flows, Triggers, async processing, and Platform Events—provides a powerful framework for creating efficient, responsive solutions that reduce manual work and streamline business processes. By following best practices for each tool, carefully structuring automations, and implementing a well-organized Trigger Framework, you can build scalable and maintainable automation that aligns with business needs. Thoughtful design, clear naming conventions, and performance considerations are essential for long-term success, ensuring your Salesforce automation architecture remains efficient, flexible, and resilient as your organization grows. With these tools and practices in place, Salesforce can become a powerful engine for operational excellence and innovation in your organization.
Now that you have read the high-level discussion of Automation Architecture usage, let's talk about implementing it in your organization in a few simple steps.
Implementation
Before implementing a Trigger Framework, establish clear rules and strategies for your team to follow. Skipping this step risks inconsistent implementations, conflicts with the framework, or issues like governor limit violations. Key elements to include in your rules documentation are:
- Naming Conventions: Standardize names for Flows, Classes, Triggers, methods, and variables.
- Flow Groupings: Define what should be included in each Record-Triggered Flow.
- Trigger Handler Groupings: Specify how handlers should be organized for each object.
- Async and Platform Event Usage: Outline when to use each for optimal performance.
These foundational guidelines ensure consistency and scalability. To enforce these rules, leverage tools like Semgrep within your version control system, helping maintain compliance and prevent deviations.
Trigger Framework & Error Monitoring
Choosing the right Trigger Framework is crucial for effective automation. I’ve developed a simple yet powerful Framework that allows admins to maintain a pipeline of handlers, control their execution order, and break them into smaller, more manageable components for easier maintenance. The Framework enables you to toggle functionality on or off without constantly updating code and includes an ErrorLogger for tracking issues. It comes with Record Trigger Templates to kickstart creating Record-Triggered Flows, along with prebuilt Custom Permissions, Permission Sets, and a Permission Set Group to manage bypassing Flows and Triggers. A baseline Permission Set for the Framework ensures all users can access it without extra effort. The Framework also features a DoubleFire-Safe mechanism, providing control over when processes should or shouldn’t fire multiple times within a Salesforce Trigger’s context. Additionally, it allows precise control over handler execution using Custom Permissions or criteria of your choice.
Example Implementation
Before diving into implementation snippets, let’s review the key features this framework supports:
- Execution Control: Define precise conditions for when the framework should execute.
- DoubleFire-Safe Technology: Prevent unintended multiple executions within a single trigger context.
- Comprehensive Trigger Contexts: Supports all key contexts, including Before/After, Insert, Update, Delete, and Undelete.
- Pipeline DML Operations: Perform DML actions in bulk at the end of the pipeline, optimizing performance and reducing governor limit usage.
Below is an example code snippet for the Trigger implemented on an object in Salesforce:
```apex
trigger TestObjectTrigger on Test_Object__c(
    before insert, after insert,
    before update, after update,
    before delete, after delete,
    after undelete
) {
    new TriggerPipeline(Schema.Test_Object__c.sObjectType);
}
```
Next is an example code snippet for a Handler added to a pipeline on an object:
```apex
public with sharing class TestObjectHandler extends ATriggerHandler {
    // Optional: omit this override if the handler should always fire.
    public override Boolean shouldExecute() {
        return !FeatureManagement.checkPermission('Exclude_Trigger');
    }

    /**
     * Override this method and return the Type of the implementing trigger
     * class, e.g. return Type.forName('NameOfImplementingTriggerClass')
     * or return NameOfImplementingTriggerClass.class.
     *
     * Required: do not delete this method.
     */
    public override Type getType() {
        return TestObjectHandler.class;
    }

    /**
     * Override this method and return true in any trigger that needs special
     * rules for determining whether a record has already been triggered upon.
     * Only applies to update operations; Salesforce guarantees that triggers
     * fire only once for insert and delete operations.
     *
     * @return a Boolean indicating if a trigger is using the double-fire-safe
     * trigger framework
     *
     * Optional: override if the handler should fire only once.
     */
    public override Boolean isDoubleFireSafe() {
        return false;
    }

    // Add or remove the methods below based on what your use case needs.

    /**
     * beforeInsert method to fire in the before insert context of a trigger.
     *
     * @param triggerData TriggerData the trigger data found in the Trigger class
     */
    public override void beforeInsert(TriggerData triggerData) {
        // Code goes here
    }

    /**
     * afterInsert method to fire in the after insert context of a trigger.
     *
     * @param triggerData TriggerData the trigger data found in the Trigger class
     */
    public override void afterInsert(TriggerData triggerData) {
        // Code goes here
    }

    /**
     * beforeUpdate method to fire in the before update context of a trigger.
     *
     * @param triggerData TriggerData the trigger data found in the Trigger class
     */
    public override void beforeUpdate(TriggerData triggerData) {
        // Code goes here
    }

    /**
     * afterUpdate method to fire in the after update context of a trigger.
     *
     * @param triggerData TriggerData the trigger data found in the Trigger class
     */
    public override void afterUpdate(TriggerData triggerData) {
        // Code goes here
    }

    /**
     * beforeDelete method to fire in the before delete context of a trigger.
     *
     * @param triggerData TriggerData the trigger data found in the Trigger class
     */
    public override void beforeDelete(TriggerData triggerData) {
        // Code goes here
    }

    /**
     * afterDelete method to fire in the after delete context of a trigger.
     *
     * @param triggerData TriggerData the trigger data found in the Trigger class
     */
    public override void afterDelete(TriggerData triggerData) {
        // Code goes here
    }

    /**
     * afterUndelete method to fire in the after undelete context of a trigger.
     *
     * @param triggerData TriggerData the trigger data found in the Trigger class
     */
    public override void afterUndelete(TriggerData triggerData) {
        // Code goes here
    }
}
```
You can check out more details on this Trigger Framework at my GitHub Repo.