Too Many DML Rows

Fixing the ‘Too Many DML Rows: 10001’ Error in Salesforce

When working with Salesforce triggers, one of the most common errors that frustrate developers is:
System.LimitException: Too many DML rows: 10001

We faced this exact issue while implementing a requirement on the Customer__c object.
The goal was to automatically update related Order__c and Order_Item__c records whenever certain fields on the Customer__c record changed — especially the Primary_Email__c and Main_Contact__c fields.

Everything worked fine during testing with small datasets. But once real data came in, Salesforce governor limits struck back hard.

The Problem: Governor Limit on DML Rows

Here’s what happened:
Each Customer update triggered logic that cascaded into updating thousands of related Order and Order-Item records.

Salesforce allows a maximum of 10,000 DML rows per transaction, so once we crossed that limit, the trigger crashed with:

System.LimitException: Too many DML rows: 10001

No matter how much we optimized the loops or reduced SOQL calls, this wasn’t something that could be fixed within the synchronous trigger context.
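For context, the failing synchronous version looked roughly like this (a simplified sketch, not the exact original code, using the same object and field names as the rest of this post):

// Simplified sketch of the original synchronous approach (illustration only).
// With thousands of related Order__c / Order_Item__c rows per customer,
// the two update statements below push one transaction past 10,000 DML rows.
public static void updateRelatedRecordsSynchronously(Set<Id> customerIds, Map<Id, Id> customerToContact) {
    List<Order__c> orders = [
        SELECT Id, Customer__c, Contact__c,
               (SELECT Id, Contact__c FROM Order_Items__r)
        FROM Order__c
        WHERE Customer__c IN :customerIds
    ];

    List<Order__c> ordersToUpdate = new List<Order__c>();
    List<Order_Item__c> itemsToUpdate = new List<Order_Item__c>();
    for (Order__c ord : orders) {
        ord.Contact__c = customerToContact.get(ord.Customer__c);
        ordersToUpdate.add(ord);
        for (Order_Item__c item : ord.Order_Items__r) {
            item.Contact__c = ord.Contact__c;
            itemsToUpdate.add(item);
        }
    }

    // Both updates count against the same 10,000-row DML limit as the trigger transaction.
    update ordersToUpdate;
    update itemsToUpdate;
}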

The Solution: Batch Chaining from Trigger Handler

To handle this, we introduced a batch process directly from the trigger handler.

Instead of trying to update all Order records in the same transaction, we moved that part into a Batch Apex job. The batch runs asynchronously, and every execute() chunk is its own transaction with a fresh set of governor limits, so the 10,000-row ceiling no longer applies to the operation as a whole.

Step 1: Customer Trigger

The trigger only calls the handler — no logic inside the trigger body itself.
The update logic runs in the after update context, whenever a Customer__c record is changed.

trigger CustomerTrigger on Customer__c (after insert, after update) {
    if (Trigger.isAfter && Trigger.isUpdate) {
        CustomerHandler.processUpdatedCustomers(Trigger.new, Trigger.oldMap);
    }
}

Step 2: Trigger Handler (Batch Invocation Logic)

The handler identifies Customers whose email or main contact changed, then hands the related Order__c and Order_Item__c updates off to a batch job.

Notice that the batch job is launched only when a relevant field has actually changed.

public class CustomerHandler {
    public static void processUpdatedCustomers(List<Customer__c> newList, Map<Id, Customer__c> oldMap) {
        List<Id> customerIdsToUpdate = new List<Id>();
        Map<Id, Id> customerToContactMap = new Map<Id, Id>();

        for (Customer__c cust : newList) {
            Customer__c oldCust = oldMap.get(cust.Id);
            // Only react when the fields we care about actually changed
            if (cust.Primary_Email__c != oldCust.Primary_Email__c ||
                cust.Main_Contact__c != oldCust.Main_Contact__c) {
                customerIdsToUpdate.add(cust.Id);
                // Assuming the related contact is linked elsewhere in the system
                customerToContactMap.put(cust.Id, cust.Main_Contact__c);
            }
        }

        if (!customerToContactMap.isEmpty()) {
            // Run batch asynchronously to handle large volume safely
            Database.executeBatch(new OrderUpdateBatch(customerIdsToUpdate, customerToContactMap), 100);
        }
    }
}

Step 3: Batch Class for Order Updates

The batch processes records in manageable chunks (a scope of 100 Orders per execute() call).
Each chunk runs in its own transaction, so even if there are 50,000 related Orders, Salesforce processes them safely without hitting governor limits.

global class OrderUpdateBatch implements Database.Batchable<sObject>, Database.Stateful {

    global List<Id> customerIds;
    global Map<Id, Id> customerToContact;

    global OrderUpdateBatch(List<Id> custIds, Map<Id, Id> custContactMap) {
        this.customerIds = custIds;
        this.customerToContact = custContactMap;
    }

    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([
            SELECT Id, Customer__c, Contact__c,
                   (SELECT Id, Contact__c FROM Order_Items__r)
            FROM Order__c
            WHERE Customer__c IN :customerIds
        ]);
    }

    global void execute(Database.BatchableContext bc, List<Order__c> orders) {
        List<Order__c> ordersToUpdate = new List<Order__c>();
        List<Order_Item__c> itemsToUpdate = new List<Order_Item__c>();

        for (Order__c ord : orders) {
            Id contactId = customerToContact.get(ord.Customer__c);
            // Skip orders whose customer has no new contact to copy over
            if (contactId == null) {
                continue;
            }

            if (ord.Contact__c != contactId) {
                ord.Contact__c = contactId;
                ordersToUpdate.add(ord);
            }

            for (Order_Item__c item : ord.Order_Items__r) {
                if (item.Contact__c != contactId) {
                    item.Contact__c = contactId;
                    itemsToUpdate.add(item);
                }
            }
        }

        // allOrNone = false: a failure on one record does not roll back the whole chunk
        if (!ordersToUpdate.isEmpty()) Database.update(ordersToUpdate, false);
        if (!itemsToUpdate.isEmpty()) Database.update(itemsToUpdate, false);
    }

    global void finish(Database.BatchableContext bc) {
        System.debug('Orders and Order Items updated successfully via Batch.');
    }
}

The above OrderUpdateBatch class safely updates all related Order__c and Order_Item__c records in smaller, manageable sets.

It first finds all orders linked to the updated customers, then checks if each order’s Contact__c matches the customer’s latest Main_Contact__c.

If not, it updates both the order and its related order items.
Because each 100-record chunk executes in its own transaction with a fresh set of governor limits, the job never approaches Salesforce's 10,000 DML row limit, and the data stays consistent.
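If you want to verify this while debugging, the Limits class reports DML consumption per transaction. A small optional addition inside execute() (not part of the class above) could look like this:

// Optional debug line inside execute(): each chunk is its own transaction,
// so the consumed DML row count resets for every 100-record scope.
System.debug('DML rows used in this chunk: ' + Limits.getDmlRows()
    + ' of ' + Limits.getLimitDmlRows());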

Why Do We Use a Constructor in This Batch Class?

You might have noticed the constructor inside the batch:

global OrderUpdateBatch(List<Id> custIds, Map<Id, Id> custContactMap) {
    this.customerIds = custIds;
    this.customerToContact = custContactMap;
}

So, why is this needed?

We call our batch from the handler like this:

Database.executeBatch(new OrderUpdateBatch(customerIdsToUpdate, customerToContactMap), 100);

The constructor’s job is to receive input data (like record IDs or parameter maps) and store them in instance variables that the batch can use later.

In our example:

  • custIds → the list of Customer IDs whose data changed.
  • custContactMap → a mapping of Customer ID to new Contact ID.

These values are passed once when the batch is created, and then used inside the start() and execute() methods to query and update related records.

Why not use static variables instead?

Because static variables do not persist across transactions, and Batch Apex runs asynchronously across several separate transactions: one for start(), one per execute() chunk, and one for finish().
Using a constructor ensures your input data stays available and reliable throughout the batch’s lifecycle.
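To make that concrete, here is a small illustration (not part of the solution above): the static field below is reset when the batch's asynchronous transactions start, while the instance field assigned in the constructor is serialized with the job and remains available.

global class StaticVsInstanceExample implements Database.Batchable<sObject> {

    // Static state set in the calling transaction is NOT carried over
    // into the asynchronous batch transactions; it re-initializes to empty.
    global static List<Id> staticCustomerIds = new List<Id>();

    // Instance state assigned in the constructor IS serialized with the job
    // and is available in start(), execute(), and finish().
    global List<Id> instanceCustomerIds;

    global StaticVsInstanceExample(List<Id> custIds) {
        this.instanceCustomerIds = custIds;
    }

    global Database.QueryLocator start(Database.BatchableContext bc) {
        System.debug('static:   ' + staticCustomerIds);   // empty here
        System.debug('instance: ' + instanceCustomerIds); // still populated here
        return Database.getQueryLocator([
            SELECT Id FROM Order__c WHERE Customer__c IN :instanceCustomerIds
        ]);
    }

    global void execute(Database.BatchableContext bc, List<Order__c> scope) {
        // No-op: this class only exists to illustrate state handling.
    }

    global void finish(Database.BatchableContext bc) {}
}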

In short:

The constructor in a Batch Apex class allows you to pass dynamic data safely from your trigger or handler into the batch, making it reusable, stable, and cleanly designed.

Conclusion

After implementing this Trigger-to-Batch Chaining pattern, the trigger no longer fails due to DML limits, and the updates run smoothly — even with thousands of Customer__c records and related Order__c / Order_Item__c records.

If you ever face a 'Too many DML rows' error in your Salesforce org, try this approach.
It’s scalable, governor-limit friendly, and follows Salesforce best practices.
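If you adopt this pattern, it is also worth covering the trigger-to-batch hand-off with a unit test. Here is a minimal sketch, assuming Main_Contact__c and Contact__c are lookups to the standard Contact object and that no other fields are required on these custom objects:

@IsTest
private class OrderUpdateBatchTest {
    @IsTest
    static void updatesOrderContactWhenCustomerEmailChanges() {
        Contact con = new Contact(LastName = 'Test');
        insert con;

        Customer__c cust = new Customer__c(
            Primary_Email__c = 'old@example.com',
            Main_Contact__c  = con.Id
        );
        insert cust;

        Order__c ord = new Order__c(Customer__c = cust.Id);
        insert ord;

        Test.startTest();
        cust.Primary_Email__c = 'new@example.com';
        update cust;       // fires CustomerTrigger, which enqueues the batch
        Test.stopTest();   // forces the batch to finish before assertions

        ord = [SELECT Contact__c FROM Order__c WHERE Id = :ord.Id];
        System.assertEquals(con.Id, ord.Contact__c,
            'Order Contact__c should be updated to the customer Main_Contact__c');
    }
}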

Bottom Line

Handling large data volumes in Salesforce triggers requires careful planning to avoid governor limits. By using a Trigger-to-Batch Chaining approach, we can safely update thousands of related records without ever hitting the 'Too Many DML Rows' error. This pattern keeps data consistent, scales with volume, and follows Salesforce best practices, making complex automation both reliable and efficient.

Platforms like Mytutorialrack, run by Salesforce trainer Deepika Khanna, offer structured courses, real-world projects, and mentorship to help professionals master Salesforce solutions, such as Salesforce Development, and stand out as experts making a difference in the changing CRM landscape.
