Mass deleting picklist values in Salesforce with AJAX javascript hack (2018 version)

More than five years ago I wrote an article on how to mass delete picklist values in Salesforce. It is still my most visited article, and I have been meaning to get back to it for years. It now looks like this will become part of the standard functionality sometime in the near future (mass delete picklist values (setup)), but today I had to do it at a customer, so I had to solve it once again.

I tried using my old script but ended up with errors: Lightning doesn't like loading external JavaScript into itself. I respect that and switched to Classic; this is a sysadmin-only exercise anyway.

Aura Content Security Policy directive

The old script works out of the box in Classic, but it's very quiet and you don't really know what's happening. Also, if you accidentally click the bookmark in a production environment you're going to have a bad time.

Fuck It, We’ll Do It Live

The updated JavaScript looks like this:

javascript:

var allClear = function() {
    location.reload();
};
var links = document.getElementsByTagName("a");
var whatToDelete = prompt("Do you want to delete 'Active' or 'Inactive' picklist values?", "Inactive");
if(!(whatToDelete === "Active" || whatToDelete === "Inactive")) {
    window.alert("Invalid choice, quitting");
} else {
    var onlyInactive = whatToDelete === "Inactive";

    var delLinks = new Array();
    for (var i = 0; i < links.length-1; i++) {
      var link = links[i];

      if(onlyInactive) {
        if(link.innerHTML === "Activate") {
            var link = links[i-1];
        } else {
            continue;
        }
      }
      if (link.innerHTML == "Del") {
        delLinks.push(link);
      }
    }

    if(delLinks.length == 0) {
        window.alert("Nothing to delete");
    } else {
        var goAhead = confirm("You're about to delete " + delLinks.length + " picklist values");
        if(goAhead) {
            for (var i = 0; i < delLinks.length; i++) {
              var delLink = delLinks[i].href;
              // Synchronous AJAX style
              var xmlhttp = new XMLHttpRequest();
              xmlhttp.open("GET", delLink, false);
              console.log("Deleting #" + i + ": " + delLink);
              xmlhttp.send();
            }

            window.setTimeout(allClear, 2000);
        }
    }
}

You can still load it from where I store it by creating a bookmark with this URL:

javascript:(function()%7Bvar s = document.createElement("script"); s.src = "https://superfredag.com/massdelete_v2.js"; void(document.body.appendChild(s));%7D)()

Clicking it on a Global Value Set Detail page will give you a prompt:

Clicking OK will go ahead and select the inactive picklist values and prompt again:

Clicking Cancel will abort at any point.
Clicking OK in this last confirmation dialogue will delete your inactive picklist values and the page will refresh.

If you want to delete Active picklist values you'll have to change the "Inactive" string to "Active" in the prompt after clicking the bookmark:

The same thing happens next: you're asked to confirm:

The picklist values are deleted and the page reloads. If you have a lot of values it might take some time, so opening the browser's developer console to follow the progress is not a bad idea:

Deleting #262: https://mydomain--sandboxname.csXX.my.salesforce.com/setup/ui/picklist_masterdelete.jsp?id=01J0E000006zu8g&tid=0Nt&pt=0Nt0E0000000RFx&retURL=%2F0Nt0E0000000RFx&deleteType=0&_CONFIRMATIONTOKEN=VmpFPSxNakF4T0Mwd09TMHhORlF4TXpvd016bzBPUzR4TmpWYSxEY29WbGQtQUZ2NFM1SFM0Y3ZvNUpmLFpXSXdPRFZq
Deleting #263: https://mydomain--sandboxname.csXX.my.salesforce.com/setup/ui/picklist_masterdelete.jsp?id=01J0E000006zu8h&tid=0Nt&pt=0Nt0E0000000RFx&retURL=%2F0Nt0E0000000RFx&deleteType=0&_CONFIRMATIONTOKEN=VmpFPSxNakF4T0Mwd09TMHhORlF4TXpvd016bzBPUzR4TmpWYSxkbHZYWnlodjdFT0xLb0lUT3FaNzJlLFpqUTNNV1Uw
Deleting #264: https://mydomain--sandboxname.csXX.my.salesforce.com/setup/ui/picklist_masterdelete.jsp?id=01J0E000006zu8i&tid=0Nt&pt=0Nt0E0000000RFx&retURL=%2F0Nt0E0000000RFx&deleteType=0&_CONFIRMATIONTOKEN=VmpFPSxNakF4T0Mwd09TMHhORlF4TXpvd016bzBPUzR4TmpWYSxmd0xkOFBUN1F0VHRkUmlVNmsxQUl0LFltVXlZVE16
Deleting #265: https://mydomain--sandboxname.csXX.my.salesforce.com/setup/ui/picklist_masterdelete.jsp?id=01J0E000006zu8j&tid=0Nt&pt=0Nt0E0000000RFx&retURL=%2F0Nt0E0000000RFx&deleteType=0&_CONFIRMATIONTOKEN=VmpFPSxNakF4T0Mwd09TMHhORlF4TXpvd016bzBPUzR4TmpaYSwxdnZiTndrNFFIN0R4UEU0SzhBZDM3LE1qSXpZalpq
Deleting #266: https://mydomain--sandboxname.csXX.my.salesforce.com/setup/ui/picklist_masterdelete.jsp?id=01J0E000006zu8k&tid=0Nt&pt=0Nt0E0000000RFx&retURL=%2F0Nt0E0000000RFx&deleteType=0&_CONFIRMATIONTOKEN=VmpFPSxNakF4T0Mwd09TMHhORlF4TXpvd016bzBPUzR4TmpaYSw5QmxpX1BueEF1SDdSUVRDUHhic2FsLE9XVXdNRFF3
Deleting #267: https://mydomain--sandboxname.csXX.my.salesforce.com/setup/ui/picklist_masterdelete.jsp?id=01J0E000006zu8l&tid=0Nt&pt=0Nt0E0000000RFx&retURL=%2F0Nt0E0000000RFx&deleteType=0&_CONFIRMATIONTOKEN=VmpFPSxNakF4T0Mwd09TMHhORlF4TXpvd016bzBPUzR4TmpaYSx2bjdkSUxRamljYUR4dTBlZzlSMmYyLFlqY3pNR016 

I deleted 350 picklist values in just over two minutes, so it will not take forever. The next step for this script would be to add a spinning progress bar and some bells and whistles, but for now it at least solves the problem.

Since you need to keep at least one value in a Value Set, the script will not be able to delete all of the Active picklist values, but at least you'll save some mouse clicks.

You'll get a warning in the browser that synchronous XMLHttpRequests are deprecated:

Running the requests asynchronously works too, but the browser gets swamped if you're deleting hundreds of picklist values, so the synchronous version is actually better here. When the day finally comes and support for synchronous XMLHttpRequests is removed I'll make sure to update this, but until then the hack is good enough.

Exporting Salesforce Files (aka ContentDocument)

Last week a client asked me to help out. We had built a system that creates PDF files in Salesforce using Drawloop (today known as Nintex Document Generation, which is a boring name).

Anyway, we had about 2,000 PDFs created in the system, and after looking into it there doesn't seem to be a way to download them in bulk. Sure, you can use the Data Loader and export them, but you'll get the file content Base64-encoded in a CSV column, and that doesn't really fly with most customers.
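
If you do end up going the Data Loader route, the VersionData column is just Base64-encoded file content, so turning it back into files is a few lines of Python. A rough sketch, assuming you exported Title, FileExtension and VersionData:

import base64
import csv

with open('ContentVersion_export.csv') as export:
    for row in csv.DictReader(export):
        # VersionData comes out of the export as a Base64 string
        file_name = '{0}.{1}'.format(row['Title'], row['FileExtension'])
        with open(file_name, 'wb') as out:
            out.write(base64.b64decode(row['VersionData']))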

I tried dataloader.io and Realfire, and searched through every link on Google (or at least the first two pages), but I didn't find a good way of doing it.

There seems to be an old AppExchange listing for FileExporter by Salesforce Labs, and I think this is the actual software FileExporter, but it stopped working with the TLS 1.0 deprecation.

Enough small talk. I had to solve the problem, so I went ahead and created a very simple Python script that lets you specify the query used to find your ContentVersion records, and also filter the ContentDocuments if you need to ignore some ids.

My very specific use case was to export all PDF files with a certain pattern in the filename, but only those related to a custom object record with a certain status. The problem is that you can't run certain queries, like this one:

SELECT ContentDocumentId, Title, VersionData, CreatedDate FROM ContentVersion WHERE ContentDocumentId IN (
SELECT ContentDocumentId FROM ContentDocumentLink where LinkedEntityId IN (SELECT Id FROM Custom_Object__c))

It gives you a:

Entity 'ContentDocumentLink' is not supported for semi join inner selects

I had to implement the option of a second query, which gives a list of valid ContentDocumentIds to include in the download.

The code is at https://github.com/snorf/salesforce-files-download, feel free to try it out and let me know if it works or doesn’t work out for you.
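
If you're curious what the approach boils down to, here's a minimal sketch of the same two-query idea using simple_salesforce and requests. This is not the actual script, just an illustration; the API version, queries and field choices are assumptions you'd adapt to your own org:

import os
import requests
from simple_salesforce import Salesforce

sf = Salesforce(username=os.environ['SF_USERNAME'],
                password=os.environ['SF_PASSWORD'],
                security_token=os.environ['SF_SECURITYTOKEN'])

# Query 1: the candidate files (no semi join on ContentDocumentLink allowed here)
versions = sf.query_all(
    "SELECT Id, ContentDocumentId, Title, FileExtension "
    "FROM ContentVersion WHERE IsLatest = true AND FileType = 'PDF'")

# Query 2: the ContentDocumentIds we actually want, applied as a filter in Python
links = sf.query_all(
    "SELECT ContentDocumentId FROM ContentDocumentLink "
    "WHERE LinkedEntityId IN (SELECT Id FROM Custom_Object__c)")
wanted = set(link['ContentDocumentId'] for link in links['records'])

for version in versions['records']:
    if version['ContentDocumentId'] not in wanted:
        continue
    # VersionData is a REST sub-resource that returns the raw file bytes
    url = 'https://{0}/services/data/v41.0/sobjects/ContentVersion/{1}/VersionData'.format(
        sf.sf_instance, version['Id'])
    response = requests.get(url, headers={'Authorization': 'Bearer ' + sf.session_id})
    with open('{0}.{1}'.format(version['Title'], version['FileExtension']), 'wb') as out:
        out.write(response.content)

The script in the repository does roughly this, with the queries and filtering configurable instead of hard-coded.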

One more thing: keep in mind that even if you're an administrator with View All, you will not see ContentDocuments that don't belong to you or aren't explicitly shared with you. You'll need to either change the ownership of the affected files or share them with the user running the Python script.

Ohana!

Talk to the fridge! (using Alexa, Salesforce and Electric Imp)

Long time no blog post, sorry. I have meant to write this post forever but I have managed to avoid it.

Anyways, consider the scenario when you sit in your couch and you wonder:
– “What’s the temperature in my fridge?”
– “Did I close the door?”
– “What’s the humidity?”

You have already installed your Electric Imp hardware in the fridge (Best Trailhead Badge Ever) and it's talking to Salesforce via Platform Events; you even get a case when the temperature or humidity reaches a threshold or the door is open for too long.

But what if you just want to know the temperature? And you don’t have time to log into Salesforce to find out.

Alexa Skills to the rescue!

Thanks to this awesome blog post:
https://andyinthecloud.com/2016/10/05/building-an-amazon-echo-skill-with-the-flow-api/

And this GitHub repository:
https://github.com/financialforcedev/alexa-salesforce-flow-skill

And example Flows from here:
https://github.com/financialforcedev/alexa-salesforce-flow-skill-examples

I’ll walk you through what’s needed to speak to your fridge.

I will only show the small pieces you need to set this up; for details, please read the original blog posts.

First of all you need an Alexa Skill; I have created one called Salesforce.

This is the interaction model:

{
  "intents": [
    {
      "intent": "FridgeStatus"
    }
  ]
}

And the Sample Utterances

FridgeStatus How is my fridge

I’ll not go into details about Lambda and the connected app needed, please refer to this documentation:
https://github.com/financialforcedev/alexa-salesforce-flow-skill/wiki/Setup-and-Configuration

The important thing here is FridgeStatus in the Sample Utterances: you'll need a Flow with the same name, FridgeStatus.

Here’s mine:

Going into details:

And creating the response:

The Value is:

Your fridge temperature is {!Temperature} degrees Celsius, the humidity is {!Humidity} percent, and the door is {!DoorStatus}

The result sounds like this:

So the next time you wonder about the temperature in the fridge you won’t have to move from the couch, awesome right?

The next step would be to ask Alexa "What's the average temperature during the last day?" and calculate the average from the Big Objects holding my temperature readings.

Cheers,
Johan

Visualise Big Object data in a Lightning Component

Good evening,

In my previous post (Upgrade your Electric Imp IoT Trailhead Project to use Big Objects) I showed how you can use Big Objects to archive data, and now I will show how you can visualise that data in a Lightning Component.

So now we have Big Object records being created, but the only way to see them is by executing a SOQL query in the Developer Console (SELECT DeviceId__c, Temperature__c, Humidity__c, ts__c FROM Fridge_Reading_History__b).

I have created a Lightning Component that uses an Apex Class to retrieve the data.

Let's start with a screenshot of how it looks and then post the wall of code.

And in Salesforce1

And here’s the code:
Lightning Component

The markup is essentially a thin shell: attributes for today, width, height and results, Chart.js loaded from a static resource, a canvas with id "temperature" for the chart, and a spinner that the helper shows and hides.

Controller

/**
 * Created by Johan Karlsteen on 2017-10-08.
 */
({
    doinit : function(component,event,helper){
        var today = new Date();
        component.set("v.today", today.toISOString());
        console.log(document.documentElement);
        component.set("v.width", document.documentElement.clientWidth);
        component.set("v.height", document.documentElement.clientHeight);
        helper.refreshData(component,event,helper);
    },
    refreshData : function(component,event,helper) {
        helper.refreshData(component,event,helper);
    }
})

Helper

/**
 * Created by Johan Karlsteen on 2017-10-08.
 */
({
        addData : function(chart, labels, data) {
            chart.data.labels = labels;
            chart.data.datasets[0] = data[0];
            chart.data.datasets[1] = data[1];
        },
        redrawData : function(component, event, helper, readings, chart, datasets) {
            helper.addData(chart, readings.ts, datasets);
            chart.update();
        },
        displayData : function(component, event, helper, readings) {
            var datasets = [readings.temperature, readings.humidity];
            var chart = window.myLine;
            if(chart != null) {
                helper.redrawData(component,event,helper,readings, chart, datasets);
                return;
            }
            var config = {
                type: 'line',
                data: {
                    labels: readings.ts,
                    datasets: [{
                                 label: 'Temperature',
                                 backgroundColor: 'red',
                                 borderColor: 'red',
                                 data: readings.temperature,
                                 yAxisID: "y-axis-1",
                                 fill: false,
                             },
                             {
                                 label: 'Humidity',
                                 backgroundColor: 'blue',
                                 borderColor: 'blue',
                                 data: readings.humidity,
                                 yAxisID: "y-axis-2",
                                 fill: false,
                             }]
                },
                options: {
                    maintainAspectRatio: true,
                    responsive: true,
                    title:{
                        display:false,
                        text:'Temperature'
                    },
                    tooltips: {
                        mode: 'index',
                        intersect: false,
                    },
                    hover: {
                        mode: 'nearest',
                        intersect: true
                    },
                    scales: {
                        yAxes: [{
                            type: "linear", // only linear but allow scale type registration. This allows extensions to exist solely for log scale for instance
                            display: true,
                            position: "left",
                            id: "y-axis-1",
                        }, {
                            type: "linear", // only linear but allow scale type registration. This allows extensions to exist solely for log scale for instance
                            display: true,
                            position: "right",
                            id: "y-axis-2",

                            // grid line settings
                            gridLines: {
                                drawOnChartArea: false, // only want the grid lines for one axis to show up
                            },
                        }],
                    }
                }
            };
            var ctx = document.getElementById("temperature").getContext("2d");
            window.myLine = new Chart(ctx, config);
        },
    refreshData : function(component,event,helper) {
        var spinner = component.find('spinner');
        $A.util.removeClass(spinner, "slds-hide");
        var action = component.get("c.getFridgeReadings");
        var endDate = component.get("v.today");
        var results = component.get("v.results");
        action.setParams({
        	deviceId : "2352fc042b6dc0ee",
        	results : results,
        	endDate : endDate
    	});
        action.setCallback(this, function(response){
            var state = response.getState();
            if (state === "SUCCESS") {
                var fridgereadings = JSON.parse(response.getReturnValue());
                helper.displayData(component,event,helper,fridgereadings);
            }
            var spinner = component.find('spinner');
            $A.util.addClass(spinner, "slds-hide");
        });
        $A.enqueueAction(action);
    }
})

And the Apex Class that fetches the data:

/**
 * Created by Johan Karlsteen on 2017-10-08.
 */

public with sharing class FridgeReadingHistoryController {

    public class FridgeReading {
        public String deviceId {get;set;}
        public List<String> ts {get;set;}
        public List<String> doorTs {get;set;}
        public List<Integer> door {get;set;}
        public List<Decimal> temperature {get;set;}
        public List<Decimal> humidity {get;set;}
        public FridgeReading(String deviceId) {
            this.deviceId = deviceId;
            this.ts = new List<String>();
            this.doorTs = new List<String>();
            this.door = new List<Integer>();
            this.temperature = new List<Decimal>();
            this.humidity = new List<Decimal>();
        }
        public void addReading(Fridge_Reading_History__b  fr) {
            addReading(fr.Temperature__c, fr.Humidity__c, fr.ts__c, fr.Door__c);
        }
        public void addReading(Decimal t, Decimal h, DateTime timeStamp, String d) {
            String tsString = timeStamp.format('HH:mm dd/MM');
            this.ts.add(tsString);
            temperature.add(t);
            humidity.add(h);
            Integer doorStatus = d == 'open' ? 1 : 0;
            if(door.size() == 0 || doorStatus != door.get(door.size()-1)) {
                door.add(doorStatus);
                doorTs.add(tsString);
            }
        }
    }

    @AuraEnabled
    public static String getFridgeReadings(String deviceId, Integer results, DateTime endDate) {
        if(results == null) {
            results = 200;
        }
        FridgeReading fr = new FridgeReading(deviceId);
        system.debug('RESULTS: ' +results);
        List<Fridge_Reading_History__b> frhs = [
                SELECT DeviceId__c, Temperature__c, Humidity__c, Door__c, ts__c
                FROM Fridge_Reading_History__b
                WHERE DeviceId__c = :deviceId AND ts__c < :endDate
                LIMIT :Integer.valueof(results)
        ];
        for (Integer i = frhs.size() - 1; i >= 0; i--) {
            Fridge_Reading_History__b frh = frhs[i];
            fr.addReading(frh);
        }
        return JSON.serialize(fr);
    }
}

The component assumes you have Chart.js as a static resource; mine is here.

There are no test cases anywhere and the code is probably not production grade.

The next step would be to use aggregate functions on the Big Objects to show data over a longer period of time.

Cheers,
Johan

Upgrade your Electric Imp IoT Trailhead Project to use Big Objects

I first heard about Big Objects in a webinar and at first I didn't really see a use case. It was also in beta, so I didn't care that much, but now that it has been released in Winter '18 everything has changed.

My favourite Trailhead badge is still the Electric Imp IoT one, and I thought it would be fun to store the temperature readings over a longer period of time. Since I run my integration in a Developer Edition I have 5 MB of data storage available, which is not much given that I receive between one and two Platform Events per minute.

Most records in Salesforce use 2 KB of storage each (details here), so with 5 MB I can store about 2,500 records (fewer, actually, since I have other records in the org).

Big Objects give you a limit of 1,000,000 records, which should be enough for about a year's worth of readings (one to two events per minute works out to roughly 500,000 to 1,000,000 per year). Big Objects are meant for archiving and you can't actually delete the records, so I have no idea what will happen when I hit the limit, but I'll write about it then.

Anyways, there are some limitations on Big Objects:
* You can’t create them from the Web Interface
* You can’t run Triggers/Workflows/Processes on them
* You can’t create a Tab for them

The only way to visualise them is to build a Visualforce Page or a Lightning Component and that’s exactly what I’m going to do in this blog post.

Archiving the data

Starting out, I'm creating the Big Object using the Metadata API. The object looks very similar to a standard object, and its definition is basically the same as a custom object of mine called Fridge_Reading_Daily_History__c. The reason I had to create that custom object is that I can't create Big Object records from a trigger, and I want to store every Platform Event.

Fridge_Reading_Daily_History__c has the same fields as my Platform Event (described here), and I'm going to create a Fridge_Reading_Daily_History__c record for every Platform Event received.

The Big Object definition looks like this:



    Deployed
    
        DeviceId__c
        false
        
        16
        true
        Text
        false
    
    
        Door__c
        false
        
        9
        true
        Text
        false
    
    
        Humidity__c
        false
        
        10
        true
        4
        Number
        false
    
    
        Temperature__c
        false
        
        10
        true
        5
        Number
        false
    
    
        ts__c
        false
        
        true
        DateTime
    
    
    Fridge Readings History

Keep in mind that after you have created the Big Object you can't modify much of it, so if you need to change something you have to remove it (which can be done from Setup) and deploy it again.

In my previous post I created a trigger that updated my SmartFridge__c record for every Platform Event. That works fine, but with Winter '18 you can have processes handle Platform Events, so I changed this. Basically you create a Process that listens for the Fridge_Reading__e event and finds the SmartFridge__c record with the same DeviceId__c.

This is what my process looks like:

I added criteria to check that no fields were null (I set them as required on my Fridge_Reading_Daily_History__c object)

Then I update my SmartFridge__c object

And create a new Fridge_Reading_Daily_History__c object

So far so good. Now I have to make sure I archive my Fridge_Reading_Daily_History__c records before I run out of space.

After trying different approaches (Scheduled Apex), I realised that I can't archive and delete the records in the same transaction (it's in the documentation for Big Objects), and I don't want one scheduled job that archives to the Big Object every hour and another Scheduled Apex job that deletes the Fridge_Reading_Daily_History__c records that have been archived.

In the end I settled on a Process on Fridge_Reading_Daily_History__c that runs when a record is created.

The process checks if the Name of the object (AutoNumber) is evenly divisible by 50

If so it calls an Invocable Apex function

And the Apex code looks like this:

/**
 * Created by Johan Karlsteen on 2017-10-08.
 */

public class PurgeDailyFridgeReadings {
    @InvocableMethod(label='Purge DTR' description='Purges Daily Temperature Readings')
    public static void purgeDailyTemperatureReadings(List<Id> items) {
        archiveTempReadings();
        deleteRecords();
    }

    @future(callout = true)
    public static void deleteRecords() {
        Datetime lastReading = [SELECT DeviceId__c, Temperature__c, ts__c FROM Fridge_Reading_History__b LIMIT 1].ts__c;
        for(List<Fridge_Reading_Daily_History__c> readings :
        [SELECT Id FROM Fridge_Reading_Daily_History__c WHERE ts__c < :lastReading]) {
            delete(readings);
        }
    }

    @future(callout = true)
    public static void archiveTempReadings() {
        Datetime lastReading = [SELECT DeviceId__c, Temperature__c, ts__c FROM Fridge_Reading_History__b LIMIT 1].ts__c;
        for(List<Fridge_Reading_Daily_History__c> toArchive : [SELECT Id,ts__c,DeviceId__c,Door__c,Temperature__c,Humidity__c
        FROM Fridge_Reading_Daily_History__c]) {
            List<Fridge_Reading_History__b> updates = new List<Fridge_Reading_History__b>();
            for (Fridge_Reading_Daily_History__c event : toArchive) {
                Fridge_Reading_History__b frh = new Fridge_Reading_History__b();
                frh.DeviceId__c = event.DeviceId__c;
                frh.Door__c = event.Door__c;
                frh.Humidity__c = event.Humidity__c;
                frh.Temperature__c = event.Temperature__c;
                frh.ts__c = event.ts__c;
                updates.add(frh);
            }
            Database.insertImmediate(updates);
        }
    }
}

This class calls the two future methods that archive and delete. Yes, they might not run in sequence, but it doesn't really matter. You might also wonder why there's (callout = true) on the future methods: I got a CalloutException when trying to run without it, so I guess the data is not stored inside Salesforce but rather in Heroku or something similar, and a callout is needed to get it (I got the error on the SELECT line).

Big Objects are probably implemented like External Objects, which would make sense.

The Visualisation is done in the next post:
Visualise Big Object data in a Lightning Component

Cheers,
Johan

Uploading CSV data to Einstein Analytics with AWS Lambda (Python)


I have been playing around with Einstein Analytics (the thing they used to call Wave) and I wanted to automate the upload of data, since there's no point in having dashboards and lenses if the data is stale.

After using Lambda functions against the Bulk API I wanted something similar here, and I found another nice project over at Heroku's GitHub account called pyAnalyticsCloud.

I don't have a Postgres database, so I ended up using only the uploader.py file and wrote this Lambda function to use it:

from __future__ import print_function

import json
from base64 import b64decode
import boto3
import uuid
import os
import logging
import unicodecsv
from uploader import AnalyticsCloudUploader

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3_client = boto3.client('s3')
username = os.environ['SF_USERNAME']
encrypted_password = os.environ['SF_PASSWORD']
encrypted_security_token = os.environ['SF_SECURITYTOKEN']
password = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_password))['Plaintext'].decode('ascii')
security_token = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_security_token))['Plaintext'].decode('ascii')
file_bucket = os.environ['FILE_BUCKET']
wsdl_file_key = os.environ['WSDL_FILE_KEY']
metadata_file_key = os.environ['METADATA_FILE_KEY']

def bulk_upload(csv_path, wsdl_file_path, metadata_file_path):
    with open(csv_path, mode='r') as csv_file:
        logger.info('Initiating Wave Data upload.')
        logger.debug('Loading metadata')
        metadata = json.loads(open(metadata_file_path, 'r').read())

        logger.debug('Loading CSV data')
        data = unicodecsv.reader(csv_file)
        edgemart = metadata['objects'][0]['name']

        logger.debug('Creating uploader')
        uploader = AnalyticsCloudUploader(metadata, data)
        logger.debug('Logging in to Wave')
        uploader.login(wsdl_file_path, username, password, security_token)
        logger.debug('Uploading data')
        uploader.upload(edgemart)
        logger.info('Wave Data uploaded.')
        return 'OK'

def handler(event, context):
    for record in event['Records']:
        # Incoming CSV file
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        csv_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
        s3_client.download_file(bucket, key, csv_path)

        # WSDL file
        wsdl_file_path = '/tmp/{}{}'.format(uuid.uuid4(), wsdl_file_key)
        s3_client.download_file(file_bucket, wsdl_file_key, wsdl_file_path)

        # Metadata file
        metadata_file_path = '/tmp/{}{}'.format(uuid.uuid4(), metadata_file_key)
        s3_client.download_file(file_bucket, metadata_file_key, metadata_file_path)
        return bulk_upload(csv_path, wsdl_file_path, metadata_file_path)

Yes, the logging is a bit on the extensive side. Make sure to add these environment variables in AWS Lambda:

SF_USERNAME - your SF username
SF_PASSWORD - your SF password (encrypted)
SF_SECURITYTOKEN - your SF security token (encrypted)
FILE_BUCKET - the bucket where the WSDL and metadata files are stored
METADATA_FILE_KEY - the path to the metadata file in that bucket (you get this from Einstein Analytics)
WSDL_FILE_KEY - the path to the partner WSDL file in the bucket

I added an S3 trigger that runs this function as soon as a new file is uploaded. It has some issues (crashing with parentheses in the file name, for example), so please don't use this for a production workload before making it enterprise grade.

Note: The code above only works in Python 2.7
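
I haven't tracked down exactly what breaks with those file names, but one likely suspect is that the object key in the S3 event record is URL-encoded (spaces arrive as + and special characters as percent escapes), so decoding the key before calling download_file is a cheap safeguard. A small helper in the same Python 2.7 style:

import urllib

def s3_object_key(record):
    # S3 event notifications URL-encode the object key
    return urllib.unquote_plus(record['s3']['object']['key'].encode('utf8'))

Then use s3_object_key(record) in the handler instead of reading the key straight out of the event.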

Cheers

Upgrade your Electric Imp IoT Trailhead Project to use Platform Events

As an avid trailblazer I just have to Catch ‘Em All (Trailblazer badges) and the project to integrate Electric Imp in my fridge was a fun one.

Build an IoT Integration with Electric Imp


After buying a USB cable to supply it with power it now runs 24/7, and I get cases all the time; I haven't really tweaked the setup yet.

I have been looking at the new Platform Events and thought that this integration can't keep using a simple upsert operation on an SObject, it's 2017 for God's sake! Said and done, I set out to change the agent code in the Trailhead project to publish a Platform Event every time it's time to send an update to Salesforce.

First of all you need to define your Platform Event; here is the XML representation of it:



    Deployed
    
        DeviceId__c
        false
        false
        false
        false
        
        16
        true
        Text
        false
    
    
        Door__c
        false
        false
        false
        false
        
        10
        false
        Text
        false
    
    
        Humidity__c
        false
        false
        false
        false
        
        6
        false
        2
        Number
        false
    
    
        Temperature__c
        false
        false
        false
        false
        
        6
        false
        2
        Number
        false
    
    
        ts__c
        false
        false
        false
        false
        
        false
        DateTime
    
    
    Fridge Readings

In short it’s just fields to hold the same values as on the SmartFridge__c object.

The updates to the agent code can be found on my GitHub account here.

When a Platform Event comes in, the SmartFridge__c record needs to be updated for everything to work as before; this is done with a trigger:

trigger FridgeReadingTrigger on Fridge_Reading__e (after insert) {
    List<SmartFridge__c> updates = new List<SmartFridge__c>();
    for (Fridge_Reading__e event : Trigger.New) {
        System.debug('Event DeviceId ' + event.DeviceId__c);
        SmartFridge__c sf = new SmartFridge__c(DeviceId__c = event.DeviceId__c);
        sf.Door__c = event.Door__c;
        sf.Humidity__c = event.Humidity__c;
        sf.Temperature__c = event.Temperature__c;
        sf.ts__c = event.ts__c;
        updates.add(sf);
    }
    upsert updates DeviceId__c;
}
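
If you want to test the trigger without waiting for the Imp to report, you can publish an event yourself: Platform Events accept an ordinary REST insert on their sobject endpoint. A quick sketch using simple_salesforce, where the values are made up and only the field names come from the event definition above:

import os
from datetime import datetime
from simple_salesforce import Salesforce

sf = Salesforce(username=os.environ['SF_USERNAME'],
                password=os.environ['SF_PASSWORD'],
                security_token=os.environ['SF_SECURITYTOKEN'])

# Creating a Fridge_Reading__e record publishes the event and fires the trigger
sf.Fridge_Reading__e.create({
    'DeviceId__c': '2352fc042b6dc0ee',
    'Door__c': 'closed',
    'Temperature__c': 4.5,
    'Humidity__c': 42.0,
    'ts__c': datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S.000Z')
})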

In Winter ’18 you can use process builder on Platform Events but my developer edition is not upgraded until next Saturday.

So I made things a bit more complex by introducing Platform Events and a trigger but I feel better knowing that I use more parts of the platform. Next step will be to use Big Objects to store the readings from the fridge over time and visualize them.

Cheers

Using AWS Lambda functions with the Salesforce Bulk API


One common task when integrating Salesforce with a customer's systems is to import data, either as a one-time task or regularly.

This can be done in several ways depending on the in-house technical level; the simplest might be to use the Import Wizard or the Data Loader. If you want to do it regularly in a batch fashion and are fortunate enough to have AWS infrastructure available, Lambda functions are an alternative.

Recently I did this as a prototype and I will share my findings here.

I will not go into details about AWS and Lambda. I used this tutorial to get started with Lambda functions, but most of it doesn't concern the Salesforce parts, rather AWS specifics like IAM.

I found this Heroku project for using the Bulk API.

The full Python code looks like this:

from __future__ import print_function
from base64 import b64decode
import boto3
import uuid
import csv
import os
from salesforce_bulk import SalesforceBulk, CsvDictsAdapter
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3_client = boto3.client('s3')
username = os.environ['SF_USERNAME']
encrypted_password = os.environ['SF_PASSWORD']
encrypted_security_token = os.environ['SF_SECURITYTOKEN']
password = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_password))['Plaintext'].decode('ascii')
security_token = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted_security_token))['Plaintext'].decode('ascii')
mapping_file_bucket = os.environ['MAPPING_FILE_BUCKET']
mapping_file_key = os.environ['MAPPING_FILE_KEY']

def bulk_upload(csv_path, mapping_file_path):
    with open(csv_path, mode='r') as infile:
        logger.info('Trying to login to SalesforceBulk')
        job = None
        try:
            bulk = SalesforceBulk(username=username, password=password, security_token=security_token)
            job = bulk.create_insert_job("Account", contentType='CSV')

            # Mapping file
            mapping_file = open(mapping_file_path, 'rb')
            bulk.post_mapping_file(job, mapping_file.read())

            accounts = csv.DictReader(infile)
            csv_iter = CsvDictsAdapter(iter(accounts))
            batch = bulk.post_batch(job, csv_iter)
            bulk.wait_for_batch(job, batch)
            bulk.close_job(job)
            logger.info('Done. Accounts uploaded.')
        except Exception as e:
            if job:
                bulk.abort_job(job)
            raise e
        return 'OK'

def handler(event, context):
    for record in event['Records']:
        # Incoming CSV file
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
        s3_client.download_file(bucket, key, download_path)

        # Mapping file
        mapping_file_path = '/tmp/{}{}'.format(uuid.uuid4(), mapping_file_key)
        s3_client.download_file(mapping_file_bucket, mapping_file_key, mapping_file_path)

        return bulk_upload(download_path, mapping_file_path)

Make sure to add the following environment variables in Lambda before executing (see the note after the list for producing the encrypted values):

SF_USERNAME - your SF username
SF_PASSWORD - your SF password (encrypted)
SF_SECURITYTOKEN - your SF security token (encrypted)
MAPPING_FILE_BUCKET - the bucket in where to find the mapping file
MAPPING_FILE_KEY - the path to the mapping file in that bucket
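
SF_PASSWORD and SF_SECURITYTOKEN are expected to be KMS-encrypted and Base64-encoded, which is what the b64decode/decrypt calls at the top of the function undo. One way to produce those values is a few lines of boto3; the key alias below is just a placeholder for whichever KMS key your Lambda role is allowed to decrypt with:

from base64 import b64encode
import boto3

kms = boto3.client('kms')
# Encrypt the secret with your KMS key and paste the printed value into the
# Lambda environment variable
ciphertext = kms.encrypt(KeyId='alias/lambda-secrets',
                         Plaintext='my-salesforce-password')['CiphertextBlob']
print(b64encode(ciphertext))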

I also added a method (in my own clone of the project, here) to be able to provide the mapping file as part of the payload; I'll make sure to create a pull request for this later.

The nice thing about using the Bulk API is that you get monitoring directly in Salesforce; just go to the Bulk Data Load Jobs page in Setup to see the status of your job(s).

I haven't added the S3 trigger that listens for new files yet, but it's the next part of the tutorial so it shouldn't be a problem.

Cheers,
Johan

Trailhead is awesome and gamification totally works!

About this time last year I decided to pursue a career within Salesforce. I was a bit tired of my job at the time and wanted a change, and it was either a backend engineer role at iZettle or becoming a Salesforce consultant. The consultant role was not new to me since that's how I started my career. After signing the contract I decided to look at Trailhead, since I had heard a lot of good things about it.

I took my first badge, Salesforce Platform Basics, at 2016-10-04 20:32 UTC and it was quite easy. My goal was to take a bunch of them before I started working on the 2nd of January 2017. That didn't happen, but I started doing them from my first day in the office.

Having worked with Salesforce since 5/11/2012 I thought I knew most of the platform, but that was far from the truth: Platform Cache, hierarchical Custom Settings, Shield, Communities, Live Agent, and so on. I haven't worked with much of it, but today I still know how the features work, and if a customer asks me about them I can at least give a brief explanation of most things.

Back to the gamification part: at EINS we set a goal for 2017 to hold at least 98% of all badges between us in the team. Using partners.salesforce.com to calculate this value is really hard, so I set out to build a dashboard for the team so that we could see our score.

After some iterations it looks OK; the Lightning Design System makes everything look great.
Trailhead Tracker

It looks awesome on your phone too

The dashboard has really helped, mostly because people in the team now see who took a badge over the weekend and when someone passes them in total number of badges. This has encouraged everyone to go that extra mile and take that extra badge.

Healthy competition is always good, and when you learn things that help you do a better job while at it, it's definitely a win-win!

Feel free to check out the dashboard at http://trailhead.eins.se/. If you click a person's name you can see which badges he or she is missing, which makes it easy when you're looking for a quick badge to take on the subway or the bus.
Trailhead Tracker User Page

Another addition was the #architectjourney pyramid at the bottom, since we’re scraping certifications too
Trailhead Tracker Architect Journey

The last thing we added to the dashboard was Slack notifications when someone completes a badge or gets a Salesforce certification. Of course the first version of this spammed all over our #trailhead channel, but that bug is long gone now.
Trailhead Tracker Slack

Exporting the data in CSV and importing it into Wave lets you gain some insights into when people take badges (and when they work)
Badges per Month

So in summary, Trailhead gamification totally works but you need a dashboard.

Cheers,
Johan

PS. My aim is to clean up the code and put it on GitHub when I have the time DS.

Using Jenkins and Git for Metadata backups and running Test Cases

One thing that makes Salesforce great is the possibility to use it and customize it quite far without having to invite developers. I can see the beauty of this since developers are expensive and slow; before you think about writing a comment on that last statement, keep in mind that I'm a developer myself. One thing that code developers (as opposed to click developers) bring to the table is source control. This is mostly because anyone who has ever written a piece of code bigger than "hello world" knows that it's super hard to get code right the first time, or the second time, and so on.

Coming from the coding part of the world, developers love source control, code reviews, continuous integration and so on. For a Salesforce-using company without an in-house development team, this is usually not something they ever think about.

Before I continue my rambling, my point is that Source Control is something anyone can benefit from, especially since the Metadata API in Salesforce makes it very easy to retrieve everything via the API.

I have created a small GitHub project here. If you have a Jenkins instance to spare you should be up and running within the hour, backing up your metadata and also running your test cases (if you have any Apex code) to make sure a declarative change didn't break them. Otherwise you'd find out the next time you try to deploy something in your very tight one-hour service window (yes, that will happen).

In short, clone the GitHub project and follow the README. If you don't have a Jenkins instance you can easily find one here; I had an old VMware server where I just deployed their VMware image.

Having regular Metadata backups is great, especially if you have more than 1 Salesforce Administrator. It might not help you in a proactive way but you’ll get a full audit trail on when a Profile was changed or when a column in a List View disappeared.

If you run into any problems, just let me know and I’ll be happy to assist you, no one should run their Salesforce instance without regular backups.

I used this GitHub project as a starting point; big thanks to @JitendraZaa for doing most of the heavy lifting.

Cheers,
Johan