Understanding the field usage of any object in Salesforce

Overview

One of the biggest problems I have faced while working with Salesforce is understanding and evaluating the field usage of a custom object. This application does the work for you, generating a CSV/Excel file with the created date of the last record that used each field and the percentage of use across all of them.

  • To make this app work, you will need System Administrator credentials to log into Salesforce
  • This app is currently developed with the Spyder IDE, which is part of Anaconda


Let's understand how it works!

Dependencies

First, we need our dependencies. We will use pandas, datetime, and Simple Salesforce.

from simple_salesforce import Salesforce
import pandas as pd
import datetime

Credentials

Next, we are going to connect to Salesforce with Simple Salesforce

  sf = Salesforce(password='password',
                  username='username',
                  organizationId='organizationId')

Your organizationId should look like this: 00JH0000000tYml.
To find it, just follow these steps (Lightning Experience):

  • Log into Salesforce with your System Administrator credentials
  • Click the gear icon
  • Click Setup (Setup for current app)
  • In the quick search bar (the one on the left) type Company Information
  • Click Company Information
  • Finally, look for Salesforce.com Organization ID. The ID will look like 00JH0000000tYml
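
Note: the organizationId login shown above relies on your org whitelisting your IP address. If that is not your setup, Simple Salesforce also accepts a security token instead; a minimal sketch, assuming you have reset your token from your personal settings:

  sf = Salesforce(username='username',
                  password='password',
                  security_token='security_token')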

The Object

Now you will need to plug in the object name. The object name is the API Name of the object; if it is a custom object, it will normally end with __c.
To find the API Name, just follow these instructions:

  • Log into Salesforce with your System Administrator credentials
  • Click the gear icon
  • Click Setup (Setup for current app)
  • Click Object Manager in the header of the page
  • Find your object by its name and copy the API Name, which is next to the name of the object

This part of the code is going to use the name of the object to bring back all of its fields:
  object_to_evaluate = "object"
  object_fields = getattr(sf, object_to_evaluate).describe()
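
describe() returns a dictionary whose 'fields' key holds one entry per field, each with metadata such as the API name and label (the same keys the loop below relies on). A quick, optional sanity check:

  print(len(object_fields['fields']))          # how many fields the object has
  for field in object_fields['fields'][:5]:    # peek at the first five
      print(field['name'], '-', field['label'])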

The Date

This part is important and will make you think. By default, the code brings data from the last year, so it is important to understand what happened during that period. If you released a new field a week ago, the report will show that it was used a couple of days ago, but the usage will be really low, around 2% (7/365). You can change the evaluation window by simply replacing the 365 with the number of days that you want.

last_year = (datetime.datetime.now() - datetime.timedelta(days=365)).strftime("%Y-%m-%dT%H:%M:%SZ")
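
For example, to evaluate only the last quarter (a hypothetical last_quarter variable, formatted as the SOQL datetime literal the queries below expect):

last_quarter = (datetime.datetime.now() - datetime.timedelta(days=90)).strftime("%Y-%m-%dT%H:%M:%SZ")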

The Result

Now we are going to iterate over all the fields and get the created date of the last record that used each field, as well as the number of records that used it during the period (one year).

# Define an empty list to collect the results for each field
data = []

for field in object_fields['fields']:
    print(field['name'])
    try:
        field_detail = pd.DataFrame(
            sf.query("SELECT Id, createddate, SystemModStamp \
                      FROM {} \
                      WHERE createddate > {} \
                        AND {} != null \
                      ORDER BY Id DESC \
                      LIMIT 1".format(object_to_evaluate, last_year, field['name'])
                      )['records'])

        field_detail['Field Name'] = field['name']
        field_detail['Field Label'] = field['label']
        field_detail['Found?'] = 'Yes'

        field_quantity = sf.query("SELECT count(Id) \
                                   FROM {} \
                                   WHERE createddate > {} \
                                   AND {} != null".format(object_to_evaluate, last_year, field['name'])
                                   )['records'][0]['expr0']

        field_detail['Quantity'] = field_quantity
        data.append(field_detail)

        if field_detail.empty:
            error_data = {'Field Name': [field['name']],
                          'Field Label': [field['label']],
                          'Found?': ['Yes, no data']}
            data.append(pd.DataFrame(error_data))
    except Exception:
        error_data = {'Field Name': [field['name']],
                      'Field Label': [field['label']],
                      'Found?': ['No']}
        data.append(pd.DataFrame(error_data))

# Concatenate the list of results into one dataframe
data_to_csv = pd.concat(data, ignore_index=True)
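
Before moving on to the formatting step, a quick way to sanity-check the result (a minimal sketch; the column names come from the queries above):

print(data_to_csv.head())
print(data_to_csv['Found?'].value_counts())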

Some Formatting

Formatting is a nice-to-have for understanding the result, especially if you are going to share the insights. We are going to rename some columns, format the date columns in a way that CSV/Excel can understand, and add a % of use column.

data_to_csv.rename(columns={'CreatedDate': 'Created Date', 'SystemModstamp': 'Modified Date'}, inplace=True)
data_to_csv['Created Date'] = pd.to_datetime(data_to_csv['Created Date']).dt.date
data_to_csv['Modified Date'] = pd.to_datetime(data_to_csv['Modified Date']).dt.date
data_to_csv = data_to_csv.drop('attributes', axis=1)
max_value = data_to_csv['Quantity'].max()
data_to_csv['% of use'] = data_to_csv['Quantity'] / max_value
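
Note that % of use is stored as a fraction of the busiest field's count, not of the total record count. If you would rather show a readable percentage number, one optional tweak (plain pandas, not part of the original report) is:

data_to_csv['% of use'] = (data_to_csv['% of use'] * 100).round(1)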

The Files

Finally, we are going to export the files to CSV and Excel, so you can choose whichever you prefer to use. The files will be stored in the same folder as the app; so, if you are running this app from your Desktop folder, the CSV and Excel files will be stored there too.

data_to_csv.to_csv('last Field Usage Date.csv')
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f")
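
By default, pandas also writes the dataframe index as an extra first column. If you do not want it in the report, both writers accept index=False (standard pandas behavior):

data_to_csv.to_csv('last Field Usage Date.csv', index=False)
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f", index=False)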

The final code will look like this:

from simple_salesforce import Salesforce
import pandas as pd
import datetime

# Connection to Salesforce
sf = Salesforce(password='password',
                username='username',
                organizationId='organizationId')


# Change the name to the object that you want to evaluate. If it is a custom object, remember to end it with __c
object_to_evaluate = "object"

# Get all the fields from the Object
object_fields = getattr(sf, object_to_evaluate).describe()

# Define an empty list to append the information
data = []

# Create a date variable to define from when we want to get the data
last_year = (datetime.datetime.now() - datetime.timedelta(days=365)).strftime("%Y-%m-%dT%H:%M:%SZ")

# Iterate over the fields and bring the last record created Date where the field wasn't empty
# If the record is not found, store it in the CSV/Excel file as not found
for field in object_fields['fields']:
    print(field['name'])
    try:
        field_detail = pd.DataFrame(
            sf.query("SELECT Id, createddate, SystemModStamp \
                      FROM {} \
                      WHERE createddate > {} \
                        AND {} != null \
                      ORDER BY Id DESC \
                      LIMIT 1".format(object_to_evaluate, last_year , field['name'])
                      )['records'])

        field_detail['Field Name'] = field['name']
        field_detail['Field Label'] = field['label']
        field_detail['Found?'] = 'Yes'

        field_quantity = sf.query("SELECT count(Id) \
                                   FROM {} \
                                   WHERE createddate > {} \
                                   AND {} != null".format(object_to_evaluate, last_year, field['name'])
                                   )['records'][0]['expr0']

        field_detail['Quantity'] = field_quantity
        data.append(field_detail)

        if field_detail.empty:
            error_data = {'Field Name': [field['name']],
                          'Field Label': [field['label']],
                          'Found?': ['Yes, no data']}
            data.append(pd.DataFrame(error_data))
    except Exception:
        error_data = {'Field Name': [field['name']],
                      'Field Label': [field['label']],
                      'Found?': ['No']}
        data.append(pd.DataFrame(error_data))

# Concatenate the list of results into one dataframe
data_to_csv = pd.concat(data, ignore_index=True)

# Format the CSV/Excel report
data_to_csv.rename(columns={'CreatedDate': 'Created Date', 'SystemModstamp': 'Modified Date'}, inplace=True)
data_to_csv['Created Date'] = pd.to_datetime(data_to_csv['Created Date']).dt.date
data_to_csv['Modified Date'] = pd.to_datetime(data_to_csv['Modified Date']).dt.date
data_to_csv = data_to_csv.drop('attributes', axis=1)
max_value = data_to_csv['Quantity'].max()
data_to_csv['% of use'] = data_to_csv['Quantity'] / max_value

# Export the data to a CSV/Excel file
data_to_csv.to_csv('last Field Usage Date.csv')
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f")

HOPE IT HELPS!

If you like it, remember to
Buy Me A Coffee
