Build an Account List Catalog
Use Case Example - Steps to build an Account List Catalog
This use case offers steps to build an Account List Catalog containing all configured lists, using the Claravine API. In this example, the output is a CSV file.
Claravine API endpoints used in this use case:
- "List Pick Lists": `GET /v1/lists/`
- "Get Pick List": `GET /v1/lists/{listId}`

The output of the Account List Catalog will be as follows:
List ID | List Name | List Column ID | List Column Name | List Value | List Created At | List Updated At |
---|---|---|---|---|---|---|
1500 | First List | 34086 | Paid Search Sources | | 2025-04-08T15:58:53+00:00 | 2025-04-08T15:58:53+00:00 |
1500 | First List | 34086 | Paid Search Sources | bing | 2025-04-08T15:58:53+00:00 | 2025-04-08T15:58:53+00:00 |
1500 | First List | 34086 | Paid Search Sources | yahoo | 2025-04-08T15:58:53+00:00 | 2025-04-08T15:58:53+00:00 |
1503 | Second List | 34094 | Region | APAC | 2025-04-08T16:00:04+00:00 | 2025-04-08T16:00:04+00:00 |
1503 | Second List | 34094 | Region | APAC | 2025-04-08T16:00:04+00:00 | 2025-04-08T16:00:04+00:00 |
1503 | Second List | 34094 | Region | EMEA | 2025-04-08T16:00:04+00:00 | 2025-04-08T16:00:04+00:00 |
1503 | Second List | 34095 | Country | Australia | 2025-04-08T16:00:04+00:00 | 2025-04-08T16:00:04+00:00 |
1503 | Second List | 34095 | Country | China | 2025-04-08T16:00:04+00:00 | 2025-04-08T16:00:04+00:00 |
1503 | Second List | 34095 | Country | England | 2025-04-08T16:00:04+00:00 | 2025-04-08T16:00:04+00:00 |
1503 | Second List | 34096 | Agency | Agency 1 | 2025-04-08T16:00:04+00:00 | 2025-04-08T16:00:04+00:00 |
1503 | Second List | 34096 | Agency | Agency 2 | 2025-04-08T16:00:04+00:00 | 2025-04-08T16:00:04+00:00 |
1503 | Second List | 34096 | Agency | Agency 3 | 2025-04-08T16:00:04+00:00 | 2025-04-08T16:00:04+00:00 |
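Each row above is flattened from a `GET /v1/lists/{listId}` JSON response. The payload shape below is inferred from the fields the script in this article reads (`headers`, `rows`, `row_data`, `updated_at`); the real API response may contain additional keys:

```python
# Hypothetical (inferred) shape of a GET /v1/lists/{listId} response;
# only the fields the script below actually reads are shown.
example_response = {
    "updated_at": "2025-04-08T16:00:04+00:00",
    "headers": [
        {"col_id": 34094, "name": "Region"},
        {"col_id": 34095, "name": "Country"},
    ],
    "rows": [
        # One row of the pick list: a cell per column, keyed by col_id
        {"row_data": [{"col_id": 34094, "value": "APAC"},
                      {"col_id": 34095, "value": "Australia"}]},
    ],
}
```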
A customer can follow these steps (script available below) and run them at a frequency based on business needs (e.g. daily, weekly, monthly):
- Use the "List Pick Lists" endpoint (`GET /v1/lists/`) to retrieve all lists.
- For each list returned above, use the "Get Pick List" endpoint (`GET /v1/lists/{listId}`) to get that list's information in JSON format.
- Combine the JSON responses from the lists above into a single CSV file. Your application could also store the list data in a database, for example.
- Upload the generated CSV file to cloud storage, anywhere your team can access it globally, or wherever your workflow/application can consume it.
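The flattening in steps 2-3 can be sketched as a small helper. The field names (`headers`, `rows`, `row_data`) are taken from the full script below; your actual response shape may differ:

```python
from collections import defaultdict

def flatten_list(json_list, list_id, list_name):
    """Flatten one pick-list JSON payload into catalog records."""
    # Group cell values by their column id
    by_col = defaultdict(list)
    for row in json_list["rows"]:
        for cell in row["row_data"]:
            by_col[cell["col_id"]].append(cell["value"])
    # Emit one record per (column, value) pair
    records = []
    for header in json_list["headers"]:
        for value in by_col[header["col_id"]]:
            records.append({
                "List ID": list_id,
                "List Name": list_name,
                "List Column ID": header["col_id"],
                "List Column Name": header["name"],
                "List Value": value,
            })
    return records

sample = {
    "headers": [{"col_id": 34094, "name": "Region"}],
    "rows": [
        {"row_data": [{"col_id": 34094, "value": "APAC"}]},
        {"row_data": [{"col_id": 34094, "value": "EMEA"}]},
    ],
}
print([r["List Value"] for r in flatten_list(sample, 1503, "Second List")])
# prints ['APAC', 'EMEA']
```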
Python script to build an Account List Catalog
- Note: Make sure you have the `requests` library installed (https://pypi.org/project/requests/).
- In order to run the script, you need to edit the first code lines and add values for:
  - `DOMAIN` (Account domain, e.g. `DOMAIN = "your_account.claravine.com"`)
  - `API_KEY` (Claravine Public API Key)
  - `API_SECRET` (Claravine Public API Secret)
- Then save the following code into a file (e.g. "script_list_catalog.py") and execute it:

python script_list_catalog.py
import os
import sys
import csv
import time
import requests
from datetime import datetime
from collections import defaultdict
'''TODO: Edit the following lines to use your values'''
DOMAIN = '' # Account domain (e.g. 'account1.claravine.com')
API_KEY = '' # Claravine API Key
API_SECRET = '' # Claravine API Secret
'''End of lines to be edited'''
# Set headers and URL
HEADERS = {
'accept': 'application/json',
'x-claravine-key': API_KEY,
'x-claravine-secret': API_SECRET
}
def convert_and_write_json_to_csv(json_list: dict, list_id: int, list_name: str, csv_writer):
    # TODO: Add or remove more metadata about lists
    metadata = {
        # Fall back to 'updated_at' if the payload does not include 'created_at'
        "List Created At": json_list.get('created_at', json_list['updated_at']),
        "List Updated At": json_list['updated_at']
    }
    # Group list values by columns
    column_groups = defaultdict(list)
    for row in json_list['rows']:
        for cell in row['row_data']:
            column_groups[cell['col_id']].append({
                "List Value": cell['value']
            })
    # Write records into CSV
    for header in json_list['headers']:
        col_id = header['col_id']
        for entry in column_groups[col_id]:
            record = {
                "List ID": list_id,
                "List Name": list_name,
                "List Column ID": col_id,
                "List Column Name": header['name'],
                **entry,
                **metadata
            }
            csv_writer.writerow(record)
if __name__ == '__main__':
    if DOMAIN == '' or API_KEY == '' or API_SECRET == '':
        error_msg = "[ERROR] Please edit the first lines of the script and provide values for: 'DOMAIN', 'API_KEY' and 'API_SECRET'."
        raise Exception(error_msg)

    # NOTE: (Optional) You can edit this line to specify the folder/name where you want the CSV to be created.
    # Otherwise, the CSV file will be created in: <current_dir>/list_dictionary/all_list_values-<timestamp>.csv
    output_file_path = None
    if not output_file_path:
        # Define output file path to store CSV with results
        timestamp = datetime.now().strftime("%m_%d_%Y-%Hh%M")
        output_dir = os.path.join(os.getcwd(), 'list_dictionary')
        os.makedirs(output_dir, exist_ok=True)  # Creates the directory path if needed
        output_file_path = os.path.join(output_dir, f"all_list_values-{timestamp}.csv")
        print(f"[WARNING] Using default filepath, the result List Catalog CSV file will be created at: {output_file_path}")

    # Base List API URL
    url_get_lists = f"https://{DOMAIN}/v1/lists/"

    # Call GET /v1/lists/ to fetch all lists
    response_lists = requests.get(url_get_lists, headers=HEADERS)
    if response_lists.status_code != 200:
        error_msg = f"[ERROR] An unexpected error happened during the GET {url_get_lists} request. Message: {response_lists.text}"
        raise RuntimeError(error_msg)

    # Open CSV file once and write headers
    with open(output_file_path, 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=[
            "List ID", "List Name", "List Column ID", "List Column Name",
            "List Value", "List Created At", "List Updated At"
        ])
        writer.writeheader()
        for pick_list in response_lists.json()['lists']:
            # Call GET /v1/lists/{listId} to fetch one list at a time and append it to the CSV file
            url_get_list_data = url_get_lists + str(pick_list['id'])
            list_name = pick_list['name']
            print(f"Fetching list values for list: {list_name}. Url: {url_get_list_data}.")
            response = requests.get(url_get_list_data, headers=HEADERS)
            if response.status_code == 429:
                # Too many requests in one minute: sleep for 1 minute and try again
                print("[Too many requests] Reached API rate limit, waiting before retrying request.")
                time.sleep(60)
                response = requests.get(url_get_list_data, headers=HEADERS)
            if response.status_code == 200:
                # Process list and write rows into the CSV file
                convert_and_write_json_to_csv(response.json(), pick_list['id'], list_name, writer)
            else:
                error_msg = f"[ERROR] An unexpected error happened during the GET {url_get_list_data} request. Message: {response.text}"
                error_msg += f"\nCould not retrieve data from list: '{list_name}', skipping it."
                print(error_msg)

    print("Finished processing all lists. List Catalog can be seen at: " + output_file_path)
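To run the export at the frequency your business needs (e.g. weekly, as mentioned above), a cron entry can be used. The schedule, interpreter path, script path, and log path below are examples only; adjust them to where you saved "script_list_catalog.py":

```shell
# Example crontab entry: run the catalog export every Monday at 06:00
# (paths are assumptions; edit them for your environment)
0 6 * * 1 /usr/bin/python3 /opt/claravine/script_list_catalog.py >> /var/log/list_catalog.log 2>&1
```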